Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the mechanism of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally to mildly sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were mildly sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were mildly sensitive to the corresponding outputs. Simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results have direct reference value for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results also show that the sensitivity analysis was practicable for parameter adjustment, demonstrate the model's adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
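The one-at-a-time perturbation approach described above is straightforward to reproduce. Below is a minimal sketch, not the AnnAGNPS code: each parameter is perturbed by a fixed fraction around its baseline and a dimensionless relative sensitivity index is computed from the change in a scalar output. The toy model and the parameter values are invented for illustration; only the parameter names (LS, CN, K) are taken from the abstract.

```python
import numpy as np

def toy_model(params):
    """Stand-in for an AnnAGNPS run; returns a scalar output (e.g., runoff)."""
    ls, cn, k = params["LS"], params["CN"], params["K"]
    return 0.8 * ls ** 1.4 + 0.02 * cn ** 2 + 5.0 * k

baseline = {"LS": 2.0, "CN": 75.0, "K": 0.3}
y0 = toy_model(baseline)

# Relative sensitivity index: (dY/Y0) / (dX/X0) for a +10% perturbation.
delta = 0.10
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] = baseline[name] * (1 + delta)
    s = ((toy_model(perturbed) - y0) / y0) / delta
    print(f"{name}: relative sensitivity = {s:+.3f}")
```

A value of |s| near zero flags an insensitive parameter that can be left at its default during calibration.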
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
Global Sensitivity Analysis (GSA) helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
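Both methods compared in this study are available in the open-source SALib package. The sketch below runs Sobol' and FAST side by side on a cheap analytic surrogate; the parameter names are SAC-SMA-like labels chosen for illustration, and the surrogate function is an assumption, not the SAC-SMA model.

```python
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

problem = {
    "num_vars": 3,
    "names": ["UZTWM", "UZFWM", "LZTWM"],   # illustrative SAC-SMA-like names
    "bounds": [[10, 300], [5, 150], [10, 500]],
}

def model(x):
    # Toy surrogate for a hydrologic response; not SAC-SMA itself.
    return 0.5 * x[0] + 0.1 * x[1] ** 1.5 + 0.02 * x[0] * x[2] ** 0.5

X = saltelli.sample(problem, 1024)          # Sobol' sampling design
Y = np.apply_along_axis(model, 1, X)
S_sobol = sobol.analyze(problem, Y)

Xf = fast_sampler.sample(problem, 1025)     # FAST sampling design
Yf = np.apply_along_axis(model, 1, Xf)
S_fast = fast.analyze(problem, Yf)

print("Sobol' total-order:", dict(zip(problem["names"], S_sobol["ST"].round(3))))
print("FAST total-order:  ", dict(zip(problem["names"], S_fast["ST"].round(3))))
```

Agreement between the two "ST" rankings is exactly the kind of cross-method coherence check the abstract describes.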
DOT National Transportation Integrated Search
2012-01-01
OVERVIEW OF PRESENTATION: Evaluation Parameters; EPA's Sensitivity Analysis; Comparison to Baseline Case; MOVES Sensitivity Run Specification; MOVES Sensitivity Input Parameters; Results; Uses of Study.
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
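The validation step described here, recomputing the optimal law for a discrete parameter variation, can be illustrated on a small LQR problem. The double-integrator plant with uncertain mass below is an invented stand-in for the aeroservoelastic model; the point is the mechanics of differencing the recomputed gain, which is what the paper compares its analytical sensitivities against.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Optimal state-feedback gain K from the continuous-time Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def plant(mass):
    # Double integrator; mass plays the role of a 'fixed problem parameter'.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0 / mass]])
    return A, B

Q, R = np.eye(2), np.eye(1)
m0, dm = 1.0, 1e-6

K0 = lqr_gain(*plant(m0), Q, R)
K1 = lqr_gain(*plant(m0 + dm), Q, R)     # re-solve for the perturbed parameter
dK_dm = (K1 - K0) / dm                   # finite-difference gain sensitivity
print("K(m0) =", K0)
print("dK/dm =", dK_dm)
```

An analytical sensitivity expression, as in the paper, would deliver dK/dm from one Riccati solve instead of one per parameter variation.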
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
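One way to picture this style of analysis: seed a fault at the location of interest and estimate, under random inputs, how often the fault propagates to a changed output. The tiny program and seeded fault below are invented for demonstration and are only a loose sketch of the idea, not the authors' algorithm.

```python
import random

def original(x):
    return min(x, 100)          # location under analysis

def mutant(x):
    return min(x, 105)          # seeded fault at that location

def estimate_sensitivity(trials=100_000, lo=0, hi=200):
    """Fraction of random inputs on which the fault changes the output."""
    changed = sum(
        original(x) != mutant(x)
        for x in (random.randint(lo, hi) for _ in range(trials))
    )
    return changed / trials

# A value near zero flags an 'insensitive' location: random black-box
# testing is unlikely to expose a fault hiding there.
print(f"estimated sensitivity: {estimate_sensitivity():.3f}")
```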
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. The aim was to investigate allergen sensitization characteristics according to sex. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method for screening out sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through comparative analysis of field measurements and simulation results, we tested the BIOME-BGC model's capability to simulate the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters with strong influence on NPP. On this basis, we quantitatively estimated the sensitivity of the screened parameters and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model simulated the NPP of the L. olgensis forest in the sample plot well. The Morris method provided a reliable parameter sensitivity ranking under a relatively small sample size, while the EFAST method quantitatively measured both the impact of a single parameter on the simulation result and the interactions between parameters in the BIOME-BGC model. The most influential parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
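The Morris screening step used here is also available in SALib. The sketch below applies it to a toy NPP surrogate; the parameter names echo the two influential BIOME-BGC parameters named in the abstract, but the bounds and the surrogate function are assumptions for demonstration only.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["stem_leaf_alloc", "leaf_CN", "froot_CN"],  # illustrative
    "bounds": [[0.5, 2.5], [20, 60], [30, 80]],
}

def npp_surrogate(x):
    # Toy stand-in for a BIOME-BGC NPP simulation.
    return 100.0 / x[1] * np.exp(-0.3 * x[0]) + 0.1 * x[2]

X = morris_sample(problem, N=200, num_levels=4)   # elementary-effects design
Y = np.apply_along_axis(npp_surrogate, 1, X)
res = morris.analyze(problem, X, Y, num_levels=4)

# mu* ranks overall influence; a large sigma signals interactions
# or nonlinearity, which is what EFAST would then quantify.
for name, mu, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
    print(f"{name:18s} mu* = {mu:8.3f}  sigma = {sigma:8.3f}")
```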
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation takes the simple form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
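The two steps named in the abstract, Newton-Raphson on the residual R(x; p) = A(x; p)x - c = 0 and direct differentiation dx/dp = -(dR/dx)^(-1) dR/dp at the converged state, can be shown compactly. The 2x2 system below is a made-up stand-in for the finite element equations; Jacobians are formed by finite differences purely to keep the sketch short.

```python
import numpy as np

def A(x, p):
    # State- and parameter-dependent 'stiffness' matrix (toy example).
    return np.array([[4.0 + p * x[0], 1.0],
                     [1.0, 3.0 + x[1] ** 2]])

c = np.array([1.0, 2.0])

def residual(x, p):
    return A(x, p) @ x - c

def jac(f, x, p, eps=1e-7):
    """Numerical Jacobian of f(x, p) with respect to x."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        J[:, i] = (f(x + dx, p) - f(x - dx, p)) / (2 * eps)
    return J

p = 0.5
x = np.zeros(2)
for _ in range(20):                      # Newton-Raphson iteration
    step = np.linalg.solve(jac(residual, x, p), -residual(x, p))
    x += step
    if np.linalg.norm(step) < 1e-12:
        break

# Direct differentiation: dR/dp at the converged x, then one linear solve.
eps = 1e-7
dR_dp = (residual(x, p + eps) - residual(x, p - eps)) / (2 * eps)
dx_dp = np.linalg.solve(jac(residual, x, p), -dR_dp)
print("state x =", x, " sensitivity dx/dp =", dx_dp)
```

The sensitivity solve reuses the converged Jacobian, which is why so little extra coding (and computation) is needed on top of the analysis code.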
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
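The prerequisite for a FASTC-style analysis is the ability to draw input samples with prescribed marginals and a prescribed correlation structure. Iman's transform does this by rearranging ranks; a simplified Gaussian-copula version of the same idea is sketched below. The two marginals, their parameters, and the 0.7 target correlation are invented for illustration, not taken from the JCA study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target correlation between two JCA-like inputs (value is illustrative).
C = np.array([[1.0, 0.7],
              [0.7, 1.0]])

# Step 1: correlated standard normals; step 2: map to uniforms through the
# normal CDF; step 3: push through each variable's marginal inverse CDF.
z = rng.multivariate_normal(np.zeros(2), C, size=10_000)
u = stats.norm.cdf(z)
porosity = stats.uniform(0.90, 0.09).ppf(u[:, 0])         # U(0.90, 0.99)
tortuosity = stats.lognorm(0.2, scale=1.5).ppf(u[:, 1])   # lognormal marginal

r, _ = stats.spearmanr(porosity, tortuosity)
print(f"induced rank correlation: {r:.3f}")   # close to the 0.7 target
```

The resulting sample can then feed any sampling-based sensitivity method that must respect input correlation.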
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
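The bootstrap procedure the authors describe can be reproduced with a few lines of standard numerical software. The sketch below resamples synthetic patient-level cost and effect data and summarizes the resulting distribution of incremental cost-effectiveness ratios; all the numbers are invented and do not come from the H. pylori model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic patient-level (cost, QALY) data for two strategies.
cost_a, eff_a = rng.normal(1200, 300, 500), rng.normal(0.70, 0.10, 500)
cost_b, eff_b = rng.normal(1800, 400, 500), rng.normal(0.74, 0.10, 500)

def icer(idx_a, idx_b):
    """Incremental cost-effectiveness ratio on one bootstrap resample."""
    dc = cost_b[idx_b].mean() - cost_a[idx_a].mean()
    de = eff_b[idx_b].mean() - eff_a[idx_a].mean()
    return dc / de

n = 500
icers = np.array([
    icer(rng.integers(0, n, n), rng.integers(0, n, n))
    for _ in range(5000)
])

threshold = 50_000  # $/QALY, a commonly cited willingness-to-pay level
print(f"median ICER: {np.median(icers):,.0f}")
print(f"P(ICER < {threshold:,}): {(icers < threshold).mean():.2f}")
```

Resampling the observed data directly, rather than assuming theoretical parameter distributions, is exactly the advantage of the bootstrap noted in the abstract.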
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain preliminary influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of efficiency criteria did not necessarily indicate excellent performance on the hydrological signatures. For most samples from the Sobol' sensitivity analysis, water yield was simulated very well; however, minimum and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated perfectly, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of quantitative analysis by AES, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger sensitivity factors so that the two techniques' quantitative results agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in integral-spectrum AES analysis, since choosing the starting and ending points when determining the characteristic Auger peak intensity area involves great uncertainty. To simplify the analysis, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy on samples with different composition ratios. The analytical error of AES quantitative analysis was thereby reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are taken into account. The good consistency obtained proves the feasibility of this method.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results are presented for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
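The "complex-variable approach" used here for the DYMORE structural sensitivities is the complex-step derivative: perturb the input along the imaginary axis and read the derivative off the imaginary part of the output, with no subtractive cancellation. A one-variable sketch on a classic test function (not the rotorcraft code) illustrates why it is used for verification:

```python
import numpy as np

def f(x):
    # Squire & Trapp's classic complex-step test function.
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5
h = 1e-20   # step can be tiny: no subtraction, so no cancellation error

d_complex = np.imag(f(x0 + 1j * h)) / h             # complex-step derivative
d_central = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6    # finite difference

print(f"complex-step:       {d_complex:.12f}")
print(f"central difference: {d_central:.12f}")
```

The complex-step result is accurate to machine precision, which is what makes it a trustworthy reference when checking adjoint-based sensitivities, as done for the FUN3D/DYMORE interfaces.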
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed by using the operation conditions as inputs. Results based on the highest numerical quantities of the correlation coefficients between the experimental versus predicted values, showed proper fitting. Sensitivity analysis results of selected ANNs showed that among the input variables, frying temperature was the most sensitive to moisture content (MC) and fat content (FC) compared to other variables. Sensitivity analysis results of selected ANNs showed that MC and FC were the most sensitive to frying temperature compared to other input variables. Similarly, for the second ANN architecture, microwave power density was the most impressive variable having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fujarewicz, Krzysztof; Lakomiec, Krzysztof
2016-12-01
We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
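The efficiency argument for the adjoint method over forward finite differences can be seen on a generic discrete system, far simpler than the PIRT scheme. For A(p)x = b with objective J = c·x, one adjoint solve gives the gradient with respect to all parameters at once, whereas forward differencing needs one extra solve per parameter. The system below is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 10                       # state size, number of parameters

A0 = np.eye(n) * 4 + rng.normal(0, 0.1, (n, n))
dA = [rng.normal(0, 0.05, (n, n)) for _ in range(m)]   # dA/dp_k
b = rng.normal(0, 1, n)
c = rng.normal(0, 1, n)             # objective J = c @ x

def A(p):
    return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

p = np.zeros(m)
x = np.linalg.solve(A(p), b)

# Adjoint: ONE extra linear solve gives dJ/dp for ALL m parameters,
# via A^T lam = c and dJ/dp_k = -lam @ (dA_k @ x).
lam = np.linalg.solve(A(p).T, c)
grad_adjoint = np.array([-lam @ (dAk @ x) for dAk in dA])

# Forward finite differences: one extra solve PER parameter.
eps = 1e-7
grad_fd = np.array([
    (c @ np.linalg.solve(A(p + eps * np.eye(m)[k]), b) - c @ x) / eps
    for k in range(m)
])
print("max |adjoint - FD|:", np.abs(grad_adjoint - grad_fd).max())
```

For a spatiotemporal dose, "parameters" number in the thousands (one per voxel and time step), which is why the adjoint route is the practical one for gradient-based protocol optimization.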
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. An LU factorization technique for calculating the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed using the direct differentiation sensitivity analysis method.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The Systems Biology Markup Language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
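Among the global methods listed, the partial rank correlation coefficient (PRCC) is simple enough to sketch directly: rank-transform inputs and output, regress out the other parameters from both the parameter of interest and the output, and correlate the residuals. The implementation below is a minimal numpy/scipy version, not SBML-SAT's code, with an invented three-parameter test model.

```python
import numpy as np
from scipy import stats

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    n, k = X.shape
    R = np.column_stack([stats.rankdata(X[:, j]) for j in range(k)])
    ry = stats.rankdata(y)
    out = np.zeros(k)
    for j in range(k):
        # Regress the j-th parameter's ranks and the output ranks on the
        # ranks of all OTHER parameters, then correlate the residuals.
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        beta_x, *_ = np.linalg.lstsq(others, R[:, j], rcond=None)
        beta_y, *_ = np.linalg.lstsq(others, ry, rcond=None)
        res_x = R[:, j] - others @ beta_x
        res_y = ry - others @ beta_y
        out[j] = stats.pearsonr(res_x, res_y)[0]
    return out

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (1000, 3))                      # k1, k2, k3
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 1000)
print(prcc(X, y).round(3))   # third parameter should come out near zero
```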
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2017-08-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response-surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stable and unstable range of each parameter is identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
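The single-parameter sensitivity curve and stable-range idea can be reproduced from any fitted response surface: differentiate the fitted polynomial and mark where the derivative magnitude is small. The measurements below are invented, one-dimensional stand-ins for the paper's multi-factor design, and the 0.1 threshold is an arbitrary illustrative choice.

```python
import numpy as np

# Synthetic interlaminar-shear-strength measurements vs temperature (made up).
temp = np.array([60, 80, 100, 120, 140, 160, 180], dtype=float)
ilss = np.array([38, 46, 52, 55, 54, 49, 41], dtype=float)

coeffs = np.polyfit(temp, ilss, deg=2)     # quadratic response surface
dcoeffs = np.polyder(coeffs)               # sensitivity curve d(ILSS)/dT

grid = np.linspace(temp.min(), temp.max(), 200)
sens = np.polyval(dcoeffs, grid)

# 'Stable' operating range: where the response is least sensitive.
stable = grid[np.abs(sens) < 0.1]
print(f"low-sensitivity temperature range: "
      f"[{stable.min():.0f} C, {stable.max():.0f} C]")
```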
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundations of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only; that is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
Integrating heterogeneous drug sensitivity data from cancer pharmacogenomic studies.
Pozdeyev, Nikita; Yoo, Minjae; Mackie, Ryan; Schweppe, Rebecca E; Tan, Aik Choon; Haugen, Bryan R
2016-08-09
The consistency of in vitro drug sensitivity data is of key importance for cancer pharmacogenomics. Previous attempts to correlate drug sensitivities from the large pharmacogenomics databases, such as the Cancer Cell Line Encyclopedia (CCLE) and the Genomics of Drug Sensitivity in Cancer (GDSC), have produced discordant results. We developed a new drug sensitivity metric, the area under the dose response curve adjusted for the range of tested drug concentrations, which allows integration of heterogeneous drug sensitivity data from the CCLE, the GDSC, and the Cancer Therapeutics Response Portal (CTRP). We show that there is moderate to good agreement of drug sensitivity data for many targeted therapies, particularly kinase inhibitors. The results of this largest cancer cell line drug sensitivity data analysis to date are accessible through the online portal, which serves as a platform for high power pharmacogenomics analysis.
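One plausible construction of the range-adjusted metric described here is the area under the dose-response curve normalized by the tested log-concentration window, so that assays spanning different concentration ranges become comparable. The sketch below shows this construction on made-up data; the exact normalization used by the authors may differ.

```python
import numpy as np

def range_adjusted_auc(conc, viability):
    """Area under the dose-response curve, normalized by the tested
    log-concentration range (one plausible construction)."""
    logc = np.log10(conc)
    return np.trapz(viability, logc) / (logc[-1] - logc[0])

# Two assays testing the same drug over different windows (invented data).
conc_a = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])   # wide, CCLE-like window
via_a = np.array([0.95, 0.80, 0.45, 0.20, 0.10])
conc_b = np.array([1e-2, 1e-1, 1.0])               # narrower, GDSC-like window
via_b = np.array([0.82, 0.47, 0.22])

print(round(range_adjusted_auc(conc_a, via_a), 3))
print(round(range_adjusted_auc(conc_b, via_b), 3))
```

Because each value is an average response per decade of tested concentration, the two numbers can be compared despite the different windows, which is the property that allows integration across CCLE, GDSC, and CTRP.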
NASA Astrophysics Data System (ADS)
Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.
2018-04-01
Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, information on the environmental resources of a tourist region can be obtained from remote sensing imagery; combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, an eco-sensitivity evaluation index was constructed based on AHP and overlay analysis from eight evaluation factors: human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index. According to the value of the eco-sensitivity evaluation index, using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2, accounting for about 33.12 %; the sensitive area 5130.0522 km2, about 37.12 %; the sub-sensitive area 3729.9312 km2, about 26.99 %; and the insensitive area 382.4399 km2, about 2.77 %. Spatial differences in ecological sensitivity were also found across the Chaohu Lake basin: the most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed on gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and regional planning, providing a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
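The core computation, an AHP-weighted overlay followed by equal-interval reclassification into four grades, reduces to a few raster operations. The sketch below uses random layers on a toy grid; the four layers and their weights are invented placeholders for the study's eight factors and AHP weights.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (200, 200)                       # toy raster grid

# Factor layers scaled to [0, 1] (stand-ins for the evaluation factors;
# the AHP weights below are invented for illustration).
layers = {
    "disturbance": rng.random(shape),
    "vegetation": rng.random(shape),
    "slope": rng.random(shape),
    "ndwi": rng.random(shape),
}
weights = {"disturbance": 0.35, "vegetation": 0.30, "slope": 0.20, "ndwi": 0.15}

index = sum(weights[k] * layers[k] for k in layers)   # weighted overlay

# Equal-interval reclassification into four sensitivity grades.
lo, hi = index.min(), index.max()
edges = np.linspace(lo, hi, 5)[1:-1]
grade = np.digitize(index, edges)   # 0..3: insensitive .. very sensitive

for g, name in enumerate(["insensitive", "sub-sensitive",
                          "sensitive", "very sensitive"]):
    print(f"{name:15s} {(grade == g).mean() * 100:5.1f} % of area")
```

Replacing the random arrays with classified imagery and terrain rasters gives the per-grade area tallies reported in the abstract.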
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
Pain sensitivity profiles in patients with advanced knee osteoarthritis
Frey-Law, Laura A.; Bohr, Nicole L.; Sluka, Kathleen A.; Herr, Keela; Clark, Charles R.; Noiseux, Nicolas O.; Callaghan, John J; Zimmerman, M Bridget; Rakel, Barbara A.
2016-01-01
The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision-making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined whether these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed prior to total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals that exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in four pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in five pain sensitivity profiles: a "low pressure pain" group, an "average pain" group, and three "high pain" sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in OA grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Further, these profiles are surprisingly similar to those reported in healthy populations, suggesting that individual differences in pain sensitivity are a robust finding even in an older population with significant disease. PMID:27152688
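The two-stage pipeline here, dimension reduction of the QST battery followed by clustering of component scores, maps directly onto standard tools. The sketch below uses PCA as a stand-in for the paper's component analysis and k-means for the clustering; the random matrix replaces the actual 218 x 19 QST data, and the component/cluster counts are taken from the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
qst = rng.normal(0, 1, (218, 19))        # placeholder for 19 QST measures

z = StandardScaler().fit_transform(qst)  # standardize each measure

pca = PCA(n_components=4)                # four pain-sensitivity components
scores = pca.fit_transform(z)

# Five pain-sensitivity profiles from the component scores.
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

print("explained variance:", pca.explained_variance_ratio_.round(2))
print("profile sizes:", np.bincount(profiles))
```

With real data, each cluster's mean component scores would be inspected to label profiles such as "high heat" or "low pressure pain".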
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
Carmichael, Marc G; Liu, Dikai
2015-01-01
Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using proximal-point mapping technique of P-η-accretive mapping and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving P-η-accretive mapping in real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis are derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
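For contrast with the repeated-eigenvalue case that motivates the paper's reparameterization, the simple (non-repeated) case has a well-known first-order formula: for a symmetric matrix, the derivative of an eigenvalue along a perturbation direction is the Rayleigh-quotient-like term v'(dA)v. A quick numerical check on a random matrix (invented data, distinct eigenvalues almost surely):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
A = rng.normal(0, 1, (n, n)); A = (A + A.T) / 2      # symmetric matrix
dA = rng.normal(0, 1, (n, n)); dA = (dA + dA.T) / 2  # perturbation direction

w, V = np.linalg.eigh(A)

# First-order sensitivity of a simple eigenvalue along A(t) = A + t*dA:
# d(lambda_i)/dt = v_i' (dA) v_i, with v_i the unit eigenvector.
i = 0
analytic = V[:, i] @ dA @ V[:, i]

t = 1e-6
w_pert = np.linalg.eigvalsh(A + t * dA)
finite_diff = (w_pert[i] - w[i]) / t
print(f"analytic {analytic:.6f}   finite-diff {finite_diff:.6f}")
```

When eigenvalues coincide, the eigenvectors are not unique and this formula breaks down, which is precisely the situation the single-parameter reparameterization in the abstract is designed to handle.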
NASA Technical Reports Server (NTRS)
Traversi, M.
1979-01-01
Data are presented on the sensitivity of: (1) mission analysis results to the boundary values given for number of passenger cars and average annual vehicle miles traveled per car; (2) vehicle characteristics and performance to specifications; and (3) tradeoff study results to the expected parameters.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
2D Decision-Making for Multi-Criteria Design Optimization
2006-05-01
... participating in the same subproblem, information on the tradeoffs between different subproblems is obtained from a sensitivity analysis and used for ... accomplished by some other mechanism. For the coordination between subproblems, we use the lexicographical ordering approach for multicriteria ... Our approach uses sensitivity results from nonlinear programming (Fiacco, 1983; Luenberger, 2003), for which we first ...
Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing
NASA Astrophysics Data System (ADS)
Lin, Psang Dain; Lu, Chia-Hung
2004-02-01
Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by application to the design of a cat's eye retroreflector and to the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
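The screening step both versions of this study describe can be reproduced in miniature with a pick-freeze (Saltelli-type) estimator of first-order Sobol' indices. The Ishigami test function and sample sizes below are stand-ins for the reservoir simulator, so only the mechanics, not the numbers, carry over.

import numpy as np

rng = np.random.default_rng(0)

def ishigami(x):
    # Standard test function; a stand-in for the reservoir model.
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

def first_order_sobol(f, dim, n=2**14):
    # Pick-freeze estimator: S_i = E[f(B) (f(AB_i) - f(A))] / Var(f),
    # where AB_i takes column i from B and freezes the rest at A.
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (f(ABi) - fA)) / V
    return S

print(first_order_sobol(ishigami, dim=3))
# Decision variables whose index falls below a screening threshold
# (x3 here) would be fixed, shrinking the optimization problem.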
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.
Bell, L T O; Gandhi, S
2018-06-01
The aim was to directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. The time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, PPV of 17.6% and mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, PPV of 44.0% and mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and CAD analysis times can vary widely between current commercially available CAD programs, and there is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results.
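For readers unfamiliar with the two reported metrics, they follow directly from per-polyp confusion counts. The counts below are hypothetical, chosen only so that the output reproduces CAD1's reported percentages.

def sensitivity_ppv(tp, fp, fn):
    # Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP).
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts consistent with CAD1's figures: 44 of 49 true polyps
# detected, at the cost of 206 false prompts.
sens, ppv = sensitivity_ppv(tp=44, fp=206, fn=5)
print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")   # 89.8%, 17.6%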
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
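A minimal sketch of the bootstrap convergence check described here, assuming a toy linear model and a crude squared-correlation index in place of the paper's estimators: resample the existing model runs, recompute the indices, and measure how often the parameter ranking survives.

import numpy as np

rng = np.random.default_rng(1)

def indices(x, y):
    # Crude sensitivity proxy: squared Pearson correlation per input.
    return np.array([np.corrcoef(x[:, i], y)[0, 1]**2 for i in range(x.shape[1])])

n = 500
x = rng.uniform(0, 1, (n, 3))
y = 5 * x[:, 0] + 2 * x[:, 1] + 0.1 * x[:, 2] + rng.normal(0, 0.5, n)  # toy model

base_order = np.argsort(-indices(x, y))
hits = 0
for _ in range(1000):
    b = rng.integers(0, n, n)              # resample runs with replacement
    hits += np.array_equal(np.argsort(-indices(x[b], y[b])), base_order)
print(f"ranking reproduced in {hits / 1000:.0%} of bootstrap resamples")
# Ranking convergence can be declared while the index values still wobble,
# which is the paper's point about not needing full value convergence.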
Integrated Data Collection Analysis (IDCA) Program — Bullseye® Smokeless Powder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
2013-05-30
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of Bullseye® smokeless powder (gunpowder). The participants found the gunpowder: 1) to have a range of sensitivity to impact, from less than RDX to almost as sensitive as PETN, 2) to be moderately sensitive to BAM and ABL friction, 3) to have a range for ESD, from insensitive to more sensitive than PETN, and 4) to have thermal sensitivity about the same as PETN and RDX.
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of the usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license).
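Control coefficients of the kind metabolic control analysis reports are scaled local sensitivities, C = (p/J) * dJ/dp. The sketch below estimates them by central differences for a single Michaelis-Menten step; this is a generic toy model, not SensA's implementation.

import numpy as np

def flux(p):
    # Michaelis-Menten rate as a toy model component: J = Vmax*S/(Km+S).
    vmax, km, s = p
    return vmax * s / (km + s)

def control_coefficient(f, p, i, rel=1e-6):
    # Scaled local sensitivity C_i = (p_i / f(p)) * df/dp_i,
    # via a central difference with relative step size rel.
    up, dn = p.copy(), p.copy()
    up[i] *= 1 + rel
    dn[i] *= 1 - rel
    return (f(up) - f(dn)) / (2 * rel * f(p))

p = np.array([10.0, 2.0, 1.0])                     # Vmax, Km, S (illustrative)
for name, i in [("Vmax", 0), ("Km", 1), ("S", 2)]:
    print(name, round(control_coefficient(flux, p, i), 3))
# Vmax -> 1.0, Km -> -0.667, S -> 0.667 for these values.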
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
NASA Astrophysics Data System (ADS)
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three-dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right-hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter
2017-01-01
Longitudinal data are almost always burdened with missing values. However, in educational and psychological research there is a large discrepancy between methodological suggestions and research practice. The former suggest applying sensitivity analysis in order to assess the robustness of the results under varying assumptions regarding the mechanism generating the missing data. In research practice, however, participants with missing data are usually discarded through listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines for reporting the results of sensitivity analysis are synthesised and applied to the results from the tutorial example.
Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses Experiences
Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid
2017-01-01
Background: Considering that many nursing actions affect other people's health and life, sensitivity to ethics in nursing practice is highly important for ethical leaders, who serve as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis in 2015. Data were collected using deep, semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. The main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
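The likelihood-ratio (score-function) idea the authors accelerate can be seen in one dimension: each sampled trajectory is reweighted by the derivative of its log-probability, so sensitivities come from a single simulation with no perturbed re-runs. The sketch below applies it to an exponential waiting time, a deliberately simplified stand-in for a KMC event; the rate constant and sample size are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

k = 2.0                                    # rate constant to differentiate against
T = rng.exponential(1.0 / k, 200_000)      # waiting times from one "simulation"

# Score function for p(T; k) = k * exp(-k T):  d log p / dk = 1/k - T.
score = 1.0 / k - T
lr_grad = np.mean(T * score)               # LR estimate of d E[T] / dk
print(lr_grad, -1.0 / k**2)                # both approximately -0.25

Because every sample is reused, the gradient costs nothing beyond the original run; the acceleration techniques themselves (rate rescaling, parallel replicas) are beyond this sketch.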
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. HERE WE REPORT AN EXTENSIVE SENSITIVITY ANALYSIS WHERE HIGH RESOLUTION FINITE ELEMENT (FE) MODELS OF MANDIBLES FROM SEVEN SPECIES OF CROCODILE WERE ANALYSED UNDER LOADS TYPICAL FOR COMPARATIVE ANALYSIS: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
Integrated Data Collection Analysis (IDCA) Program - NaClO 3/Icing Sugar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of NaClO 3 and icing sugar—NaClO 3/icing sugar mixture. The mixture was found to: be more sensitive than RDX but less sensitive than PETN in impact testing (180-grit sandpaper); be more sensitive than RDX and about the same sensitivity as PETN in BAM fiction testing; be less sensitive than RDX and PETN except for one participant found themore » mixture more sensitive than PETN in ABL ESD testing; and to have one to three exothermic features with the lowest temperature event occurring at ~ 160°C always observed in thermal testing. Variations in testing parameters also affected the sensitivity.« less
Development of a sensitivity analysis technique for multiloop flight control systems
NASA Technical Reports Server (NTRS)
Vaillard, A. H.; Paduano, J.; Downing, D. R.
1985-01-01
This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system, with variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.
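The quantities this report builds on can be reproduced with a few lines of linear algebra: for a simple singular value sigma with left and right singular vectors u and v, d(sigma)/dp = Re(u^H (dM/dp) v). The closed-loop matrix below is invented for illustration, and a finite-difference check confirms the gradients.

import numpy as np

def singular_value_gradients(M, dM):
    # d(sigma_i)/dp = Re(u_i^H dM v_i) for each (simple) singular value.
    U, s, Vh = np.linalg.svd(M)
    g = [np.real(U[:, i].conj() @ dM @ Vh[i, :].conj()) for i in range(len(s))]
    return s, np.array(g)

A = np.array([[1.0, 0.5], [0.2, 2.0]])     # illustrative closed-loop matrix
B = np.array([[0.1, 0.0], [0.3, -0.2]])    # its derivative w.r.t. a parameter p
p, h = 0.3, 1e-7
s, grads = singular_value_gradients(A + p * B, B)

fd = (np.linalg.svd(A + (p + h) * B, compute_uv=False)
      - np.linalg.svd(A + (p - h) * B, compute_uv=False)) / (2 * h)
print(grads, fd)                            # analytic vs finite difference
# The gradient of the smallest singular value shows which parameter
# variations erode relative stability fastest.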
Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Williams, Mark L
2007-01-01
Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel
2017-04-01
Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility designed to replace conventional chemical assay in industrial applications.
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
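A minimal sketch of the surrogate step, assuming a hand-rolled Gaussian RBF interpolant and a cheap algebraic stand-in for the multi-phase flow simulator. The abstract's R2 and MSE comparison is mimicked on held-out points; the Sobol' estimator (see the earlier reservoir-study sketch) would then run on the surrogate instead of the simulator.

import numpy as np

rng = np.random.default_rng(3)

def simulator(x):
    # Cheap stand-in for the expensive multi-phase flow model.
    return x[:, 0]**2 + 0.5 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

X = rng.uniform(0, 1, (60, 3))             # 60 "expensive" training runs
Xt = rng.uniform(0, 1, (200, 3))           # held-out test runs
y, yt = simulator(X), simulator(Xt)

eps = 2.0                                  # Gaussian RBF shape parameter
kernel = lambda P, Q: np.exp(-eps * ((P[:, None, :] - Q[None, :, :])**2).sum(-1))
w = np.linalg.solve(kernel(X, X) + 1e-8 * np.eye(len(X)), y)   # RBF weights
surrogate = lambda Z: kernel(Z, X) @ w

resid = yt - surrogate(Xt)
mse = np.mean(resid**2)
r2 = 1.0 - mse / np.var(yt)
print(f"MSE = {mse:.2e}, R2 = {r2:.4f}")   # accuracy check before reuse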
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
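The sampling-based screening step suggested here is easy to sketch. The toy exposure model below is monotonic but nonlinear, which is exactly where the rank-based Spearman measure keeps working while Pearson understates importance; the model and numbers are invented for illustration.

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(4)

n = 2000
x = rng.uniform(0.5, 2.0, (n, 3))
y = np.exp(x[:, 0]) * x[:, 1]**0.3 + 0.05 * x[:, 2]   # nonlinear, monotonic

for i in range(3):
    r = pearsonr(x[:, i], y)[0]            # linear association
    rho = spearmanr(x[:, i], y)[0]         # monotonic (rank) association
    print(f"x{i+1}: Pearson {r:+.2f}, Spearman {rho:+.2f}")
# Inputs with both measures near zero (x3 here) are screened out before
# the costlier variance-based FAST/Sobol' runs.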
Sensitivity study on durability variables of marine concrete structures
NASA Astrophysics Data System (ADS)
Zhou, Xin'gang; Li, Kefei
2013-06-01
In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was performed. Using Fick's second law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient and its time-dependent attenuation factor were analyzed. The analysis shows that the impact of the design variables on concrete durability differs: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor were higher than those of the other variables, so a relatively small error in these two parameters induces a larger error in concrete durability design and life prediction. Using probabilistic sensitivity analysis (PSA), the influence of the mean value and variance of the concrete durability design variables on the durability failure probability was studied. The results provide quantitative measures of the importance of concrete durability design and life prediction variables. It was concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have the greatest influence on the reliability of marine concrete structural durability. In the durability design and life prediction of marine concrete structures, it is therefore very important to reduce the measurement and statistical errors of the durability design variables.
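The deterministic sensitivity factors discussed here can be illustrated with the erf solution of Fick's second law and a time-dependent apparent diffusivity Da(t) = D0 (t0/t)^m. All parameter values below are invented for illustration, and the factors are normalized central-difference derivatives.

import math

def chloride(x, t, cs, d0, m, t0=0.0767):
    # erf solution of Fick's 2nd law with time-decaying apparent diffusivity:
    # C(x,t) = Cs * (1 - erf(x / (2 sqrt(Da(t) t)))), Da(t) = D0 * (t0/t)^m.
    da = d0 * (t0 / t)**m
    return cs * (1.0 - math.erf(x / (2.0 * math.sqrt(da * t))))

def sensitivity_factor(f, p, i, rel=1e-5):
    # Normalized (dimensionless) sensitivity: (p_i / f) * df/dp_i.
    up, dn = list(p), list(p)
    up[i] *= 1 + rel
    dn[i] *= 1 - rel
    return (f(*up) - f(*dn)) / (2 * rel * f(*p))

# 50 mm cover depth at 50 years; Cs in % binder, D0 in m^2/yr (illustrative).
base = [0.05, 50.0, 0.6, 1.6e-4, 0.5]      # x, t, Cs, D0, m
for name, i in [("Cs", 2), ("D0", 3), ("m", 4)]:
    print(name, round(sensitivity_factor(chloride, base, i), 2))
# The large magnitudes for D0 and m echo the paper's finding that these
# two variables dominate durability and service-life predictions.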
A Multi-Objective Decision-Making Model for Resources Allocation in Humanitarian Relief
2007-03-01
Applied Mathematics and Computation 163, 2005, p. 756; Malczewski, J., GIS and Multicriteria Decision Analysis, John Wiley and Sons, New York... used when interpreting the results of the analysis (Raimo et al. 2002). (7) Sensitivity analysis: sensitivity analysis in a DA process answers... Budget Scenario Analysis: the MILP is solved (using LINDO 6.1) for high, medium and low budget scenarios at both damage degree levels.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eno, L.; Rabitz, H.
1981-08-15
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long-time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are unable to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and, for the remaining potentially sensitive parameters, it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544
Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wainwright, Haruko Murakami; Finsterle, Stefan
2016-07-15
This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
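As a pocket-sized illustration of the Morris OAT method the manual covers (simplified here to uniform unit-cube inputs and a fixed step, rather than iTOUGH2's gridded levels):

import numpy as np

rng = np.random.default_rng(5)

def morris(f, dim, r=50, delta=0.25):
    # r one-at-a-time trajectories; the elementary effect of factor i is
    # EE_i = (f(x + delta*e_i) - f(x)) / delta.
    ee = np.zeros((r, dim))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, dim)
        fx = f(x)
        for i in rng.permutation(dim):     # perturb factors in random order
            x2 = x.copy()
            x2[i] += delta
            fx2 = f(x2)
            ee[t, i] = (fx2 - fx) / delta
            x, fx = x2, fx2
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

f = lambda x: x[0]**2 + x[0] * x[1] + 2 * x[1] + 0.01 * x[2]   # toy model
mu_star, sigma = morris(f, dim=3)
print("mu* :", mu_star.round(2))    # screens x3 as negligible
print("sigma:", sigma.round(2))     # nonzero for x1, x2: nonlinearity/interaction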
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity, or to prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to the shape or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhiming
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity, or to prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to the shape or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.
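A one-dimensional analogue of the continuous-sensitivity idea, assuming steady confined flow between two fixed heads: h(x) = h0 + (hL - h0) x / L, so dh/dL = -(hL - h0) x / L^2, which a perturbed-domain finite difference reproduces. This toy is invented for illustration, not one of the report's two examples.

import numpy as np

h0, hL, L = 10.0, 8.0, 100.0               # boundary heads [m], domain length [m]
x = np.linspace(0.0, 80.0, 5)              # observation points inside the domain

head = lambda x, L: h0 + (hL - h0) * x / L # steady 1-D confined-flow head
analytic = -(hL - h0) * x / L**2           # continuous-sensitivity result dh/dL

dL = 1e-3                                  # perturb the downstream boundary
fd = (head(x, L + dL) - head(x, L - dL)) / (2 * dL)
print(np.allclose(analytic, fd))           # True

The perturbed-domain finite difference needs two extra model runs per shape parameter and degrades near the moving boundary, which is the numerical shortcoming the report notes the CSE approach avoids.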
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
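In the variance-based setting, the grouped indices both versions of this abstract define reduce to the pick-freeze estimator applied to a whole block of columns at once: S_G = Var(E[Y | X_G]) / Var(Y). The five-input toy model and the two hypothetical groups below are illustrative only, not the Hanford model's actual inputs.

import numpy as np

rng = np.random.default_rng(6)

def grouped_sobol(f, groups, dim, n=2**14):
    # First-order Sobol' index of each *group* of inputs: swap the whole
    # block of columns from B into A (pick-freeze at the group level).
    A, B = rng.standard_normal((n, dim)), rng.standard_normal((n, dim))
    fA, fB = f(A), f(B)
    V = np.var(np.r_[fA, fB])
    out = {}
    for name, cols in groups.items():
        AB = A.copy()
        AB[:, cols] = B[:, cols]
        out[name] = np.mean(fB * (f(AB) - fA)) / V
    return out

f = lambda x: 2 * x[:, 0] + x[:, 1] + x[:, 2] * x[:, 3] + 0.5 * x[:, 4]
groups = {"boundary conditions": [0, 1], "permeability field": [2, 3, 4]}
print(grouped_sobol(f, groups, dim=5))
# Analytic values for this toy: 0.8 for the boundary pair and 0.2 for the
# permeability triple (total variance 6.25 = 5 + 1 + 0.25).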
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
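The space-filling sampling step is simple to reproduce. Below is a plain textbook Latin hypercube in the unit cube; the 46-dimensional size matches the study, but the scheme is generic, and rescaling columns to physical VIC parameter ranges is left out.

import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, dim):
    # One point in each of n equal-probability strata per dimension,
    # with the strata independently shuffled across dimensions.
    cols = [(rng.permutation(n) + rng.uniform(size=n)) / n for _ in range(dim)]
    return np.column_stack(cols)

samples = latin_hypercube(100, 46)          # 100 runs, 46 VIC parameters
print(samples.shape)                        # (100, 46)
print(np.histogram(samples[:, 0], bins=10, range=(0, 1))[0])  # 10 per bin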
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in the grouping strategies used for the uncertainty components. The variance-based sensitivity analysis is thus improved to investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key model system processes (e.g., the groundwater recharge process, the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a real-world groundwater reactive transport modeling case with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measures for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.
Ingalls, Brian; Mincheva, Maya; Roussel, Marc R
2017-07-01
A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
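A rough sketch of the extremum-sensitivity idea on a toy single-delay negative-feedback model (the gene-regulation model in the paper involves three delays and is not reproduced); the equation, parameter values, history and step sizes are illustrative assumptions:

    import numpy as np

    def simulate(beta, tau=2.0, dt=0.01, t_end=200.0):
        # delayed negative feedback: x'(t) = beta / (1 + x(t - tau)^4) - x(t),
        # integrated by forward Euler with a constant history x = 1
        n_lag = int(tau / dt)
        x = np.ones(int(t_end / dt) + n_lag)
        for i in range(n_lag, len(x) - 1):
            x[i + 1] = x[i] + dt * (beta / (1.0 + x[i - n_lag] ** 4) - x[i])
        return x[len(x) // 2:]                       # discard the transient

    def amplitude_sensitivity(beta, h=0.05):
        # central finite-difference sensitivity of the trajectory maximum, an
        # extremum-based measure that avoids the phase-shift divergence issue
        amp = lambda b: simulate(b).max()
        return (amp(beta + h) - amp(beta - h)) / (2 * h)

    print(amplitude_sensitivity(4.0))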
Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J
2014-12-01
This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analysis; in probabilistic sensitivity analysis, training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness. It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
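The probabilistic sensitivity analysis step has a simple generic structure; the sketch below uses hypothetical distributions shaped around the base-case values quoted above (roughly cost-neutral, 0.054 QALYs gained), not the study's actual parameter inputs:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    d_cost = rng.normal(0.0, 500.0, n)     # incremental cost per patient, $
    d_qaly = rng.normal(0.054, 0.04, n)    # incremental QALYs per patient

    wtp = 50_000.0                         # willingness-to-pay threshold, $/QALY
    net_benefit = wtp * d_qaly - d_cost    # positive draw => cost-effective
    print(f"cost-effective in {np.mean(net_benefit > 0):.0%} of simulations")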
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' approach to obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.
Assessment of energy and economic performance of office building models: a case study
NASA Astrophysics Data System (ADS)
Song, X. Y.; Ye, C. T.; Li, H. S.; Wang, X. L.; Ma, W. B.
2016-08-01
Building energy consumption accounts for more than 37.3% of total energy consumption in China, while energy-saving buildings make up just 5%. In this paper, to assess the energy-saving potential, an office building in Southern China was selected as a test case for energy consumption characteristics. The base building model was developed in the TRNSYS software and validated against data recorded during six days of field work in August-September 2013. Sensitivity analysis was conducted for the energy performance of building envelope retrofitting; five envelope parameters were analyzed to assess the thermal responses. Results indicated that the key sensitivity factors were the heat-transfer coefficient of exterior walls (U-wall), the infiltration rate and the shading coefficient (SC), whose summed sensitivity factor was about 89.32%. In addition, the results were evaluated in terms of energy and economic analysis. The sensitivity analysis results were validated against important findings of previous studies. Moreover, the cost-effectiveness method improved the efficiency of investment management in building energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
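Of the techniques listed, rank correlation analysis is the most compact to sketch; assuming a matrix of sampled parameter values and the corresponding model predictions, parameters can be ranked by the magnitude of their Spearman coefficients (the names and toy response below are illustrative):

    import numpy as np
    from scipy.stats import spearmanr

    def rank_correlation_importance(X, y, names):
        # rank parameters by |Spearman rho| between samples and predictions
        rhos = [spearmanr(X[:, j], y)[0] for j in range(X.shape[1])]
        return sorted(zip(names, rhos), key=lambda item: -abs(item[1]))

    rng = np.random.default_rng(0)
    X = rng.random((500, 16))                         # 16 uncertain parameters
    y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)  # toy predictions
    print(rank_correlation_importance(X, y, [f"p{j}" for j in range(16)])[:3])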
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
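For Gaussian-error least-squares calibrations, the information criteria named above have standard closed forms; a sketch, where n is the number of observations, k the number of estimated parameters and sse the sum of squared (weighted) residuals (the paper's exact conventions may differ):

    import numpy as np

    def aicc(n, k, sse):
        # corrected Akaike information criterion for a least-squares fit
        aic = n * np.log(sse / n) + 2 * k
        return aic + 2 * k * (k + 1) / (n - k - 1)

    def bic(n, k, sse):
        # Bayesian information criterion; penalizes extra parameters harder
        return n * np.log(sse / n) + k * np.log(n)

    # lower is better: compare alternative hydraulic-conductivity models
    print(aicc(120, 5, 8.4), bic(120, 5, 8.4))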
Sensitivity Analysis of Launch Vehicle Debris Risk Model
NASA Technical Reports Server (NTRS)
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The three-dimensional quasi-analytical sensitivity analysis code and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.
Fish oil supplementation and insulin sensitivity: a systematic review and meta-analysis.
Gao, Huanqing; Geng, Tingting; Huang, Tao; Zhao, Qinghua
2017-07-03
Fish oil supplementation has been shown to be associated with a lower risk of metabolic syndrome and to benefit a wide range of chronic diseases, such as cardiovascular disease, type 2 diabetes and several types of cancers. However, the evidence on the effects of fish oil supplementation on glucose metabolism and insulin sensitivity is still controversial. This meta-analysis summarized the existing evidence on the relationship between fish oil supplementation and insulin sensitivity and aimed to evaluate whether fish oil supplementation could improve insulin sensitivity. We searched the Cochrane Library, PubMed and Embase databases for relevant studies up to Dec 2016. Two researchers screened the literature independently by the selection and exclusion criteria. Studies were pooled using random-effects models to estimate a pooled SMD and corresponding 95% CI. This meta-analysis was performed with Stata 13.1 software. A total of 17 studies with 672 participants were included in this meta-analysis after screening of the 498 published articles found in the initial search. In the pooled analysis, fish oil supplementation had no effect on insulin sensitivity compared with placebo (SMD 0.17, 95% CI -0.15 to 0.48, p = 0.292). In subgroup analysis, fish oil supplementation could benefit insulin sensitivity among people who were experiencing at least one symptom of metabolic disorders (SMD 0.53, 95% CI 0.17 to 0.88, p < 0.001). Similarly, there were no significant differences between subgroups by method of insulin sensitivity measurement, dose of omega-3 polyunsaturated fatty acids (n-3 PUFA) or duration of the intervention. The sensitivity analysis indicated that the results were robust. Short-term fish oil supplementation is associated with an increase in insulin sensitivity among people with metabolic disorders.
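The random-effects pooling used in such meta-analyses is commonly the DerSimonian-Laird estimator; a sketch with made-up study-level SMDs and variances (not the 17 included studies):

    import numpy as np

    def dersimonian_laird(effects, variances):
        # random-effects pooling of study-level effects (DL tau^2 estimate)
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1.0 / variances
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)               # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study var
        w_star = 1.0 / (variances + tau2)
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    print(dersimonian_laird([0.4, -0.1, 0.3, 0.2], [0.04, 0.09, 0.05, 0.06]))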
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
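In Python, this kind of first- and second-order Sobol' decomposition is commonly run with the SALib package; a sketch with a toy function standing in for the VarroaPop simulator (parameter names and bounds are illustrative assumptions):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
        "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
    }
    model = lambda x: x[0] + 0.5 * x[1] * x[2]   # toy stand-in for the simulator
    X = saltelli.sample(problem, 1024)           # Saltelli scheme for Sobol' indices
    Y = np.array([model(x) for x in X])
    Si = sobol.analyze(problem, Y)               # Si["S1"], Si["S2"], Si["ST"]
    print(dict(zip(problem["names"], Si["S1"].round(2))))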
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability and reliability of the three methods is presented. Moreover, a graphical classification scheme based on Venn diagrams and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
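Of the three methods compared, SRC is the simplest to state: standardized regression coefficients from a linear fit to a Monte Carlo sample, valid as sensitivity measures when the model is near-linear. A sketch (convergence would be checked by recomputing the coefficients on growing subsamples, as done in the paper):

    import numpy as np

    def src(X, y):
        # standardized regression coefficients: regress the standardized
        # output on standardized factors; sum of squared SRCs ~ R^2
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(ys)), Xs], ys, rcond=None)
        return beta[1:]

    rng = np.random.default_rng(1)
    X = rng.random((1000, 4))
    y = 2 * X[:, 0] - X[:, 2] + rng.normal(0, 0.05, 1000)
    print(src(X, y).round(2))        # ~ [0.89, 0.0, -0.45, 0.0]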
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to variogram analysis, that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
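The variogram analogy underlying VARS can be sketched in one dimension: for a response y(x), gamma(h) = 0.5 E[(y(x+h) - y(x))^2] summarizes sensitivity across perturbation scales h. This shows only the core idea; star-based sampling and the full multi-factor framework are not reproduced here:

    import numpy as np

    def directional_variogram(model, lo, hi, hs, n=5000, rng=None):
        # gamma(h) = 0.5 * E[(y(x + h) - y(x))^2] along one factor's range
        rng = rng or np.random.default_rng()
        gam = []
        for h in hs:
            x = rng.uniform(lo, hi - h, n)
            gam.append(0.5 * np.mean((model(x + h) - model(x)) ** 2))
        return np.array(gam)

    # small-h values reflect local (Morris-like) sensitivity; large-h values
    # reflect global (Sobol-like) variability of the response surface
    print(directional_variogram(np.sin, 0.0, 2 * np.pi, [0.1, 0.5, 1.0]).round(3))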
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
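One way to sketch the Monte Carlo stage is to perturb the criteria weights and track how stable the resulting susceptibility ranking is; the scores, base weights and noise level below are hypothetical stand-ins for the AHP-derived values:

    import numpy as np

    def weight_sensitivity(scores, base_w, n=1000, noise=0.05, rng=None):
        # perturb criteria weights, renormalize, and measure how often the
        # ranking of alternatives (e.g., map units) stays unchanged
        rng = rng or np.random.default_rng()
        base_rank = np.argsort(-(scores @ base_w))
        stable = 0
        for _ in range(n):
            w = np.maximum(base_w + rng.normal(0.0, noise, len(base_w)), 0.0)
            w /= w.sum()
            stable += np.array_equal(np.argsort(-(scores @ w)), base_rank)
        return stable / n

    rng = np.random.default_rng(3)
    scores = rng.random((50, 5))                    # 50 map units x 5 criteria
    base_w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])  # AHP-style weights
    print(weight_sensitivity(scores, base_w, rng=rng))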
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can markedly reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for this performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from an impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R
2011-01-01
The objective of this study is to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) isolates heteroresistant to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC = 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Detection of heteroresistance to colistin was determined by plating efficiency for all the isolates with the same initial MICs of 2, 1, and 0.5 µg/ml, which revealed 14 isolates with, in some cases, a greater than 8-fold increase in MIC. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.
Multicultural Experience and Intercultural Sensitivity among South Korean Adolescents
ERIC Educational Resources Information Center
Park, Jung-Suh
2013-01-01
This study examined experience with multicultural contact and the intercultural sensitivity of majority adolescents in South Korean society, one that is rapidly shifting toward a more multicultural environment. It also analyzed the influence of these multicultural experiences on intercultural sensitivity. The results of the analysis revealed a…
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
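A compressed sketch of the projection step under simplifying assumptions: given an ensemble of analytical outputs from the known model, a first-order sensitivity matrix S and the measured-minus-analytical residual, one updating iteration solves the identification equation in the leading principal-component subspace (the paper's exact formulation and iteration details are not reproduced):

    import numpy as np

    def pca_projected_update(Y_ensemble, S, residual, k):
        # PCA basis from analytical outputs (rows = samples), then solve the
        # projected identification equation (P S) dp = P r by least squares
        Yc = Y_ensemble - Y_ensemble.mean(axis=0)
        _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
        P = Vt[:k]                                    # k principal directions
        dp, *_ = np.linalg.lstsq(P @ S, P @ residual, rcond=None)
        return dp         # parameter perturbation estimate for one iteration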
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
Anxiety Sensitivity and the Anxiety Disorders: A Meta-Analytic Review and Synthesis
ERIC Educational Resources Information Center
Olatunji, Bunmi O.; Wolitzky-Taylor, Kate B.
2009-01-01
There has been significant interest in the role of anxiety sensitivity (AS) in the anxiety disorders. In this meta-analysis, we empirically evaluate differences in AS between anxiety disorders, mood disorders, and nonclinical controls. A total of 38 published studies (N = 20,146) were included in the analysis. The results yielded a large effect…
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two implementations of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Macdonald, R. B.; Hall, F. G.; Carnes, J. G.
1986-01-01
Results from analysis of a data set of simultaneous measurements of Thematic Mapper band reflectance and leaf area index are presented. The measurements were made over pure stands of Aspen in the Superior National Forest of northern Minnesota. The analysis indicates that the reflectance may be sensitive to the leaf area index of the Aspen early in the season. The sensitivity disappears as the season progresses. Based on the results of model calculations, an explanation for the observed relationship is developed. The model calculations indicate that the sensitivity of the reflectance to the Aspen overstory depends on the amount of understory present.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of the optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by increasing the focus of the optimization on sampling from the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
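A sketch of how sensitivity information might enter the DDS selection step: instead of including every decision variable with equal probability, inclusion is biased toward the more sensitive ones. The abstract does not specify the exact weighting, so the scheme below is an assumption; p_select would follow the usual DDS schedule 1 - ln(i)/ln(max_iterations):

    import numpy as np

    def dds_candidate(x, lo, hi, p_select, sens, rng, r=0.2):
        # DDS-style perturbation with sensitivity-weighted variable selection
        p = np.clip(p_select * sens / sens.mean(), 0.0, 1.0)
        mask = rng.random(len(x)) < p
        if not mask.any():
            mask[rng.integers(len(x))] = True    # DDS always perturbs >= 1 DV
        cand = x.copy()
        step = rng.normal(0.0, r * (hi - lo))    # scaled to each DV's range
        cand[mask] = np.clip(x[mask] + step[mask], lo[mask], hi[mask])
        return cand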
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature and gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, the simulated results have been compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment and education to reduce person-to-person contact.
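The relative-importance ranking in such compartmental models is typically computed with the normalized forward sensitivity index, Upsilon_p = (dR0/dp)(p/R0); a sketch with a hypothetical R0 expression, since the paper's actual R0 formula is not given in the abstract:

    def forward_sensitivity_index(r0, p, h=1e-6):
        # normalized forward sensitivity index: (dR0/dp) * (p / R0)
        return (r0(p + h) - r0(p - h)) / (2 * h) * p / r0(p)

    # hypothetical R0 as a function of the person-to-person contact rate beta
    r0 = lambda beta: beta * 0.8 / 0.1          # contact * transmission / recovery
    print(forward_sensitivity_index(r0, 0.3))   # 1.0, since R0 is linear in beta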
Sensitivity analysis of add-on price estimate for select silicon wafering technologies
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1982-01-01
The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
Hsu, Chung-Jen; Jones, Elizabeth G
2017-02-01
This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the stopping distance function. The formulation of stopping distance treats each variable as a probability density function, enabling Monte Carlo simulations. The results of the sensitivity analysis show that initial speed is the factor to which the stopping distances of CVs and non-CVs are most sensitive. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce initial speeds. Copyright © 2016 Elsevier Ltd. All rights reserved.
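The generic structure of such a Monte Carlo sensitivity analysis can be sketched as follows; the distributions are hypothetical rather than the paper's calibrated inputs, but the rank correlations reproduce the qualitative finding that initial speed dominates:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n = 100_000
    v0 = rng.normal(24.6, 2.0, n)                  # initial speed, m/s
    t_pr = rng.lognormal(np.log(1.5), 0.3, n)      # perception-reaction time, s
    a = rng.normal(3.4, 0.4, n)                    # braking deceleration, m/s^2

    d = v0 * t_pr + v0 ** 2 / (2 * a)              # stopping distance, m
    for name, x in [("v0", v0), ("t_pr", t_pr), ("a", a)]:
        print(name, round(spearmanr(x, d)[0], 2))  # v0 has the largest |rho|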
NASA Astrophysics Data System (ADS)
Podgornova, O.; Leaney, S.; Liang, L.
2018-07-01
Extracting medium properties from seismic data faces limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little impact on the data (in some cases none at all). If these properties are used as the inversion parameters, the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and the frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret, and enhances the results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify the spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.
Characterization of Homopolymer and Polymer Blend Films by Phase Sensitive Acoustic Microscopy
NASA Astrophysics Data System (ADS)
Ngwa, Wilfred; Wannemacher, Reinhold; Grill, Wolfgang
2003-03-01
We have used phase sensitive acoustic microscopy (PSAM) to study homopolymer thin films of polystyrene (PS) and poly (methyl methacrylate) (PMMA), as well as PS/PMMA blend films. We show from our results that PSAM can be used as a complementary and highly valuable technique for elucidating the three-dimensional (3D) morphology and micromechanical properties of thin films. Three-dimensional image acquisition with vector contrast provides the basis for complex V(z) analysis (per image pixel), 3D image processing, height profiling, and subsurface image analysis of the polymer films. Results show good agreement with previous studies. In addition, important new information on the three-dimensional structure and properties of polymer films is obtained. Homopolymer film structure analysis reveals (pseudo-) dewetting by retraction of droplets, resulting in a morphology that can serve as a starting point for the analysis of polymer blend thin films. The outcomes of confocal laser scanning microscopy studies performed on the same samples are correlated with the obtained results. Advantages and limitations of PSAM are discussed.
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dormody, M.; Johnson, R. P.; Atwood, W. B.
2011-12-01
We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stages. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.
Gupta, Omesh P; Brown, Gary C; Brown, Melissa M
2008-05-01
To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
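As a rough illustration of the cost-utility arithmetic (discounted QALYs gained and dollars per QALY), here is a minimal sketch with invented inputs; the study's Markov model is far more detailed:

```python
# Minimal sketch of a discounted QALY gain and a cost-utility ratio,
# using illustrative numbers, not the study's actual model inputs.
def discounted_qalys(utility_gain_per_year, years, rate=0.03):
    """Sum a constant annual utility gain, discounted at `rate` per year."""
    return sum(utility_gain_per_year / (1 + rate) ** t for t in range(1, years + 1))

qaly_gain = discounted_qalys(utility_gain_per_year=0.06, years=15)  # hypothetical
total_cost = 3500.0  # hypothetical incremental cost in dollars
print(f"Discounted QALYs gained: {qaly_gain:.3f}")
print(f"Cost-utility: ${total_cost / qaly_gain:,.0f} per QALY")
```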
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in east Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin in this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence, they are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28, which indicates high uncertainties on these parameters that can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
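A minimal sketch of the elasticity-style One-Factor-At-A-Time index described above; the gross-margin function, parameter names and values are hypothetical stand-ins for the agro-economic model:

```python
# Sketch of a One-Factor-At-A-Time (OAT) sensitivity index: relative change
# in the objective divided by relative change in the input. The model below
# is a stand-in; the real agro-economic model is far richer.
def gross_margin(params):
    # Hypothetical placeholder objective (e.g. aggregated farm gross margin).
    return (params["yield_response"] * params["water_supply"]
            + 0.5 * params["precipitation"])

base = {"yield_response": 1.2, "water_supply": 800.0, "precipitation": 300.0}
y0 = gross_margin(base)

for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10            # +10% on one factor at a time
    y1 = gross_margin(perturbed)
    s_index = ((y1 - y0) / y0) / 0.10  # elasticity-style index
    print(f"{name}: sensitivity index = {s_index:.2f}")
```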
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than the other tested metrics, which may lead to new biological discoveries.
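For concreteness, here is a minimal sketch of two of the tested ranking metrics, the absolute signal-to-noise ratio and a plain (unmoderated) Welch t statistic; the expression matrix is simulated, and the moderated variant would additionally shrink the variance estimates:

```python
# Sketch of two gene-ranking metrics computed per gene on simulated data:
# absolute signal-to-noise ratio and the (unmoderated) Welch t statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, (1000, 10))   # 1000 genes x 10 samples
group_b = rng.normal(0.2, 1.2, (1000, 12))

# |Signal-to-noise|: difference of means over sum of standard deviations.
snr = np.abs(group_a.mean(1) - group_b.mean(1)) / (
    group_a.std(1, ddof=1) + group_b.std(1, ddof=1))

# Welch t statistic (unequal variances).
welch_t, _ = stats.ttest_ind(group_a, group_b, axis=1, equal_var=False)

ranking = np.argsort(-np.abs(welch_t))  # gene indices, most significant first
print("Top 5 genes by |Welch t|:", ranking[:5])
```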
Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS
NASA Technical Reports Server (NTRS)
McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2015-01-01
The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance
2015-01-01
Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553
Santurtún, Ana; Riancho, José A; Arozamena, Jana; López-Duarte, Mónica; Zarrabeitia, María T
2017-01-01
Several methods have been developed to determine genetic profiles from mixed samples and to analyze chimerism in transplanted patients. The aim of this study was to explore the effectiveness of using droplet digital PCR (ddPCR) for mixed chimerism detection (a mixture of genetic profiles resulting after allogeneic hematopoietic stem cell transplantation (HSCT)). We analyzed 25 DNA samples from patients who had undergone HSCT and compared the performance of ddPCR and two established methods for chimerism detection, based upon Indel and STR analysis, respectively. Additionally, eight artificial DNA mixture samples were created to evaluate the sensitivity of ddPCR. Our results show that the chimerism percentages estimated by the analysis of a single Indel using ddPCR were very similar to those calculated by the amplification of 15 STRs (r² = 0.970) and to the results obtained by the amplification of 38 Indels (r² = 0.975). Moreover, the amplification of a single Indel by ddPCR was sensitive enough to detect a minor DNA contributor comprising down to 0.5% of the sample. We conclude that ddPCR can be a powerful tool for the determination of a genetic profile of forensic mixtures and for clinical chimerism analysis when traditional techniques are not sensitive enough.
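A minimal sketch of how a chimerism percentage could be estimated from ddPCR droplet counts via Poisson statistics; the droplet numbers and the pairing of a donor-informative Indel assay with a reference assay are invented for illustration and may differ from the study's workflow:

```python
# Sketch of a mixed-chimerism estimate from ddPCR droplet counts; the counts
# below are invented. Copies per droplet follow from Poisson statistics.
import math

def ddpcr_lambda(positive, total):
    """Mean target copies per droplet from Poisson statistics."""
    return -math.log(1 - positive / total)

# Hypothetical counts: donor-informative Indel assay vs. a reference assay.
donor = ddpcr_lambda(positive=180, total=15000)
reference = ddpcr_lambda(positive=14000, total=15000)

chimerism_pct = 100 * donor / reference
print(f"Estimated minor (donor) fraction: {chimerism_pct:.2f}%")
```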
Rine, J.M.; Shafer, J.M.; Covington, E.; Berg, R.C.
2006-01-01
Published information on the correlation and field-testing of the technique of stack-unit/aquifer sensitivity mapping with documented subsurface contaminant plumes is rare. The inherent characteristic of stack-unit mapping, which makes it a superior technique to other analyses that amalgamate data, is the ability to deconstruct the sensitivity analysis on a unit-by-unit basis. An aquifer sensitivity map, delineating the relative sensitivity of the Crouch Branch aquifer of the Administrative/Manufacturing Area (A/M) at the Savannah River Site (SRS) in South Carolina, USA, incorporates six hydrostratigraphic units, surface soil units, and relevant hydrologic data. When this sensitivity map is compared with the distribution of the contaminant tetrachloroethylene (PCE), PCE is present within the Crouch Branch aquifer within an area classified as highly sensitive, even though the PCE was primarily released on the ground surface within areas classified with low aquifer sensitivity. This phenomenon is explained through analysis of the aquifer sensitivity map, the groundwater potentiometric surface maps, and the plume distributions within the area on a unit-by-unit basis. The results of this correlation show how the paths of the PCE plume are influenced by both the geology and the groundwater flow. © Springer-Verlag 2006.
Sun, Jiahong; Zhao, Min; Miao, Song; Xi, Bo
2016-01-01
Many studies have suggested that polymorphisms of three key genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system (RAAS) play important roles in the development of blood pressure (BP) salt sensitivity, but they have revealed inconsistent results. Thus, we performed a meta-analysis to clarify the association. PubMed and Embase databases were searched for eligible published articles. Fixed- or random-effect models were used to pool odds ratios and 95% confidence intervals based on whether there was significant heterogeneity between studies. In total, seven studies [237 salt-sensitive (SS) cases and 251 salt-resistant (SR) controls] for the ACE gene I/D polymorphism, three studies (130 SS cases and 221 SR controls) for the AGT gene M235T polymorphism and three studies (113 SS cases and 218 SR controls) for the CYP11B2 gene C344T polymorphism were included in this meta-analysis. The results showed no significant association between these three polymorphisms in the RAAS and BP salt sensitivity under the three genetic models (all p > 0.05). The meta-analysis suggested that the three polymorphisms (ACE gene I/D, AGT gene M235T, CYP11B2 gene C344T) in the RAAS have no significant effect on BP salt sensitivity.
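The fixed-effect side of the pooling described above can be sketched as inverse-variance weighting on the log odds ratio scale; the study values below are invented:

```python
# Sketch of fixed-effect (inverse-variance) pooling of odds ratios on the
# log scale; study estimates are invented for illustration.
import math

# (log OR, standard error) for three hypothetical studies
studies = [(0.15, 0.30), (-0.05, 0.25), (0.10, 0.40)]

weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = (pooled_log_or + z * pooled_se for z in (-1.96, 1.96))
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```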
NASA Astrophysics Data System (ADS)
Bukreeva, Ekaterina B.; Bulanova, Anna A.; Kistenev, Yury V.; Kuzmin, Dmitry A.; Tuzikov, Sergei A.; Yumov, Evgeny L.
2014-11-01
The results of the joint use of laser photoacoustic spectroscopy and chemometrics methods in gas analysis of the exhaled air of patients with respiratory diseases (chronic obstructive pulmonary disease, pneumonia and lung cancer) are presented. The absorption spectra of the exhaled breath of all volunteers were measured, classification methods were applied to the scans of the absorption spectra, and the sensitivity/specificity of the classification results was determined. Pairwise nosological classification results were obtained for all investigated volunteers, together with the corresponding sensitivity and specificity indices.
2014-01-01
Background Due to recent European legislation banning animal tests for safety assessment within the cosmetic industry, the development of in vitro alternatives for assessment of skin sensitization is highly prioritized. To date, proposed in vitro assays are mainly based on single biomarkers, which so far have not been able to classify and stratify chemicals into subgroups related to risk or potency. Methods Recently, we presented the Genomic Allergen Rapid Detection (GARD) assay for assessment of chemical sensitizers. In this paper, we show how the genome-wide readout of GARD can be expanded and used to identify differentially regulated pathways relating to individual chemical sensitizers. In this study, we investigated the mechanisms of action of a range of skin sensitizers through pathway identification, pathway classification and transcription factor analysis, and related this to the reactive mechanisms and potency of the sensitizing agents. Results By transcriptional profiling of chemically stimulated MUTZ-3 cells, 33 canonical pathways intimately involved in sensitization to chemical substances were identified. The results showed that metabolic processes, cell cycling and oxidative stress responses are the key events activated during skin sensitization, and that these functions are engaged differently depending on the reactivity mechanisms of the sensitizing agent. Furthermore, the results indicate that the chemical reactivity groups seem to gradually engage more pathways, and more molecules in each pathway, with increasing sensitizing potency of the chemical used for stimulation. Also, a switch in gene regulation from up- to down-regulation with increasing potency was seen both in genes involved in metabolic functions and in cell cycling. These observed pathway patterns were clearly reflected in the regulatory elements identified to drive these processes, and 33 regulatory elements have been proposed for further analysis. Conclusions This study demonstrates that functional analysis of biomarkers identified from our genomics study of human MUTZ-3 cells can be used to assess the sensitizing potency of chemicals in vitro, through the identification of key cellular events, such as metabolic and cell cycling pathways. PMID:24517095
A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element
Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili
2016-01-01
This paper describes a small range six-axis accelerometer (the measurement range of the sensor is ±g) with a high sensitivity DCB (Double Cantilever Beam) elastic element. This sensor is developed based on a parallel mechanism because of its reliability. The accuracy of the sensor is affected by its sensitivity characteristics. To improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is found to be 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments prove that the accelerometer with the DCB elastic element exhibits good sensitivity and precision. PMID:27657089
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis research is not sufficiently accurate and has limited reference value, because the mathematical models used are relatively simple, changes of the load and of the initial piston displacement are ignored, and experimental verification is often lacking. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step response curves under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from the state vector time-history curves of the step response. The maximum value of the displacement variation percentage and the sum of the absolute values of the displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analyzed. The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston and load force) are then verified experimentally on the hydraulic drive unit test platform, and the experimental research shows that the sensitivity analysis results obtained through simulation approximate the test results. This research indicates the sensitivity characteristics of each parameter of the hydraulic drive unit; the main and secondary performance-affecting parameters are identified under different working conditions, which provides a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
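The two sensitivity indexes defined above can be sketched directly from a nominal and a parameter-perturbed response; the step responses below are toy first-order curves, not the paper's hydraulic model:

```python
# Sketch of the two sensitivity indexes described: the maximum displacement
# variation percentage and the sum of absolute variations over the sampling
# time, for nominal vs. perturbed step responses (toy signals).
import numpy as np

t = np.linspace(0, 1, 1001)
nominal = 10 * (1 - np.exp(-8 * t))        # toy 10 mm step response
perturbed = 10 * (1 - np.exp(-7.2 * t))    # same response with one parameter perturbed

variation_pct = 100 * (perturbed - nominal) / 10.0   # relative to the 10 mm step
index_max = np.max(np.abs(variation_pct))
index_sum = np.sum(np.abs(perturbed - nominal))

print(f"Max displacement variation: {index_max:.2f}%")
print(f"Sum of absolute variations: {index_sum:.1f} mm·samples")
```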
dos Santos, Marcelo R.; Sayegh, Ana L.C.; Armani, Rafael; Costa-Hong, Valéria; de Souza, Francis R.; Toschi-Dias, Edgar; Bortolotto, Luiz A.; Yonamine, Mauricio; Negrão, Carlos E.; Alves, Maria-Janieire N.N.
2018-01-01
OBJECTIVES: Misuse of anabolic androgenic steroids by athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, their abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure. The mechanisms underlying these alterations, however, are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. METHODS: Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. RESULTS: Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and both baroreflex sensitivity and pulse wave velocity. CONCLUSIONS: Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users, together with arterial stiffness. These alterations might be the mechanisms triggering the increased blood pressure in this population. PMID:29791601
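A minimal sketch of the sequence method for spontaneous baroreflex sensitivity, assuming synthetic beat-to-beat systolic pressure and R-R interval series; the run-length threshold follows common practice and may differ from the study's settings:

```python
# Sketch of the sequence method for spontaneous baroreflex sensitivity (BRS):
# find runs of >=3 beats where systolic BP and R-R interval both rise (or
# both fall), then regress RR on SBP within each run. Data are synthetic.
import numpy as np

sbp = np.array([118, 120, 123, 126, 124, 121, 118, 117, 119, 122.0])  # mmHg
rri = np.array([820, 828, 840, 855, 848, 835, 822, 818, 826, 838.0])  # ms

d_sbp, d_rri = np.diff(sbp), np.diff(rri)
sign = np.sign(d_sbp) * (np.sign(d_sbp) == np.sign(d_rri))  # 0 if discordant

slopes, start = [], 0
for i in range(len(sign)):
    if i == len(sign) - 1 or sign[i + 1] != sign[i] or sign[i] == 0:
        if sign[start] != 0 and i - start + 1 >= 2:  # >=2 diffs -> >=3 beats
            seg = slice(start, i + 2)
            slopes.append(np.polyfit(sbp[seg], rri[seg], 1)[0])
        start = i + 1

print(f"BRS estimate: {np.mean(slopes):.1f} ms/mmHg over {len(slopes)} sequences")
```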
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
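The variogram idea at the core of VARS can be illustrated with a toy response surface; this is not VARS-TOOL code, and the clipping at the parameter bounds is a simplification:

```python
# Sketch of the variogram concept behind VARS: estimate, for one parameter
# direction, gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] from sampled pairs, then
# integrate across scales to get an IVARS-like metric. Toy model only.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x1, x2: np.sin(2 * np.pi * x1) + 0.3 * x2   # toy response surface

x1 = rng.uniform(0, 1, 2000)
x2 = rng.uniform(0, 1, 2000)

hs = np.linspace(0.02, 0.3, 15)
gamma = np.array([0.5 * np.mean((f(np.clip(x1 + h, 0, 1), x2) - f(x1, x2)) ** 2)
                  for h in hs])  # perturb x1 only

# Integrate the variogram across scales (trapezoid rule) -> IVARS-like metric.
ivars = float(np.sum(0.5 * (gamma[1:] + gamma[:-1]) * np.diff(hs)))
print(f"IVARS-like sensitivity of x1 up to H=0.3: {ivars:.4f}")
```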
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
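A minimal sketch of the (revised) Morris elementary-effects screening applied to a toy three-factor model; mu* is the mean absolute elementary effect used to rank factors, and the model stands in for the AnMBR filtration model:

```python
# Sketch of Morris screening: compute elementary effects
# EE_i = (y(x + Delta*e_i) - y(x)) / Delta along random trajectories,
# then rank factors by mu* = mean(|EE|) and sigma = std(EE). Toy model.
import numpy as np

rng = np.random.default_rng(7)
f = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2] + x[0] * x[1]

k, r, delta = 3, 20, 0.1
ee = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)
    y0 = f(x)
    for i in rng.permutation(k):   # one-at-a-time steps along a trajectory
        x[i] += delta
        y1 = f(x)
        ee[t, i] = (y1 - y0) / delta
        y0 = y1

mu_star, sigma = np.abs(ee).mean(0), ee.std(0)
for i in range(k):
    print(f"factor {i}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
```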
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
NASA Technical Reports Server (NTRS)
Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.
1975-01-01
The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.
Prevalence of potent skin sensitizers in oxidative hair dye products in Korea.
Kim, Hyunji; Kim, Kisok
2016-09-01
The objective of the present study was to elucidate the prevalence of potent skin sensitizers in oxidative hair dye products manufactured by Korean domestic companies. A database on hair dye products made by domestic companies and selling in the Korean market in 2013 was used to obtain information on company name, brand name, quantity of production, and ingredients. The prevalence of substances categorized as potent skin sensitizers was calculated using the hair dye ingredient database, and the pattern of concomitant presence of hair dye ingredients was analyzed using network analysis software. A total of 19 potent skin sensitizers were identified from a database that included 99 hair dye products manufactured by Korean domestic companies. Among 19 potent skin sensitizers, the four most frequent were resorcinol, m-aminophenol, p-phenylenediamine (PPD), and p-aminophenol; these four skin-sensitizing ingredients were found in more than 50% of the products studied. Network analysis showed that resorcinol, m-aminophenol, and PPD existed together in many hair dye products. In 99 products examined, the average product contained 4.4 potent sensitizers, and 82% of the products contained four or more skin sensitizers. The present results demonstrate that oxidative hair dye products made by Korean domestic manufacturers contain various numbers and types of potent skin sensitizers. Furthermore, these results suggest that some hair dye products should be used with caution to prevent adverse effects on the skin, including allergic contact dermatitis.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao
2011-01-01
Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we have conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95%CI: 97.7–99.3; I2 = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95%CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the techniques was influenced by sample size and instrument type but by not sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of PETN Class 4. The PETN was found to have: 1) an impact sensitivity (DH50) range of 6 to 12 cm, 2) a BAM friction sensitivity (F50) range of 7 to 11 kg and a TIL (0/10) of 3.7 to 7.2 kg, 3) an ABL friction sensitivity threshold of 5 or less psig at 8 fps, 4) an ABL ESD sensitivity threshold of 0.031 to 0.326 J/g, and 5) a thermal sensitivity of an endothermic feature with Tmin = ~141 °C and an exothermic feature with Tmax = ~205 °C.
ERIC Educational Resources Information Center
Angrist, Joshua; Pischke, Jorn-Steffen
2010-01-01
This essay reviews progress in empirical economics since Leamer's (1983) critique. Leamer highlighted the benefits of sensitivity analysis, a procedure in which researchers show how their results change with changes in specification or functional form. Sensitivity analysis has had a salutary but not a revolutionary effect on econometric practice.…
Sensitivity of VIIRS Polarization Measurements
NASA Technical Reports Server (NTRS)
Waluschka, Eugene
2010-01-01
The design of an optical system typically involves a sensitivity analysis where the various lens parameters, such as lens spacing and curvatures, to name two, are (slightly) varied to see what, if any, effect this has on the performance and to establish manufacturing tolerances. A similar analysis was performed for the VIIRS instrument's polarization measurements to see how real-world departures from perfectly linearly polarized light entering VIIRS affect the polarization measurement. The methodology and a few of the results of this polarization sensitivity analysis are presented and applied to the construction of a single polarizer which will cover the VIIRS VIS/NIR spectral range. Keywords: VIIRS, polarization, ray trace, polarizers, Bolder Vision, MOXTEK
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
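For reference, the two goodness-of-fit measures named above can be computed as follows; the flow series are toy numbers, and the index of volumetric fit is taken here as the simulated-to-observed volume ratio, which may differ from the authors' exact definition:

```python
# Sketch of the Nash-Sutcliffe efficiency and an index of volumetric fit
# (assumed here to be the ratio of simulated to observed total volume).
import numpy as np

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2, 1.5])   # observed flows (toy)
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.3, 2.0, 1.7])   # simulated flows (toy)

nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
ivf = np.sum(sim) / np.sum(obs)

print(f"Nash-Sutcliffe efficiency: {nse:.3f}")
print(f"Index of volumetric fit (sim/obs volume): {ivf:.3f}")
```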
Nuclear Data Needs for the Neutronic Design of MYRRHA Fast Spectrum Research Reactor
NASA Astrophysics Data System (ADS)
Stankovskiy, A.; Malambu, E.; Van den Eynde, G.; Díez, C. J.
2014-04-01
A global sensitivity analysis of effective neutron multiplication factor to the change of nuclear data library has been performed. It revealed that the test version of JEFF-3.2 neutron-induced evaluated data library produces closer results to ENDF/B-VII.1 than JEFF-3.1.2 does. The analysis of contributions of individual evaluations into keff sensitivity resulted in the priority list of nuclides, uncertainties on cross sections and fission neutron multiplicities of which have to be improved by setting up dedicated differential and integral experiments.
[Analysis and experimental verification of sensitivity and SNR of laser warning receiver].
Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue
2009-01-01
In order to counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgently needed; the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on statistical signal detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and receiver noise were analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing the detection probability factor and the false alarm rate factor, and the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed; the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in the performance analysis of LWRs.
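The threshold-detection relations underlying this analysis can be sketched for Gaussian noise: fixing the false alarm rate sets the Neyman-Pearson threshold, from which the detection probability at a given SNR follows; the numbers are illustrative, not the receiver's actual operating point:

```python
# Sketch of Neyman-Pearson threshold detection in Gaussian noise:
# choose a false alarm rate, derive the threshold, then evaluate the
# detection probability for an assumed SNR. Values are illustrative.
from scipy.stats import norm

p_fa = 1e-6                        # required false alarm rate
sigma_noise = 1.0                  # noise standard deviation (arbitrary units)
threshold = sigma_noise * norm.isf(p_fa)      # Neyman-Pearson threshold

snr = 6.0                          # signal amplitude over noise sigma
p_d = norm.sf(threshold / sigma_noise - snr)  # detection probability

print(f"threshold = {threshold:.2f}, P_d = {p_d:.4f} at SNR = {snr}")
```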
NASA Astrophysics Data System (ADS)
Kala, Zdeněk; Kala, Jiří
2011-09-01
The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.
Rahman, Tanzina; Millwater, Harry; Shipley, Heather J
2014-11-15
Aluminum oxide nanoparticles have been widely used in various consumer products and there are growing concerns regarding their exposure in the environment. This study deals with the modeling, sensitivity analysis and uncertainty quantification of one-dimensional transport of nano-sized (~82 nm) aluminum oxide particles in saturated sand. The transport of aluminum oxide nanoparticles was modeled using a two-kinetic-site model with a blocking function. The modeling was done at different ionic strengths, flow rates, and nanoparticle concentrations. The two sites representing fast and slow attachments along with a blocking term yielded good agreement with the experimental results from the column studies of aluminum oxide nanoparticles. The same model was used to simulate breakthrough curves under different conditions using experimental data and calculated 95% confidence bounds of the generated breakthroughs. The sensitivity analysis results showed that slow attachment was the most sensitive parameter for high influent concentrations (e.g. 150 mg/L Al2O3) and the maximum solid phase retention capacity (related to blocking function) was the most sensitive parameter for low concentrations (e.g. 50 mg/L Al2O3). Copyright © 2014 Elsevier B.V. All rights reserved.
Cacuci, Dan G.; Favorite, Jeffrey A.
2018-04-06
This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.
2016-11-08
We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
Algorithm for automatic analysis of electro-oculographic data
2013-01-01
Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement. PMID:24160372
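A minimal sketch of velocity-threshold saccade detection on a synthetic EOG trace; the published algorithm estimates its thresholds from signal features, whereas here the threshold is simply a multiple of a robust noise estimate:

```python
# Sketch of threshold-based saccade detection in an EOG trace; the signal is
# synthetic, and the threshold rule (6x a robust noise sigma) is illustrative.
import numpy as np

fs = 500.0                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
eog = np.where(t > 1.0, 200.0, 0.0) + np.random.default_rng(3).normal(0, 5, t.size)

velocity = np.gradient(eog, 1 / fs)          # uV/s
noise = 1.4826 * np.median(np.abs(velocity - np.median(velocity)))  # robust sigma
threshold = 6 * noise                        # auto-calibrated threshold (sketch)

candidates = np.flatnonzero(np.abs(velocity) > threshold)
print(f"saccade onset near t = {t[candidates[0]]:.3f} s" if candidates.size
      else "no saccade detected")
```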
Savasoglu, Kaan; Payzin, Kadriye Bahriye; Ozdemirkiran, Fusun; Berber, Belgin
2015-08-01
To determine the utility of the Quantitative Real Time PCR (RQ-PCR) assay in the follow-up of Chronic Myeloid Leukemia (CML) patients. A cross-sectional observational study was conducted at Izmir Ataturk Education and Research Hospital, Izmir, Turkey, from 2009 to 2013. Cytogenetic, FISH, and RQ-PCR test results from materials of 177 CML patients, collected between 2009 and 2013, were compiled for comparative analysis. Statistical analysis was performed to compare the FISH, karyotype and RQ-PCR results of the patients. Karyotyping and FISH specificity and sensitivity rates were determined by ROC analysis against the RQ-PCR results. The chi-square test was used to compare test failure rates. Sensitivity and specificity values were 17.6% and 98% for karyotyping (p=0.118, p > 0.05) and 22.5% and 96% for FISH (p=0.064, p > 0.05), respectively. FISH sensitivity was slightly higher than that of karyotyping, and a strong correlation was found between the two tests (p < 0.001). The RQ-PCR test failure rate did not correlate with the other two tests (p > 0.05); however, the relationship between the karyotyping and FISH test failure rates was statistically significant (p < 0.001). Except for situations that require karyotype analysis, the RQ-PCR assay can be used alone in the follow-up of CML.
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
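The Dice and Jaccard metrics used above to score segmentation quality are straightforward overlap ratios on binary masks. A minimal sketch with toy masks (not the paper's region-templates framework):

```python
import numpy as np

def dice_jaccard(mask_a: np.ndarray, mask_b: np.ndarray):
    """Dice and Jaccard overlap between two boolean segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

# Toy 2-D masks standing in for a tuned vs. reference segmentation.
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True
out = np.zeros((64, 64), dtype=bool); out[20:52, 16:48] = True
print(dice_jaccard(out, ref))  # overlap scores in [0, 1]
```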
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-08-01
In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the carbonaceous substrate fractions was performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on calculation of the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady-state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus accumulating organisms. The rankings of the ten most sensitive parameters, established from calculations of the mean-square sensitivity measure (δ(msqr)j), indicate that the parameter sensitivities agree irrespective of whether steady-state or dynamic calibration was performed.
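A finite-difference sketch of the normalized sensitivity coefficient S(i,j) = (∂y_i/∂p_j)(p_j/y_i) and a mean-square summary measure per parameter; the exact BioWin formulation may differ, and the two-output model below is an invented stand-in:

```python
import numpy as np

def normalized_sensitivities(model, p0, rel_step=0.01):
    """Finite-difference normalized sensitivity coefficients
    S[i, j] = (dy_i/dp_j) * (p_j / y_i), evaluated at p0.
    `model` maps a parameter vector to a vector of outputs."""
    y0 = np.asarray(model(p0), dtype=float)
    S = np.empty((y0.size, len(p0)))
    for j, pj in enumerate(p0):
        p = np.array(p0, dtype=float)
        p[j] = pj * (1.0 + rel_step)
        dy = np.asarray(model(p)) - y0
        S[:, j] = (dy / (pj * rel_step)) * (pj / y0)
    return S

def mean_square_measure(S):
    """One mean-square-type measure per parameter: RMS of S over outputs."""
    return np.sqrt(np.mean(S**2, axis=0))

# Toy two-output stand-in for an activated sludge model.
model = lambda p: np.array([p[0]**0.8 * p[1], p[0] + 3.0 * p[2]])
S = normalized_sensitivities(model, [1.0, 2.0, 0.5])
print(mean_square_measure(S))  # rank parameters by this measure
```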
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
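The policy acceptability idea can be illustrated without a full MDP: sample the uncertain parameters jointly, recompute which policy is optimal in each draw, and report the fraction of draws in which the base-case policy wins. A hypothetical two-policy sketch (all names and numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-policy decision problem: expected net benefit of each
# policy as a function of two uncertain parameters (response rate, cost).
def net_benefit(policy, p_resp, cost, wtp=50_000.0):
    qaly = {"treat": 0.6 * p_resp, "watch": 0.25}[policy]
    c = {"treat": cost, "watch": 5_000.0}[policy]
    return wtp * qaly - c

# Joint parameter uncertainty: sample, then count how often the base-case
# optimal policy ('treat') stays optimal -> one point of an acceptability curve.
p_resp = rng.beta(8, 4, size=10_000)           # uncertain response rate
cost = rng.normal(12_000, 2_000, size=10_000)  # uncertain treatment cost
treat_best = net_benefit("treat", p_resp, cost) > net_benefit("watch", p_resp, cost)
print("confidence in base-case policy:", treat_best.mean())
```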
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose ((18)F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I²=81.1%) and specificity (I²=89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
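The pooled likelihood ratios follow directly from the pooled sensitivity and specificity; this one-liner arithmetic reproduces the 4.3 and 0.22 reported above:

```python
sens, spec = 0.82, 0.81           # pooled values reported above
lr_pos = sens / (1.0 - spec)      # positive likelihood ratio
lr_neg = (1.0 - sens) / spec      # negative likelihood ratio
print(round(lr_pos, 1), round(lr_neg, 2))  # -> 4.3 0.22, matching the pooled estimates
```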
ERIC Educational Resources Information Center
Bowers, Alex J.; Sprott, Ryan; Taff, Sherry A.
2013-01-01
The purpose of this study is to review the literature on the most accurate indicators of students at risk of dropping out of high school. We used Relative Operating Characteristic (ROC) analysis to compare the sensitivity and specificity of 110 dropout flags across 36 studies. Our results indicate that 1) ROC analysis provides a means to compare…
Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra
2017-12-01
The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as beverages and infant milk products, demands tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes, as well as enzymatic hydrolyzates of six different polysaccharides, with a focus on oligosaccharides. The high sensitivity of the PAD comes at the cost of stability, owing to recession of the gold electrode. Through an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of the PAD response drop, leading to improved data normalization.
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
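A rough sketch of the underlying measure: the smallest singular value of a diagonally scaled, return-difference-like matrix, with sensitivities taken by finite differences rather than the paper's analytic singular-value gradient equations. The 2x2 loop-gain matrix and its parameters are invented:

```python
import numpy as np

def scaled_min_singular_value(G, scale):
    """Smallest singular value of a scaled return-difference-like matrix
    I + D G D^-1 (diagonal scaling D), used as a relative stability measure."""
    D = np.diag(scale)
    M = np.eye(G.shape[0]) + D @ G @ np.linalg.inv(D)
    return np.linalg.svd(M, compute_uv=False).min()

def fd_gradient(G_of_p, p0, scale, h=1e-6):
    """Finite-difference gradient of the measure w.r.t. system parameters p."""
    f0 = scaled_min_singular_value(G_of_p(p0), scale)
    g = np.zeros_like(p0, dtype=float)
    for k in range(len(p0)):
        p = np.array(p0, dtype=float); p[k] += h
        g[k] = (scaled_min_singular_value(G_of_p(p), scale) - f0) / h
    return g

# Toy 2x2 loop-gain matrix depending on two parameters.
G_of_p = lambda p: np.array([[p[0], 0.3], [0.1, p[1]]])
print(fd_gradient(G_of_p, np.array([1.0, 0.8]), scale=[1.0, 2.0]))
```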
Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.
Gnambs, Timo; Kaspar, Kai
2015-12-01
In surveys, individuals tend to misreport behaviors that are in contrast to prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; particularly, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N =125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlighted the advantages of computerized survey modes for the assessment of sensitive topics.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
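Bootstrapping sensitivity results to judge ranking reliability can be sketched generically (this is not the STAR-VARS implementation): resample whatever per-sample quantities feed the sensitivity metric and count how often each factor comes out on top.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_rankings(per_sample_indices, n_boot=1000):
    """Given an (N, k) array of per-sample contributions to k sensitivity
    metrics, bootstrap the sample axis and return how often each factor
    is ranked most influential."""
    N, k = per_sample_indices.shape
    top = np.zeros(k)
    for _ in range(n_boot):
        resample = per_sample_indices[rng.integers(0, N, N)].mean(axis=0)
        top[np.argmax(resample)] += 1
    return top / n_boot

# Toy per-sample sensitivity contributions for 3 factors.
contrib = rng.normal(loc=[0.5, 0.45, 0.1], scale=0.2, size=(200, 3))
print(bootstrap_rankings(contrib))  # reliability of "factor i is #1"
```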
Al-Saleh, Ayman; Alazzoni, Ashraf; Al Shalash, Saleh; Ye, Chenglin; Mbuagbaw, Lawrence; Thabane, Lehana; Jolly, Sanjit S.
2014-01-01
Background High-sensitivity cardiac troponin assays have been adopted by many clinical centres worldwide; however, clinicians are uncertain how to interpret the results. We sought to assess the utility of these assays in diagnosing acute myocardial infarction (MI). Methods We carried out a systematic review and meta-analysis of studies comparing high-sensitivity with conventional assays of cardiac troponin levels among adults with suspected acute MI in the emergency department. We searched MEDLINE, EMBASE and Cochrane databases up to April 2013 and used bivariable random-effects modelling to obtain summary parameters for diagnostic accuracy. Results We identified 9 studies that assessed the use of high-sensitivity troponin T assays (n = 9186 patients). The summary sensitivity of these tests in diagnosing acute MI at presentation to the emergency department was estimated to be 0.94 (95% confidence interval [CI] 0.89–0.97); for conventional tests, it was 0.72 (95% CI 0.63–0.79). The summary specificity was 0.73 (95% CI 0.64–0.81) for the high-sensitivity assay compared with 0.95 (95% CI 0.93–0.97) for the conventional assay. The differences in estimates of the summary sensitivity and specificity between the high-sensitivity and conventional assays were statistically significant (p < 0.01). The area under the curve was similar for both tests carried out 3–6 hours after presentation. Three studies assessed the use of high-sensitivity troponin I assays and showed similar results. Interpretation Used at presentation to the emergency department, the high-sensitivity cardiac troponin assay has improved sensitivity, but reduced specificity, compared with the conventional troponin assay. With repeated measurements over 6 hours, the area under the curve is similar for both tests, indicating that the major advantage of the high-sensitivity test is early diagnosis. PMID:25295240
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
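One of the analytical solutions of this family, the classic Ogata-Banks solution of the 1-D equilibrium convection-dispersion equation for a continuous injection, is easy to express as an ordinary function, which is essentially what the VBA functions do. A Python sketch (not the CXTFIT/Excel code itself):

```python
import numpy as np
from scipy.special import erfc

def cde_equilibrium(x, t, v, D, c0=1.0):
    """Ogata-Banks analytical solution of the 1-D equilibrium
    convection-dispersion equation for a continuous injection:
    c/c0 = 1/2 [erfc((x-vt)/2sqrt(Dt)) + exp(vx/D) erfc((x+vt)/2sqrt(Dt))]."""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Breakthrough curve at x = 10 cm for v = 1 cm/h, D = 0.5 cm^2/h.
t = np.linspace(0.1, 30.0, 50)
print(cde_equilibrium(10.0, t, v=1.0, D=0.5)[:5])
```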
Young, Ewa; Zimerson, Erik; Bruze, Magnus; Svedman, Cecilia
2016-02-01
The results from a previous study indicated the presence of several possible sensitizers formed during oxidation of the potent sensitizer p-phenylenediamine (PPD) to which PPD-sensitized patients might react, in various patterns. To extract and analyse a yellow spot from a thin-layer chromatogram with oxidized PPD, to which 6 of 14 (43%) PPD-positive patients had reacted in a previous study, in order to identify potential sensitizer(s) and to patch test this/these substance(s) in the 14 PPD-positive patients. The yellow spot was extracted from a thin-layer chromatogram of oxidized PPD, and two substances, suspected to be allergens, were identified by analysis with gas chromatography mass spectrometry (GCMS). The 14 PPD-positive patients, who had been previously tested with the thin-layer chromatogram of oxidized PPD, participated in the investigation, and were tested with dilutions of the two substances. GCMS analysis identified 4-nitroaniline and 4,4'-azodianiline in the yellow spot. Of the 14 PPD-positive test patients, 5 (36%) reacted to 4-nitroaniline and 9 (64%) reacted to 4,4'-azodianiline. The results show that 4-nitroaniline and 4,4'-azodianiline, formed during oxidation of PPD, are potent sensitizers. PPD-sensitized patients react to a high extent to concentrations equimolar to PPD of 4-nitroaniline and 4,4'-azodianiline. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Mohammadkhani, Parvaneh; Pourshahbaz, Abbas; Kami, Maryam; Mazidi, Mahdi; Abasi, Imaneh
2016-01-01
Objective: Generalized anxiety disorder is one of the most common anxiety disorders in the general population. Several studies suggest that anxiety sensitivity is a vulnerability factor in generalized anxiety severity. However, some other studies suggest that negative repetitive thinking and experiential avoidance as response factors can explain this relationship. Therefore, this study aimed to investigate the mediating role of experiential avoidance and negative repetitive thinking in the relationship between anxiety sensitivity and generalized anxiety severity. Method: This was a cross-sectional and correlational study. A sample of 475 university students was selected through stratified sampling method. The participants completed Anxiety Sensitivity Inventory-3, Acceptance and Action Questionnaire-II, Perseverative Thinking Questionnaire, and Generalized Anxiety Disorder 7-item Scale. Data were analyzed by Pearson correlation, multiple regression analysis and path analysis. Results: The results revealed a positive relationship between anxiety sensitivity, particularly cognitive anxiety sensitivity, experiential avoidance, repetitive thinking and generalized anxiety severity. In addition, findings showed that repetitive thinking, but not experiential avoidance, fully mediated the relationship between cognitive anxiety sensitivity and generalized anxiety severity. α Level was p<0.005. Conclusion: Consistent with the trans-diagnostic hypothesis, anxiety sensitivity predicts generalized anxiety severity, but its effect is due to the generating repetitive negative thought. PMID:27928245
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
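The Monte Carlo screening logic can be sketched generically: sample parameters uniformly, score each run against observations, and rank-correlate each parameter with the objective function. The toy model and parameter names below only echo the abstract; this is not VIC or MCAT:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency as a model-skill objective."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.sin(np.linspace(0, 6, 100)) + 1.5           # stand-in "observed flow"
def toy_model(b, expn, thick2):                      # hypothetical 3-parameter model
    x = np.linspace(0, 6, 100)
    return b * np.sin(x) + 0.1 * expn + 1.4 + 0.01 * thick2

params = rng.uniform([0.5, 0.0, 0.1], [1.5, 5.0, 2.0], size=(500, 3))
scores = np.array([nse(obs, toy_model(*p)) for p in params])
for name, col in zip(["b", "exp", "thick2"], params.T):
    rho, _ = spearmanr(col, scores)
    print(f"{name}: |rho| = {abs(rho):.2f}")  # larger => more sensitive
```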
Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.
2016-01-01
Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the sensitive group is retained while the insensitive group is eliminated from further study. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings, with sample sizes ranging from 7000 to 280000. The results show that DGSAM identifies more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'-based selection, and the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.
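A generic backward-elimination loop around total-order indices (here with Jansen's estimator) conveys the flavor of iteratively removing the least important parameter. This is a sketch of the general idea only, not the authors' DGSAM; the test function is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def total_order_indices(f, bounds, n=4096):
    """Jansen's estimator of Sobol' total-order indices ST_i."""
    k = len(bounds)
    lo, hi = np.array(bounds).T
    A = rng.uniform(lo, hi, (n, k))
    B = rng.uniform(lo, hi, (n, k))
    fA = f(A)
    var = np.var(np.concatenate([fA, f(B)]))
    ST = np.empty(k)
    for i in range(k):
        ABi = A.copy(); ABi[:, i] = B[:, i]   # A with column i from B
        ST[i] = np.mean((fA - f(ABi)) ** 2) / (2.0 * var)
    return ST

def backward_eliminate(f, bounds, nominal):
    """Iteratively freeze the least-influential parameter at its nominal
    value until two parameters remain (generic loop, not DGSAM itself)."""
    free = list(range(len(bounds)))
    while len(free) > 2:
        g = lambda X: f(np.array([X[:, free.index(j)] if j in free
                                  else np.full(len(X), nominal[j])
                                  for j in range(len(bounds))]).T)
        ST = total_order_indices(g, [bounds[j] for j in free])
        free.pop(int(np.argmin(ST)))
    return free  # indices of the retained parameters

f = lambda X: X[:, 0] * X[:, 1] + 0.05 * X[:, 2] + X[:, 3] ** 2
print(backward_eliminate(f, [(0, 1)] * 4, nominal=[0.5] * 4))
```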
Analysis of Composite Panels Subjected to Thermo-Mechanical Loads
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1999-01-01
The results of a detailed study of the effect of cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to combined temperature gradient through-the-thickness combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.
NASA Astrophysics Data System (ADS)
Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui
2016-02-01
The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV)-operated pressure-reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to carry out the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing its results with data obtained from a real test-bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated by using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype, and the results reveal that the dynamic characteristics of the valve have been improved in comparison with the original valve.
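A Sobol' study of structural parameters maps naturally onto the SALib package, assuming its classic saltelli/sobol API. The pressure-response metric below is a stand-in function with invented parameter names and bounds, not the validated electro-hydraulic simulation:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in for the PSV pressure-response metric as a function of three
# structural parameters; the real model would be the validated simulation.
def pressure_metric(X):
    spring_k, orifice_d, mass = X.T
    return spring_k / mass + 50.0 * orifice_d ** 2

problem = {
    "num_vars": 3,
    "names": ["spring_k", "orifice_d", "mass"],
    "bounds": [[800, 1200], [0.5, 1.5], [0.02, 0.06]],
}
X = saltelli.sample(problem, 1024)        # (N*(2k+2), k) sampling design
Si = sobol.analyze(problem, pressure_metric(X))
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order indices
```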
Mitchell, Rebecca; Thomas, Sunethra Devika; Langlois, Neil E I
2013-10-01
Biochemical analysis of glucose and ketones in the vitreous humour obtained at post-mortem examination is representative of the levels in the blood prior to death. Elevated levels can be indicative of conditions including diabetic ketoacidosis, which can be a cause for unexpected death. A rapid screening test for such conditions can be performed during the autopsy through urinalysis using test strips (urine 'dipstick' testing). The aim of this study was to assess the utility of urinalysis testing for post-mortem detection of derangements of glucose and ketone levels. The results of vitreous biochemical analysis and urinalysis were collated from 188 forensic autopsy cases. A vitreous glucose result of above 10 mmol/L was regarded as high. When this was compared to urinalysis results it was found that any urinalysis result above negative had a sensitivity of 0.83 and a specificity of 0.93. A vitreous ketone level of above 5 mmol/L was regarded as significantly elevated; a urinalysis result above negative had a sensitivity of 1, but a specificity of 0.12. Urinalysis ('dipstick' testing) for glucose has a good sensitivity and specificity for high vitreous glucose levels, which are regarded as indicative of pathological hyperglycaemia during life. It was found that urine testing for ketones either has an excellent sensitivity with low specificity or a poor sensitivity with a good specificity; however, this finding has to be viewed in the context of uncertainty of the biochemical level of significant ketosis.
Bialosky, Joel E.; Robinson, Michael E.
2014-01-01
Background Cluster analysis can be used to identify individuals similar in profile based on response to multiple pain sensitivity measures. There are limited investigations into how empirically derived pain sensitivity subgroups influence clinical outcomes for individuals with spine pain. Objective The purposes of this study were: (1) to investigate empirically derived subgroups based on pressure and thermal pain sensitivity in individuals with spine pain and (2) to examine subgroup influence on 2-week clinical pain intensity and disability outcomes. Design A secondary analysis of data from 2 randomized trials was conducted. Methods Baseline and 2-week outcome data from 157 participants with low back pain (n=110) and neck pain (n=47) were examined. Participants completed demographic, psychological, and clinical information and were assessed using pain sensitivity protocols, including pressure (suprathreshold pressure pain) and thermal pain sensitivity (thermal heat threshold and tolerance, suprathreshold heat pain, temporal summation). A hierarchical agglomerative cluster analysis was used to create subgroups based on pain sensitivity responses. Differences in data for baseline variables, clinical pain intensity, and disability were examined. Results Three pain sensitivity cluster groups were derived: low pain sensitivity, high thermal static sensitivity, and high pressure and thermal dynamic sensitivity. There were differences in the proportion of individuals meeting a 30% change in pain intensity, where fewer individuals within the high pressure and thermal dynamic sensitivity group (adjusted odds ratio=0.3; 95% confidence interval=0.1, 0.8) achieved successful outcomes. Limitations Only 2-week outcomes are reported. Conclusions Distinct pain sensitivity cluster groups for individuals with spine pain were identified, with the high pressure and thermal dynamic sensitivity group showing worse clinical outcome for pain intensity. Future studies should aim to confirm these findings. PMID:24764070
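Hierarchical agglomerative clustering of standardized pain-sensitivity measures takes only a few lines with SciPy; the data below are synthetic stand-ins for the protocol measures named above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(4)

# Hypothetical pain-sensitivity measures per participant (columns:
# suprathreshold pressure, heat threshold, heat tolerance, temporal summation).
X = rng.normal(size=(157, 4))

# Standardize, then Ward agglomerative clustering into three subgroups,
# mirroring the three derived clusters described above.
Z = linkage(zscore(X, axis=0), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])  # subgroup sizes
```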
FABRIC FILTER MODEL SENSITIVITY ANALYSIS
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was bas...
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.
Kepes, Sven; McDaniel, Michael A
2015-01-01
Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effects sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
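One simple sensitivity analysis of a meta-analytic estimate is leave-one-out re-pooling; the sketch below uses fixed-effect inverse-variance pooling on made-up study correlations (the publication-bias methods used in the paper, such as those based on suppressed small effects, go further):

```python
import numpy as np

# Fixed-effect inverse-variance pooling with a leave-one-out check: how
# much does the pooled validity estimate move when any single study is removed?
def pooled(r, v):
    w = 1.0 / v
    return np.sum(w * r) / np.sum(w)

r = np.array([0.28, 0.22, 0.19, 0.31, 0.12])        # made-up study correlations
v = np.array([0.004, 0.010, 0.006, 0.020, 0.003])   # their sampling variances
base = pooled(r, v)
loo = [pooled(np.delete(r, i), np.delete(v, i)) for i in range(len(r))]
print(round(base, 3), [round(x, 3) for x in loo])
```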
NASA Astrophysics Data System (ADS)
Arai, Kohei
2012-07-01
Radiometric Calibration Coefficients (RCC) derived from more than 11 years of onboard and vicarious calibrations are compared, together with a cross-comparison to the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is also conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of the RCC degradation is clarified through the FTA. Test-site dependency of the vicarious calibration is quite obvious, because the vicarious-calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results from the cross-calibration with MODIS confirm the significant sensitivity of vicarious calibration to surface reflectance measurements.
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights on the local mechanisms which generate observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha
2013-01-01
Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, longer follow-up periods, larger sample sizes, and multicentres/multicountries for more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167
Influence of ECG sampling rate in fetal heart rate variability analysis.
De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R
2017-07-01
Fetal hypoxia results in a fetal blood acidosis (pH < 7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies have demonstrated significant changes in heart rate variability in hypoxic fetuses, so fetal heart rate variability analysis could be of valuable help for fetal hypoxia prediction. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate. Indeed, a low sampling rate can induce variability in heart beat detection, which alters the heart rate variability estimation. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method is less sensitive to ECG sampling frequency changes than common heart rate variability analysis methods. We then compared the results of this new heart rate variability analysis method at two different sampling frequencies (250 Hz and 1000 Hz).
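The sampling-rate effect is easy to demonstrate: quantize R-peak times to the sampling grid and compare a variability index such as RMSSD. A toy illustration of the premise (not the paper's proposed method; rhythm parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    return np.sqrt(np.mean(np.diff(rr) ** 2))

true_rr = 0.42 + 0.01 * rng.standard_normal(500)  # ~143 bpm fetal rhythm, s
peaks = np.cumsum(true_rr)                        # true R-peak times

for fs in (250, 1000):
    quantized = np.round(peaks * fs) / fs         # peak detection on fs grid
    rr = np.diff(quantized)                       # RR intervals at this rate
    print(fs, "Hz -> RMSSD =", round(1e3 * rmssd(rr), 3), "ms")
```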
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate have considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
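A generic TEA uncertainty sketch (not the paper's model): annualize capital with the standard capital recovery factor CRF = r(1+r)^n / ((1+r)^n - 1), then propagate uncertain feedstock price and interest rate to the unit cost by Monte Carlo. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

n_years, capital, output_t = 20, 12e6, 50_000       # plant life, $, t/yr
feed_need = 1.05                                     # t feedstock per t fuel

r = rng.uniform(0.04, 0.12, 10_000)                  # uncertain interest rate
feed_price = rng.normal(650.0, 80.0, 10_000)         # uncertain $/t feedstock
opex = 1.5e6                                         # $/yr, held fixed here

# Capital recovery factor annualizes the up-front investment.
crf = r * (1 + r) ** n_years / ((1 + r) ** n_years - 1)
unit_cost = (capital * crf + opex) / output_t + feed_need * feed_price
print("UC mean = %.0f $/t, 5-95%% = %.0f-%.0f" %
      (unit_cost.mean(), *np.percentile(unit_cost, [5, 95])))
```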
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time-delays.
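For a flavor of DDE parameter sensitivity, a brute-force alternative to the direct method is finite differencing over two solver runs. The toy delay equation below uses a fixed-step explicit Euler scheme with a constant history; it only illustrates the problem setting, not the paper's direct-decoupled algorithm:

```python
import numpy as np

# Toy delay differential equation y'(t) = -k * y(t - tau),
# with constant history y(t <= 0) = 1.
def solve_dde(k, tau=1.0, t_end=10.0, dt=1e-3):
    lag = int(round(tau / dt))
    y = np.ones(int(t_end / dt) + 1)
    for i in range(len(y) - 1):
        y_delayed = y[i - lag] if i >= lag else 1.0   # constant history
        y[i + 1] = y[i] - dt * k * y_delayed          # explicit Euler step
    return y

k, h = 0.8, 1e-4
dy_dk = (solve_dde(k + h) - solve_dde(k - h)) / (2 * h)  # central difference
print("dy/dk at t_end:", dy_dk[-1])
```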
The art of maturity modeling. Part 2. Alternative models and sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waples, D.W.; Suizu, Masahiro; Kamata, Hiromi
1992-01-01
The sensitivity of exploration decisions to variations in several input parameters for maturity modeling was examined for the MITI Rumoi well, Hokkaido, Japan. Decisions were almost completely insensitive to uncertainties about formation age and erosional removal across some unconformities, but were more sensitive to changes in removal during unconformities which occurred near maximum paleotemperatures. Exploration decisions were not very sensitive to the choice of a particular kinetic model for hydrocarbon generation. Uncertainties in kerogen type and the kinetics of different kerogen types are more serious than differences among the various kinetic models. Results of modeling using the TTI method were unsatisfactory. Thermal history and timing and amount of hydrocarbon generation estimated or calculated using the TTI method were greatly different from those obtained using a purely kinetic model. The authors strongly recommend use of the kinetic Ro method instead of the TTI method. If they had lacked measured Ro data, subsurface temperature data, or both, their confidence in the modeling results would have been sharply reduced. Conceptual models for predicting heat flow and thermal conductivity are simply too weak at present to allow one to carry out highly meaningful modeling unless the input is constrained by measured data. Maturity modeling therefore requires the use of more, not fewer, measured temperature and maturity data. The use of sensitivity analysis in maturity modeling is very important for understanding the geologic system, for knowing what level of confidence to place on the results, and for determining what new types of data would be most necessary to improve confidence. Sensitivity analysis can be carried out easily using a rapid, interactive maturity-modeling program.
Zhang, Lifan; Shi, Xiaochun; Zhang, Yueqiu; Zhang, Yao; Huo, Feifei; Zhou, Baotong; Deng, Guohua; Liu, Xiaoqing
2017-08-10
T-SPOT.TB does not provide a perfect diagnosis of active tuberculosis (ATB), and several factors may influence its results. We conducted this study to evaluate possible factors associated with the sensitivity and specificity of T-SPOT.TB, and the diagnostic parameters under varied conditions. Patients with suspected ATB were enrolled prospectively. Factors influencing the sensitivity and specificity of T-SPOT.TB were evaluated using logistic regression models. Sensitivity, specificity, predictive values (PV), and likelihood ratios (LR) were calculated with consideration of relevant factors. Of the 865 participants, 205 (23.7%) had ATB, including 58 (28.3%) with microbiologically confirmed TB and 147 (71.7%) with clinically diagnosed TB; 615 (71.7%) were non-TB. 45 (5.2%) cases were clinically indeterminate and excluded from the final analysis. In multivariate analysis, serous effusion was the only independent risk factor related to lower sensitivity (OR = 0.39, 95% CI: 0.18-0.81) among patients with ATB. Among non-TB patients, age, TB history, immunosuppressive agent/glucocorticoid treatment and lymphocyte count were the independent risk factors related to the specificity of T-SPOT.TB. The sensitivity, specificity, PV+, PV-, LR+ and LR- of T-SPOT.TB for the diagnosis of ATB were 78.5%, 74.1%, 50.3%, 91.2%, 3.0 and 0.3, respectively. This study suggests that factors influencing the sensitivity and specificity of T-SPOT.TB should be considered when interpreting T-SPOT.TB results.
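The influencing-factor analysis amounts to a logistic regression of the test outcome on candidate covariates, with odds ratios obtained by exponentiating the coefficients. A sketch on synthetic data (covariate names follow the abstract; the statsmodels Logit API is assumed):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Synthetic stand-in data: regress a false-negative T-SPOT.TB indicator
# on two covariates named after the abstract's risk factors.
n = 205
X = np.column_stack([
    rng.integers(0, 2, n),            # serous effusion (0/1)
    rng.normal(1.5, 0.5, n),          # lymphocyte count (10^9/L)
])
logit = -2.0 + 0.9 * X[:, 0] - 0.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

res = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(res.params[1:]))   # odds ratios for the two covariates
```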
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, a central composite design is used to set up a series of experiments over the diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted with respect to the above-mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water; moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on the Cu-water Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that, for Cu-water, the sensitivity of the dimensionless pressure drop to tube length is independent of the diameter ratio at different Reynolds numbers.
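Fitting a second-order response surface and reading local sensitivities off its partial derivatives can be sketched with ordinary least squares on toy data for two coded factors (the factor names and coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Second-order (quadratic) response-surface fit by least squares for two
# coded factors, mimicking an RSM step after CCD/CFD runs (toy data).
def quad_design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 * x2, x1 ** 2, x2 ** 2])

x1 = rng.uniform(-1, 1, 30)            # e.g. coded Reynolds number
x2 = rng.uniform(-1, 1, 30)            # e.g. coded solid volume fraction
y = 4.0 + 1.2 * x1 - 0.4 * x2 + 0.3 * x1 * x2 + 0.1 * rng.standard_normal(30)

beta, *_ = np.linalg.lstsq(quad_design(x1, x2), y, rcond=None)
# Local sensitivities are the partial derivatives of the fitted surface:
# dy/dx1 = b1 + b3*x2 + 2*b4*x1, evaluated here at the design centre (0, 0).
print("dNu/dx1 at centre:", round(beta[1], 2), " dNu/dx2:", round(beta[2], 2))
```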
The Miniaturization and Reproducibility of the Cylinder Expansion Test
2011-10-01
A comparison between the new miniaturized test and the standard one-inch test was performed using the liquid explosive PLX (nitromethane sensitized with ethylene diamine). The resulting velocity and displacement profiles were obtained from the streak records... A measurement systems analysis was performed on both the half- and one-inch tests using PLX (nitromethane sensitized with 5% (by wt...
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models often make the calibration of all model parameters, or the estimation of their full uncertainty, computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed sensitivity results. This is one step towards reliable and transferable published sensitivity results.
Quantitative relationship between the local lymph node assay and human skin sensitization assays.
Schneider, K; Akkan, Z
2004-06-01
The local lymph node assay (LLNA) is a new test method which allows for the quantitative assessment of sensitizing potency in the mouse. Here, we investigate the quantitative correlation between results from the LLNA and two human sensitization tests--specifically, human repeat insult patch tests (HRIPTs) and human maximization tests (HMTs). Data for 57 substances were evaluated, of which 46 showed skin sensitizing properties in human tests, whereas 11 yielded negative results in humans. For better comparability data from mouse and human tests were transformed to applied doses per skin area, which ranged over four orders of magnitude for the substances considered. Regression analysis for the 46 human sensitizing substances revealed a significant positive correlation between the LLNA and human tests. The correlation was better between LLNA and HRIPT data (n=23; r=0.77) than between LLNA and HMT data (n=38; r=0.65). The observed scattering of data points is related to various uncertainties, in part associated with insufficiencies of data from older HMT studies. Predominantly negative results in the LLNA for another 11 substances which showed no skin sensitizing activity in human maximization tests further corroborate the correspondence between LLNA and human tests. Based on this analysis, the LLNA can be considered a reliable basis for relative potency assessments for skin sensitizers. Proposals are made for the regulatory exploitation of the LLNA: four potency groups can be established, and assignment of substances to these groups according to the outcome of the LLNA can be used to characterize skin sensitizing potency in substance-specific assessments. Moreover, based on these potency groups, a more adequate consideration of sensitizing substances in preparations becomes possible. It is proposed to replace the current single concentration limit for skin sensitizers in preparations, which leads to an all or nothing classification of a preparation as sensitizing to skin ("R43") in the European Union, by differentiated concentration limits derived from the limits for the four potency groups.
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background: A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is typically modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results: We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used for computing the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion: By combining the accuracy we show with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
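As an illustration of the underlying idea (not the paper's decoupled algorithm), the direct method augments the ODE state with its parameter sensitivities and integrates both together. The sketch below, assuming SciPy, does this for a logistic growth model; in the decoupled variant the model and sensitivity systems are solved in separate stages sharing the model-determined step size.

```python
# Direct-method sketch: integrate x(t) together with s_r = dx/dr and
# s_K = dx/dK for the logistic model dx/dt = r*x*(1 - x/K).
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 10.0  # illustrative parameter values

def augmented(t, y):
    x, s_r, s_K = y
    dfdx = r * (1 - 2 * x / K)
    return [r * x * (1 - x / K),            # model equation
            dfdx * s_r + x * (1 - x / K),   # ds_r/dt = df/dx*s_r + df/dr
            dfdx * s_K + r * x**2 / K**2]   # ds_K/dt = df/dx*s_K + df/dK

sol = solve_ivp(augmented, (0, 20), [0.1, 0.0, 0.0])
print("dx/dr, dx/dK at t=20:", sol.y[1, -1], sol.y[2, -1])
```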
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, in both the calibration and validation processes. Additionally, the parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. TP output was most sensitive to the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Calibration was performed on the basis of these sensitive parameters. TN loading produced satisfactory results in both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham
2013-01-01
Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs; calculation of a range of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities in simulating wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs: the NN-aided Equivalent Plate Analysis, with training of the neural networks in several cases of design spaces, applicable to the actual design of complex wings (Appendix F).
NASA Astrophysics Data System (ADS)
Suarez, Berta; Felez, Jesus; Maroto, Joaquin; Rodriguez, Pablo
2013-02-01
A sensitivity analysis has been performed to assess the influence of the inertial properties of railway vehicles on their dynamic behaviour. To do this, 216 dynamic simulations were performed, modifying, one at a time, the masses, moments of inertia and heights of the centre of gravity of the carbody, the bogie and the wheelset. Three values were assigned to each parameter, corresponding to the 10th, 50th and 90th percentiles of a data set stored in a database of railway vehicles. After processing the results of these simulations, the analysed parameters were ranked by increasing influence. It was also determined which of these parameters could be estimated with a lower degree of accuracy in future simulations without appreciably affecting the simulation results. In general terms, it was concluded that the most sensitive inertial properties are the mass and the vertical moment of inertia, and the least sensitive ones are the longitudinal and lateral moments of inertia.
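A sketch of the one-at-a-time design described above, with hypothetical parameter names and percentile values (the study's database values are not given in the abstract): each inertial property is varied over its three percentile values while the others stay at their median.

```python
# One-at-a-time (OAT) design: vary each parameter over its p10/p50/p90
# values with all other parameters held at the median. Values are made up.
percentiles = {
    "carbody_mass_kg":   (28e3, 34e3, 40e3),
    "bogie_mass_kg":     (2.2e3, 3.0e3, 3.8e3),
    "wheelset_Izz_kgm2": (560.0, 700.0, 840.0),
}

baseline = {name: vals[1] for name, vals in percentiles.items()}
runs = []
for name, vals in percentiles.items():
    for v in vals:
        runs.append(dict(baseline, **{name: v}))

print(len(runs), "simulation cases")  # 3 values per varied parameter
```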
Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com
2015-04-07
We investigated spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and finite element method-based simulation, we revealed that the dispersion curve of the SSPP is suppressed as the filling refractive index increases, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for various refractive indexes demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range. Meanwhile, higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for second-order-mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
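For continuous-time linear(ized) models dx/dt = Ax, the indices analysed above can be computed directly: reactivity is the largest eigenvalue of the symmetric part of A, and the amplification envelope is the matrix 2-norm of exp(At). A minimal sketch with an illustrative matrix, assuming NumPy/SciPy:

```python
# Reactivity and amplification envelope for dx/dt = A x.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 3.0],
              [ 0.0, -2.0]])  # stable (eigenvalues -1, -2) but reactive

reactivity = np.linalg.eigvalsh((A + A.T) / 2).max()
envelope = [np.linalg.norm(expm(A * t), 2) for t in np.linspace(0, 5, 51)]

print(f"reactivity = {reactivity:.3f}")           # > 0: perturbations amplify
print(f"max amplification = {max(envelope):.3f}")
```

The sensitivities of these indices to model parameters, the paper's main contribution, require the matrix-calculus derivatives it derives and are not reproduced here.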
An empirical comparison of a dynamic software testability metric to static cyclomatic complexity
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.
1993-01-01
This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
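The space-filling sampling step described above can be sketched with SciPy's quasi-Monte Carlo module; the dimension matches the study's 46 parameters, but the bounds and sample size below are placeholders.

```python
# Latin Hypercube Sampling of a 46-dimensional parameter space.
from scipy.stats import qmc

n_params, n_samples = 46, 2000
sampler = qmc.LatinHypercube(d=n_params, seed=42)
unit_sample = sampler.random(n=n_samples)      # points in the unit hypercube

l_bounds = [0.0] * n_params                    # replace with real VIC ranges
u_bounds = [1.0] * n_params
design = qmc.scale(unit_sample, l_bounds, u_bounds)
print(design.shape)                            # (2000, 46)
```

Each design row would then be run through (an emulator of) the model, and variance-based indices computed from the outputs.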
Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi
2013-11-01
Studies on the sensitivity and specificity of the Binax NOW Streptococcus pneumoniae urinary antigen test (index test) show considerable variance in results. We included studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test, using unconcentrated urine, to identify S. pneumoniae infection in adults with pneumonia. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities: one evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'), and the other evaluated using both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'), using a fixed model for meta-analysis. We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98) without heterogeneity or publication bias. We attempted a meta-analysis of the 10 studies involving 1916 patients to estimate specificity (unknown and other), but it remained unclear due to moderate heterogeneity and possible publication bias. In our meta-analysis, the sensitivity of the index test was moderate and the specificity (other) was high; however, the specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
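A sketch of the kind of fixed-effect pooling reported above, on the logit scale with inverse-variance weights; the study counts below are illustrative, not the paper's data.

```python
# Fixed-effect pooled sensitivity: inverse-variance weighting of logits.
import numpy as np

studies = [(30, 40), (55, 70), (22, 30), (80, 110)]  # (true pos., diseased)

logits, weights = [], []
for tp, n in studies:
    p = (tp + 0.5) / (n + 1.0)                 # continuity-corrected
    var = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)  # variance of the logit
    logits.append(np.log(p / (1 - p)))
    weights.append(1 / var)

pooled_logit = np.average(logits, weights=weights)
print(f"pooled sensitivity = {1 / (1 + np.exp(-pooled_logit)):.3f}")
```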
Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P
2017-12-01
Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
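The core variogram idea can be sketched in one dimension: for a factor x, gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] summarises output variability at perturbation scale h. The toy model and sampling below are illustrative only; the actual framework works in the full factor space with star-based sampling.

```python
# Directional variogram of a toy model response along one factor.
import numpy as np

rng = np.random.default_rng(1)
model = lambda x: np.sin(6 * x) + 0.3 * x

x = np.sort(rng.uniform(0, 1, 500))
y = model(x)

def variogram(x, y, h, tol=0.005):
    d = np.abs(x[:, None] - x[None, :])
    pairs = (d > h - tol) & (d < h + tol)
    return 0.5 * np.mean((y[:, None] - y[None, :])[pairs] ** 2)

for h in (0.02, 0.1, 0.3):
    print(f"gamma({h}) = {variogram(x, y, h):.4f}")
```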
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Yang, Bo; Hu, Di; Wu, Lei
2016-01-01
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of the above frequencies exhibit a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor for acceleration is 12.35 Hz/g, the scale factor for angular velocity is 0.404 nm/deg/s and the sensitivity to air flow is 1.075 Hz/(m/s)2, which verifies the multifunctional sensing characteristics of the hair sensor. Besides, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to air flow and a II-shaped hair post can increase the sensitivity to acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the influence of temperature on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post is significantly beneficial for improving the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive simulation and analysis. PMID:27399716
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study, based upon the continuum approach, for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then taken to account for changes in member length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.
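For reference, the discrete matrix counterpart of this kind of eigenvalue sensitivity (the paper derives continuum-form equivalents via material derivatives of the variational equation) is the standard result for the generalized eigenproblem:

```latex
% For K\phi = \lambda M\phi with mass normalization \phi^T M \phi = 1:
\frac{\partial \lambda}{\partial p}
  = \phi^{T}\left(\frac{\partial K}{\partial p}
    - \lambda\,\frac{\partial M}{\partial p}\right)\phi
```

Here p is a design parameter (e.g., a joint location), and the domain and boundary methods differ in how the derivatives of the stiffness and mass operators are evaluated.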
Stochastic sensitivity measure for mistuned high-performance turbines
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe
1992-01-01
A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, Michel, E-mail: mlalonde15@rogers.com; Wassenaar, Richard; Wells, R. Glenn
2014-07-15
Purpose: Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome, as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Methods: About 49 patients (N = 27 with ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from the SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Results: Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal chance ROC AUC = 0.50) with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). Conclusions: A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
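A sketch of the clustering step, assuming scikit-learn and synthetic time-activity curves; the study's sampling (568 TACs per patient) and its normal-average variant are not reproduced.

```python
# K-means clustering of synthetic left-ventricular time-activity curves;
# the minority-cluster size can serve as a crude dyssynchrony score.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 16)                            # frames per cardiac cycle
normal = 1 + 0.5 * np.sin(2 * np.pi * t)
delayed = 1 + 0.5 * np.sin(2 * np.pi * (t - 0.15))   # phase-delayed wall
tacs = np.vstack([normal + 0.05 * rng.standard_normal((400, t.size)),
                  delayed + 0.05 * rng.standard_normal((168, t.size))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tacs)
print("cluster sizes:", np.bincount(labels))
```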
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples, thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities, to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with an increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty, and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
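The likelihood issue can be made concrete for a single small study: a Wald interval on the logit scale (normal approximation) differs visibly from the exact Clopper-Pearson interval implied by the binomial likelihood. Counts below are illustrative.

```python
# Normal-approximation vs. exact binomial interval for one study's sensitivity.
import numpy as np
from scipy.stats import beta

tp, n = 18, 20                               # 18 of 20 diseased test positive
p = tp / n

logit = np.log(p / (1 - p))
se = np.sqrt(1 / tp + 1 / (n - tp))          # delta-method standard error
wald = 1 / (1 + np.exp(-(logit + np.array([-1.96, 1.96]) * se)))

exact = (beta.ppf(0.025, tp, n - tp + 1),    # Clopper-Pearson lower bound
         beta.ppf(0.975, tp + 1, n - tp))    # Clopper-Pearson upper bound

print("Wald via logit:", wald.round(3))
print("Clopper-Pearson:", np.round(exact, 3))
```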
Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L
2005-10-01
To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates > or =95%, and 13 of 19 (68%) studies reported > or =95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.
Performance of the dipstick screening test as a predictor of negative urine culture
Marques, Alexandre Gimenes; Doi, André Mario; Pasternak, Jacyr; Damascena, Márcio dos Santos; França, Carolina Nunes; Martino, Marinês Dalla Valle
2017-01-01
Objective: To investigate whether the urine dipstick screening test can be used to predict urine culture results. Methods: A retrospective study conducted between January and December 2014 based on data from 8,587 patients with a medical order for urine dipstick test, urine sediment analysis and urine culture. Sensitivity, specificity, positive and negative predictive values were determined and ROC curve analysis was performed. Results: The percentage of positive cultures was 17.5%. Nitrite had 28% sensitivity and 99% specificity, with positive and negative predictive values of 89% and 87%, respectively. Leukocyte esterase had 79% sensitivity and 84% specificity, with positive and negative predictive values of 51% and 95%, respectively. The combination of positive nitrite or positive leukocyte esterase tests had 85% sensitivity and 84% specificity, with positive and negative predictive values of 53% and 96%, respectively. Positive urinary sediment (more than ten leukocytes per microliter) had 92% sensitivity and 71% specificity, with positive and negative predictive values of 40% and 98%, respectively. The combination of a positive nitrite test and positive urinary sediment had 82% sensitivity and 99% specificity, with positive and negative predictive values of 91% and 98%, respectively. The combination of positive nitrite or leukocyte esterase tests and positive urinary sediment had the highest sensitivity (94%) and specificity (84%), with positive and negative predictive values of 58% and 99%, respectively. Based on ROC curve analysis, the best indicator of a positive urine culture was the combination of positive leukocyte esterase or nitrite tests and positive urinary sediment, followed by positive leukocyte and nitrite tests, positive urinary sediment alone, positive leukocyte esterase test alone, positive nitrite test alone and, finally, the combination of positive nitrite test and positive urinary sediment (AUC: 0.845, 0.844, 0.817, 0.814, 0.635 and 0.626, respectively). Conclusion: A negative urine culture can be predicted by negative dipstick test results. Therefore, this test may be a reliable predictor of negative urine culture. PMID:28444086
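For reference, the reported quantities all derive from a 2×2 table; a minimal sketch with illustrative counts:

```python
# Screening-test metrics from a 2x2 table (counts are illustrative).
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(screening_metrics(tp=420, fp=320, fn=27, tn=3400))
```

A high NPV, as found in the study, is exactly what makes a negative dipstick useful for ruling out the need for culture.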
Labanca, Ludimila; Alves, Cláudia Regina Lindgren; Bragança, Lidia Lourenço Cunha; Dorim, Diego Dias Ramos; Alvim, Cristina Gonçalves; Lemos, Stela Maris Aguiar
2015-01-01
To establish cutoff points for the analysis of the Behavior Observation Form (BOF) in children aged 2 to 23 months, and to evaluate the sensitivity and specificity by age group and domain (Emission, Reception, and Cognitive Aspects of Language). The sample consisted of 752 children who underwent the BOF. Each child was classified as having language development appropriate for age or as being at possible risk of language impairment. Performance Indicators (PI) were calculated for each domain, as well as the overall PI across all domains. The values for sensitivity and specificity were also calculated. The cutoff points for possible risk of language impairment for each domain and each age group were obtained using the receiver operating characteristic (ROC) curve. The results of the study revealed that one-third of the assessed children are at risk of language impairment in the first two years of life. The analysis of the BOF showed high sensitivity (>90%) in all categories and in all age groups; however, the chance of false-positive results was higher than 20% for the majority of aspects evaluated. It was possible to establish the cutoff points for all categories and age groups with good correlation between sensitivity and specificity, except for the age group of 2 to 6 months. This study provides important contributions to the discussion on the evaluation of language development in children younger than 2 years.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem for WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
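As a loose illustration of the base concept (not the paper's diameter-sensitive formulation, which adds diameter-dependent weighting), a Shannon-style entropy over pipe-flow fractions rewards uniformly distributed flows:

```python
# Simplified flow entropy: Shannon entropy of pipe-flow fractions.
import numpy as np

def flow_entropy(pipe_flows):
    q = np.asarray(pipe_flows, dtype=float)
    p = q / q.sum()
    return -np.sum(p * np.log(p))

print(flow_entropy([10, 10, 10, 10]))  # uniform flows: higher entropy
print(flow_entropy([37, 1, 1, 1]))     # one dominant path: lower entropy
```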
Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi
2017-07-28
Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for high-throughput and simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection across a wide range of compounds. Additionally, we also showed the basic concept for determining the optimum values of the MS parameters, focusing on MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides with a wide range of polarity (log Pow from -4.21 to 7.70) and pKa (acidic, neutral and basic) were analyzed. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity or optimum MS parameters for MS detection between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. As a result, we confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the high sensitivity also increases the number of compounds that can be detected with good repeatability in real sample analysis. This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds that requires high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters
Ren, Luchuan; Tian, Jianwei; Hong, Mingli (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China)
It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas derive partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method on the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. For ranking the importance order of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle and slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among these parameters, by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and dip angle; the interaction effects between the sensitive parameters are pronounced at specific sites in the offshore area; and the importance order in generating uncertainties of the maximum tsunami wave heights differs for the same group of parameters at different specific sites in the offshore area. These results are helpful for deeply understanding the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
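A minimal sketch of the Morris screening step named above, with the tsunami model replaced by a toy function; mu* ranks importance, while sigma flags nonlinearity or interactions.

```python
# Morris-style elementary effects from random base points (toy model).
import numpy as np

rng = np.random.default_rng(3)
model = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.1 * x[0] * x[2]

k, reps, delta = 3, 30, 0.1              # factors, repetitions, step size
ee = np.empty((reps, k))
for j in range(reps):
    base = rng.uniform(0, 1 - delta, k)
    f0 = model(base)
    for i in range(k):
        stepped = base.copy()
        stepped[i] += delta
        ee[j, i] = (model(stepped) - f0) / delta

print("mu*   =", np.abs(ee).mean(axis=0).round(3))
print("sigma =", ee.std(axis=0).round(3))
```

The extended FAST step would then quantify main and interaction effects from a frequency decomposition of the output variance.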
A retrospective analysis of preoperative staging modalities for oral squamous cell carcinoma.
Kähling, Ch; Langguth, T; Roller, F; Kroll, T; Krombach, G; Knitschke, M; Streckbein, Ph; Howaldt, H P; Wilbrand, J-F
2016-12-01
An accurate preoperative assessment of cervical lymph node status is a prerequisite for individually tailored cancer therapies in patients with oral squamous cell carcinoma. The detection of malignant spread and its treatment crucially influence the prognosis. The aim of the present study was to analyze the different staging modalities used among patients with a diagnosis of primary oral squamous cell carcinoma between 2008 and 2015. An analysis of preoperative staging findings, collected by clinical palpation, ultrasound, and computed tomography (CT), was performed. The results obtained were compared with the results of the final histopathological findings of the neck dissection specimens. A statistical analysis using McNemar's test was performed. The sensitivity of CT for the detection of malignant cervical tumor spread was 74.5%. The ultrasound obtained a sensitivity of 60.8%. Both CT and ultrasound demonstrated significantly enhanced sensitivity compared to the clinical palpation with a sensitivity of 37.1%. No significant difference was observed between CT and ultrasound. A combination of different staging modalities increased the sensitivity significantly compared with ultrasound staging alone. No significant difference in sensitivity was found between the combined use of different staging modalities and CT staging alone. The highest sensitivity, of 80.0%, was obtained by a combination of all three staging modalities: clinical palpation, ultrasound and CT. The present study indicates that CT has an essential role in the preoperative staging of patients with oral squamous cell carcinoma. Its use not only significantly increases the sensitivity of cervical lymph node metastasis detection but also offers a preoperative assessment of local tumor spread and resection borders. An additional non-invasive cervical lymph node examination increases the sensitivity of the tumor staging process and reduces the risk of occult metastasis. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
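The paired comparison behind these significance statements can be sketched with an exact McNemar test on the discordant pairs; counts below are illustrative, not the study's data. SciPy's binomtest (SciPy >= 1.7) is assumed.

```python
# Exact McNemar test: among patients where the two modalities disagree,
# is CT right more often than palpation?
from scipy.stats import binomtest

ct_only, palpation_only = 34, 6          # discordant-pair counts (made up)
result = binomtest(ct_only, ct_only + palpation_only, p=0.5)
print(f"exact McNemar p-value: {result.pvalue:.4f}")
```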
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
NASA Astrophysics Data System (ADS)
Pollard, Thomas B
Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications for devices include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates. For this reason such devices are focused on in this work; emphasis placed on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding structure materials/geometries; and use of new piezoelectric materials. On packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. In this work novel numerical models are theoretically developed and implemented to study propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using developed simulation tools that consider finite-thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively; yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity using uniform-electrode and shear-horizontal mode configurations on potassium-niobate, langasite, and quartz substrates. Optimum configurations are determined yielding maximum sensitivity. Results show mode propagation-loss and sensitivity to viscosity are correlated by a factor independent of substrate material. The analysis is useful for designing devices meeting sensitivity and signal level requirements. A novel, rapid and precise microfluidic chamber alignment/bonding method was developed for SAW platforms. The package is shown to have little effect on device performance and permits simple macrofluidic interfacing. Lastly, prototypes were designed, fabricated, and tested for viscosity and biosensor applications; results show ability to detect as low as 1% glycerol in water and surface-bound DNA crosslinking.
SPR based hybrid electro-optic biosensor for β-lactam antibiotics determination in water
NASA Astrophysics Data System (ADS)
Galatus, Ramona; Feier, Bogdan; Cristea, Cecilia; Cennamo, Nunzio; Zeni, Luigi
2017-09-01
The present work aims to provide a hybrid platform capable of complementary and sensitive detection of β-lactam antibiotics, ampicillin in particular. The use of an aptamer specific to ampicillin assures good selectivity and sensitivity for the detection of ampicillin from different matrices. This new approach is intended for a portable, remote-sensing platform based on a low-cost, small-size and low-power-consumption solution. The simple experimental hybrid platform integrates the results from the D-shaped surface plasmon resonance plastic optical fiber (SPR-POF) sensor and from the electrochemical (bio)sensor for the analysis of ampicillin, delivering sensitive and reliable results. The SPR-POF, already used in many previous applications, is embedded in a new experimental setup with fluorescent fiber emitters, for broadband wavelength analysis, low power consumption and low heating of the sensing platform.
Quadrant photodetector sensitivity.
Manojlović, Lazo M
2011-07-10
A quantitative theoretical analysis of the quadrant photodetector (QPD) sensitivity in position measurement is presented. A Gaussian light-spot irradiance distribution on the QPD surface was assumed, to meet most real-life applications of this sensor. As a result of the mathematical treatment of the problem, we obtained, in closed form, the sensitivity function versus the ratio of the light-spot 1/e radius to the QPD radius. The obtained result is valid for the full range of this ratio. To check the influence of the finite light-spot radius on the interaxis cross talk and linearity, we also performed a mathematical analysis to quantitatively measure these types of errors. An optimal range of the ratio of light-spot radius to QPD radius has been found that simultaneously achieves low interaxis cross talk and high linearity of the sensor. © 2011 Optical Society of America
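A numeric sketch of the quantity analysed in the paper: the normalised x-position signal of a QPD under a Gaussian spot, integrated on a grid over a circular detector, and its slope (the sensitivity) at the centre. The grid resolution and radii below are arbitrary choices, not the paper's values.

```python
# QPD x-signal for a Gaussian spot and its sensitivity at zero displacement.
import numpy as np

R, w = 1.0, 0.5                     # detector radius, 1/e spot radius
x = y = np.linspace(-R, R, 801)
X, Y = np.meshgrid(x, y)
inside = X**2 + Y**2 <= R**2        # circular detector aperture

def x_signal(dx):
    I = np.exp(-((X - dx)**2 + Y**2) / w**2) * inside
    right, left = I[:, x > 0].sum(), I[:, x < 0].sum()
    return (right - left) / I.sum()

h = 1e-3
print("sensitivity at centre:", (x_signal(h) - x_signal(-h)) / (2 * h))
```

Sweeping w/R in such a model reproduces the trade-off the paper treats analytically: small spots give high central sensitivity but a narrow linear range.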
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process of EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations in the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte methods have not been fully explored, and reference values are not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected, and EBC analysis can hopefully be used in clinical practice, in both the diagnosis and the longitudinal follow-up of patients, resulting in better disease outcomes. PMID:24266297
A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia
NASA Astrophysics Data System (ADS)
Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM
2017-07-01
Conventionally, electricity in hospital buildings is supplied by the utility grid, which uses a fuel mix including coal and gas. With advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes economic analysis based on the Levelized Cost of Energy (LCOE) and the Total Net Present Cost (TNPC) with regard to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which provides optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are also discussed, and the impact of the grid-connected PV system is thereby evaluated. In addition, the benefit from the Net Metering (NeM) mechanism is also discussed.
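For orientation, the LCOE figure HOMER reports is total discounted cost divided by total discounted energy over the project lifetime; a minimal sketch with illustrative inputs:

```python
# Levelized cost of energy (LCOE) from discounted costs and energy.
def lcoe(capex, annual_opex, annual_energy_kwh, rate, years):
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    cost = capex + annual_opex * sum(disc)
    energy = annual_energy_kwh * sum(disc)
    return cost / energy

print(f"LCOE = {lcoe(120_000, 1_500, 160_000, 0.06, 21):.4f} $/kWh")
```

Sensitivity analysis then amounts to re-evaluating such figures over ranges of the interest rate and other inputs.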
Pleshakova, Tatyana O; Malsagova, Kristina A; Kaysheva, Anna L; Kopylov, Arthur T; Tatur, Vadim Yu; Ziborov, Vadim S; Kanashenko, Sergey L; Galiullin, Rafael A; Ivanov, Yuri D
2017-08-01
We report here the highly sensitive detection of protein in solution at concentrations from 10^-15 to 10^-18 M using the combination of atomic force microscopy (AFM) and mass spectrometry. Biospecific detection of biotinylated bovine serum albumin was carried out by fishing out the protein onto the surface of AFM chips with immobilized avidin, which determined the specificity of the analysis. Electrical stimulation was applied to enhance the fishing efficiency. A high sensitivity of detection was achieved by application of nanosecond electric pulses to highly oriented pyrolytic graphite placed under the AFM chip. A peristaltic pump-based flow system, which is widely used in routine bioanalytical assays, was employed throughout the analysis. These results hold promise for the development of highly sensitive protein detection methods using nanosensor devices.
Spacecraft design sensitivity for a disaster warning satellite system
NASA Technical Reports Server (NTRS)
Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.
1977-01-01
A disaster warning satellite (DWS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establishing the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background: A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose: The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design: The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample: Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis: STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results: STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions: Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted κ) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when upward or downward parameter alterations produce a major change in the Ecoclimatic Index, the effect on the species depends on the suitability categories and modelling regions selected. Two parameters showed the greatest sensitivity, dependent on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users and to improve management by identifying the climatic variables to which the model is most sensitive.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is termed partial because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
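A minimal sketch of the PRCC computation described above, with toy data standing in for the IMM condition counts and outcomes; the ranking, partial regression, and residual-correlation steps follow the definition in the abstract.

```python
# Hedged sketch: PRCC = correlation of the residuals of rank(x_i) and
# rank(y) after regressing out the ranks of all the other inputs.
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    n, k = R.shape
    out = np.empty(k)
    for i in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        # remove the linear effect of the other ranked inputs
        bx, *_ = np.linalg.lstsq(others, R[:, i], rcond=None)
        by, *_ = np.linalg.lstsq(others, ry, rcond=None)
        ex, ey = R[:, i] - others @ bx, ry - others @ by
        out[i] = np.corrcoef(ex, ey)[0, 1]
    return out

rng = np.random.default_rng(1)
X = rng.poisson(lam=[3, 5, 2], size=(500, 3)).astype(float)  # toy counts
y = 2 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=500)
print(prcc(X, y))   # the first two inputs should dominate
```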
Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth measurements, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.
Lee, Soon Young; Yang, Hee Jeong; Kim, Gawon; Cheong, Hae-Kwan; Choi, Bo Youl
2016-01-01
This study was performed to investigate the relationship between community residents' infection sensitivity and their levels of preventive behaviors during the 2015 Middle East Respiratory Syndrome (MERS) outbreak in Korea. Seven thousand two hundred eighty-one participants from nine areas in Gyeonggi-do, including Pyeongtaek, the origin of the 2015 outbreak, agreed to participate in the survey, and the data from 6,739 participants were included in the final analysis. The data on the perceived infection sensitivity were subjected to cluster analysis. The levels of stress, reliability/practice of preventive behaviors, hand washing practice and policy credibility during the outbreak period were analyzed for each cluster. Cluster analysis of infection sensitivity due to the MERS outbreak resulted in classification of participants into four groups: the non-sensitive group (14.5%), social concern group (17.4%), neutral group (29.1%), and overall sensitive group (39.0%). A logistic regression analysis found that the overall sensitive group with high sensitivity had higher stress levels (17.80; 95% confidence interval [CI], 13.77 to 23.00), higher reliability on preventive behaviors (5.81; 95% CI, 4.84 to 6.98), higher practice of preventive behaviors (4.53; 95% CI, 3.83 to 5.37) and higher practice of hand washing (2.71; 95% CI, 2.13 to 3.43) during the outbreak period, compared to the non-sensitive group. Infection sensitivity of community residents during the MERS outbreak correlated with gender, age, occupation, and health behaviors. When there is an outbreak in the community, there is a need to maintain a certain level of sensitivity while reducing excessive stress, as well as to promote the practice of preventive behaviors among local residents. In particular, target groups need to be notified and policies need to be established with consideration of the socio-demographic characteristics of the community.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Colonic lesion characterization in inflammatory bowel disease: A systematic review and meta-analysis
Lord, Richard; Burr, Nicholas E; Mohammed, Noor; Subramanian, Venkataraman
2018-01-01
AIM To perform a systematic review and meta-analysis for the diagnostic accuracy of in vivo lesion characterization in colonic inflammatory bowel disease (IBD), using optical imaging techniques, including virtual chromoendoscopy (VCE), dye-based chromoendoscopy (DBC), magnification endoscopy and confocal laser endomicroscopy (CLE). METHODS We searched Medline, Embase and the Cochrane library. We performed a bivariate meta-analysis to calculate the pooled estimates of sensitivity, specificity, positive and negative likelihood ratios (+LHR, -LHR), diagnostic odds ratio (DOR), and area under the SROC curve (AUSROC) for each technology group. A subgroup analysis was performed to investigate differences in real-time non-magnified Kudo pit patterns (with VCE and DBC) and real-time CLE. RESULTS We included 22 studies [1491 patients; 4674 polyps, of which 539 (11.5%) were neoplastic]. Real-time CLE had a pooled sensitivity of 91% (95%CI: 66%-98%), specificity of 97% (95%CI: 94%-98%), and an AUSROC of 0.98 (95%CI: 0.97-0.99). Magnification endoscopy had a pooled sensitivity of 90% (95%CI: 77%-96%) and specificity of 87% (95%CI: 81%-91%). VCE had a pooled sensitivity of 86% (95%CI: 62%-95%) and specificity of 87% (95%CI: 72%-95%). DBC had a pooled sensitivity of 67% (95%CI: 44%-84%) and specificity of 86% (95%CI: 72%-94%). CONCLUSION Real-time CLE is a highly accurate technology for differentiating neoplastic from non-neoplastic lesions in patients with colonic IBD. However, most CLE studies were performed by single expert users within tertiary centres, potentially confounding these results. PMID:29563760
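For orientation, the sketch below pools logit-transformed sensitivities with inverse-variance weights, a deliberately simplified univariate stand-in for the bivariate random-effects model used in the review; all counts are invented.

```python
# Hedged sketch: fixed-effect pooling of per-study sensitivity on the
# logit scale. The bivariate model additionally models specificity and
# the correlation between the two; this sketch ignores both.
import numpy as np

tp = np.array([45, 30, 18, 60])              # true positives (invented)
fn = np.array([5, 3, 4, 6])                  # false negatives (invented)
sens = (tp + 0.5) / (tp + fn + 1.0)          # continuity-corrected
logit = np.log(sens / (1 - sens))
var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)    # approximate logit variance
w = 1.0 / var
pooled = (w * logit).sum() / w.sum()
se = np.sqrt(1.0 / w.sum())
inv = lambda z: 1.0 / (1.0 + np.exp(-z))     # back-transform to [0, 1]
print(f"pooled sensitivity {inv(pooled):.2f} "
      f"(95% CI {inv(pooled - 1.96*se):.2f}-{inv(pooled + 1.96*se):.2f})")
```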
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
NASA Astrophysics Data System (ADS)
Stepanov, E. V.; Milyaev, Valerii A.
2002-11-01
The application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air in biomedical diagnostics is discussed. The principle of operation and the design of a laser analyser for studying the composition of exhaled air are described. The results of detection of gaseous biomarkers in exhaled air, including clinical studies, which demonstrate the diagnostic possibilities of the method, are presented.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Mitochondrial sequence analysis for forensic identification using pyrosequencing technology.
Andréasson, H; Asp, A; Alderborn, A; Gyllensten, U; Allen, M
2002-01-01
Over recent years, requests for mtDNA analysis in the field of forensic medicine have notably increased, and the results of such analyses have proved to be very useful in forensic cases where nuclear DNA analysis cannot be performed. Traditionally, mtDNA has been analyzed by DNA sequencing of the two hypervariable regions, HVI and HVII, in the D-loop. DNA sequence analysis using conventional Sanger sequencing is very robust but time consuming and labor intensive. By contrast, mtDNA analysis based on the pyrosequencing technology provides fast and accurate results from the human mtDNA present in many types of evidence materials in forensic casework. The assay has been developed to determine polymorphic sites in the mitochondrial D-loop as well as the coding region to further increase the discrimination power of mtDNA analysis. The pyrosequencing technology for analysis of mtDNA polymorphisms has been tested with regard to sensitivity, reproducibility, and success rate when applied to control samples and actual casework materials. The results show that the method is very accurate and sensitive; the results are easily interpreted and provide a high success rate on casework samples. The panel of pyrosequencing reactions for the mtDNA polymorphisms was chosen to result in an optimal discrimination power in relation to the number of bases determined.
An Equal Employment Opportunity Sensitivity Workshop
ERIC Educational Resources Information Center
Patten, Thomas H., Jr.; Dorey, Lester E.
1972-01-01
The equal employment opportunity sensitivity workshop seems to be a useful training device for getting an organization started on developing black and white change agents. A report on the establishment of such a workshop at the U.S. Army Tank Automotive Command (TACOM). Includes charts of design, characteristics, analysis of results, program…
Haley, Nicholas J.; Siepker, Chris; Hoon-Hanks, Laura L.; Mitchell, Gordon; Walter, W. David; Manca, Matteo; Monello, Ryan J.; Powers, Jenny G.; Wild, Margaret A.; Hoover, Edward A.; Caughey, Byron; Richt, Jürgen A.; Fenwick, B.W.
2016-01-01
Chronic wasting disease (CWD), a transmissible spongiform encephalopathy of cervids, was first documented nearly 50 years ago in Colorado and Wyoming and has since been detected across North America and the Republic of Korea. The expansion of this disease makes the development of sensitive diagnostic assays and antemortem sampling techniques crucial for the mitigation of its spread; this is especially true in cases of relocation/reintroduction or prevalence studies of large or protected herds, where depopulation may be contraindicated. This study evaluated the sensitivity of the real-time quaking-induced conversion (RT-QuIC) assay of recto-anal mucosa-associated lymphoid tissue (RAMALT) biopsy specimens and nasal brushings collected antemortem. These findings were compared to results of immunohistochemistry (IHC) analysis of ante- and postmortem samples. RAMALT samples were collected from populations of farmed and free-ranging Rocky Mountain elk (Cervus elaphus nelsoni; n = 323), and nasal brush samples were collected from a subpopulation of these animals (n = 205). We hypothesized that the sensitivity of RT-QuIC would be comparable to that of IHC analysis of RAMALT and would correspond to that of IHC analysis of postmortem tissues. We found RAMALT sensitivity (77.3%) to be highly concordant between RT-QuIC and IHC analysis. Sensitivity was lower when testing nasal brushings (34%), though both RAMALT and nasal brush test sensitivities were dependent on both the PRNP genotype and disease progression determined by the obex score. These data suggest that RT-QuIC, like IHC analysis, is a relatively sensitive assay for detection of CWD prions in RAMALT biopsy specimens and, with further investigation, has potential for large-scale and rapid automated testing of antemortem samples for CWD.
USEPA EXAMPLE EXIT LEVEL ANALYSIS RESULTS
Developed by NERL/ERD for the Office of Solid Waste, the enclosed product provides an example uncertainty analysis (UA) and initial process-based sensitivity analysis (SA) of hazardous waste "exit" concentrations for 7 chemicals and metals using the 3MRA Version 1.0 Modeling Syst...
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
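A minimal sketch of the patient-based QC idea just described: predict one analyte from the other panel members by multiple regression, convert each prediction error to a logistic score, and tally the scores with a CUSUM. The feature set, weights, reference value, and decision limit below are assumptions, not the published model.

```python
# Hedged sketch of CUSUM-Logistic Regression style patient-based QC.
# Toy normal data stand in for the Chem-14 panel results.
import numpy as np

rng = np.random.default_rng(2)
panel = rng.normal(size=(2000, 14))                 # training panel results
target = panel[:, 0]
others = np.column_stack([np.ones(2000), panel[:, 1:]])
beta, *_ = np.linalg.lstsq(others, target, rcond=None)

def error_score(measured, predicted, w=(0.0, 2.5)):
    """Logistic score of the prediction error; weights are assumed."""
    z = w[0] + w[1] * abs(measured - predicted)
    return 1.0 / (1.0 + np.exp(-z))

cusum, h, ref = 0.0, 5.0, 0.9                       # limits assumed
for result in rng.normal(size=(50, 14)):            # incoming patient results
    pred = np.concatenate(([1.0], result[1:])) @ beta
    cusum = max(0.0, cusum + error_score(result[0], pred) - ref)
    if cusum > h:
        print("possible systematic error flagged")
        cusum = 0.0
```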
Lalonde, Michel; Wells, R Glenn; Birnie, David; Ruddy, Terrence D; Wassenaar, Richard
2014-07-01
Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome, as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Forty-nine patients (N = 27 ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster size and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal chance ROC AUC = 0.50) with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
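The clustering step can be sketched as follows, with synthetic time-activity curves in place of the SPECT RNA data; the cluster count and the occupancy-based dyssynchrony score are illustrative choices, not the optimized settings reported above.

```python
# Hedged sketch: K-means clustering of left-ventricular time-activity
# curves, with cluster occupancy as a crude dyssynchrony measure.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 32)
# 568 sampled wall segments: synchronous vs. phase-delayed curves
sync = np.cos(t) + rng.normal(scale=0.1, size=(400, 32))
delayed = np.cos(t - 0.8) + rng.normal(scale=0.1, size=(168, 32))
tacs = np.vstack([sync, delayed])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tacs)
counts = np.bincount(labels)
# fraction of segments falling outside the dominant cluster
dyssynchrony = 1.0 - counts.max() / counts.sum()
print(f"dyssynchrony score: {dyssynchrony:.2f}")
```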
Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment
NASA Technical Reports Server (NTRS)
Lee, Meemong; Bowman, Kevin
2014-01-01
Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to the global air quality. The GEO-CAPE OSSE team at Jet propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
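For reference, least-squares shadowing is commonly written as the constrained minimization below (a sketch following the LSS literature, with α weighting a time-dilation term η; the notation is an assumption, not taken from the talk). The non-intrusive variant further restricts the minimization to the unstable subspace.

```latex
% Continuous-time LSS for dynamics du/dt = f(u, s) and long-time average
% <J>; v approximates du/ds along the shadowing trajectory.
\min_{v,\,\eta}\; \frac{1}{2T}\int_0^T \left( \|v\|^2 + \alpha^2 \eta^2 \right) dt
\qquad \text{s.t.} \qquad
\dot{v} = \frac{\partial f}{\partial u}\, v + \frac{\partial f}{\partial s} + \eta f ,

\frac{d\langle J \rangle}{ds} \;\approx\; \frac{1}{T}\int_0^T
\left( \frac{\partial J}{\partial u}\cdot v + \eta\,\bigl(J - \langle J \rangle\bigr) \right) dt .
```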
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
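Variance-based indices of the kind mentioned above can be estimated with a pick-freeze scheme; the sketch below does this for first-order Sobol' indices on the standard Ishigami benchmark, standing in for the entry, descent, and landing simulation (the analytic values are roughly 0.31, 0.44, and 0.00).

```python
# Hedged sketch: pick-freeze estimator of first-order Sobol' indices.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(4)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
yA = ishigami(A)
var = yA.var()
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]            # freeze x_i, resample everything else
    yC = ishigami(Ci)
    Si = (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / var
    print(f"S_{i + 1} ~ {Si:.2f}")
```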
Phase 1 of the automated array assembly task of the low cost silicon solar array project
NASA Technical Reports Server (NTRS)
Pryor, R. A.; Grenon, L. A.; Coleman, M. G.
1978-01-01
The results of a study of process variables and solar cell variables are presented. Interactions between variables and their effects upon control ranges of the variables are identified. The results of a cost analysis for manufacturing solar cells are discussed. The cost analysis includes a sensitivity analysis of a number of cost factors.
Brock Stewart; Chris J. Cieszewski; Michal Zasada
2005-01-01
This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...
UTI diagnosis and antibiogram using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Kastanos, Evdokia; Kyriakides, Alexandros; Hadjigeorgiou, Katerina; Pitris, Constantinos
2009-07-01
Urinary tract infection diagnosis and antibiogram require a 48 hour waiting period using conventional methods. This results in ineffective treatments, increased costs and most importantly in increased resistance to antibiotics. In this work, a novel method for classifying bacteria and determining their sensitivity to an antibiotic using Raman spectroscopy is described. Raman spectra of three species of gram negative Enterobacteria, most commonly responsible for urinary tract infections, were collected. The study included 25 samples each of E.coli, Klebsiella p. and Proteus spp. A novel algorithm based on spectral ratios followed by discriminant analysis resulted in classification with over 94% accuracy. Sensitivity and specificity for the three types of bacteria ranged from 88-100%. For the development of an antibiogram, bacterial samples were treated with the antibiotic ciprofloxacin to which they were all sensitive. Sensitivity to the antibiotic was evident after analysis of the Raman signatures of bacteria treated or not treated with this antibiotic as early as two hours after exposure. This technique can lead to the development of new technology for urinary tract infection diagnosis and antibiogram with same day results, bypassing urine cultures and avoiding all undesirable consequences of current practice.
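A minimal sketch of the ratio-plus-discriminant-analysis idea on synthetic spectra; the band positions, peak heights, and single ratio feature are assumptions, since the paper's actual spectral ratios are not given here.

```python
# Hedged sketch: band-intensity ratios as features, followed by linear
# discriminant analysis, on fake Gaussian-peak "Raman spectra".
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
wn = np.linspace(600, 1800, 300)                    # wavenumber axis

def fake_spectra(peaks, n):
    """Gaussian peaks plus noise standing in for Raman spectra."""
    s = sum(h * np.exp(-((wn - c) / 15.0) ** 2) for c, h in peaks)
    return s + rng.normal(scale=0.05, size=(n, wn.size))

ecoli = fake_spectra([(1004, 1.0), (1450, 0.6)], 25)
kleb = fake_spectra([(1004, 0.7), (1450, 0.9)], 25)
prot = fake_spectra([(1004, 1.2), (1450, 0.4)], 25)
X = np.vstack([ecoli, kleb, prot])
y = np.repeat([0, 1, 2], 25)

def band(spec, center):
    idx = np.abs(wn - center).argmin()
    return spec[:, idx - 2:idx + 3].mean(axis=1)    # small band average

ratios = (band(X, 1004) / band(X, 1450)).reshape(-1, 1)
print(cross_val_score(LinearDiscriminantAnalysis(), ratios, y, cv=5).mean())
```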
Urinary tract infection diagnosis and response to antibiotics using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Kastanos, Evdokia; Kyriakides, Alexandros; Hadjigeorgiou, Katerina; Pitris, Constantinos
2009-02-01
Urinary tract infection diagnosis and antibiogram require a 48 hour waiting period using conventional methods. This results in ineffective treatments, increased costs and most importantly in increased resistance to antibiotics. In this work, a novel method for classifying bacteria and determining their sensitivity to an antibiotic using Raman spectroscopy is described. Raman spectra of three species of gram negative Enterobacteria, most commonly responsible for urinary tract infections, were collected. The study included 25 samples each of E.coli, Klebsiella p. and Proteus spp. A novel algorithm based on spectral ratios followed by discriminant analysis resulted in classification with over 94% accuracy. Sensitivity and specificity for the three types of bacteria ranged from 88-100%. For the development of an antibiogram, bacterial samples were treated with the antibiotic ciprofloxacin to which they were all sensitive. Sensitivity to the antibiotic was evident after analysis of the Raman signatures of bacteria treated or not treated with this antibiotic as early as two hours after exposure. This technique can lead to the development of new technology for urinary tract infection diagnosis and antibiogram with same day results, bypassing urine cultures and avoiding all undesirable consequences of current practice.
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.
Sandstedt, Mikael; Jonsson, Marianne; Asp, Julia; Dellgren, Göran; Lindahl, Anders; Jeppsson, Anders; Sandstedt, Joakim
2015-12-01
Flow cytometry (FCM) has become a well-established method for analysis of both intracellular and cell-surface proteins, while quantitative RT-PCR (RT-qPCR) is used to determine gene expression with high sensitivity and specificity. Combining these two methods would be of great value. The effects of intracellular staining on RNA integrity and RT-qPCR sensitivity and quality have not, however, been fully examined. We, therefore, intended to assess these effects further. Cells from the human lung cancer cell line A549 were fixed, permeabilized and sorted by FCM. Sorted cells were analyzed using RT-qPCR. RNA integrity was determined by RNA quality indicator analysis. A549 cells were then mixed with cells of the mouse cardiomyocyte cell line HL-1. A549 cells were identified by the cell surface marker ABCG2, while HL-1 cells were identified by intracellular cTnT. Cells were sorted and analyzed by RT-qPCR. Finally, cell cultures from human atrial biopsies were used to evaluate the effects of fixation and permeabilization on RT-qPCR analysis of nonimmortalized cells stored prior to analysis by FCM. A large amount of RNA could be extracted even when cells had been fixed and permeabilized. Permeabilization resulted in increased RNA degradation and a moderate decrease in RT-qPCR sensitivity. Gene expression levels were also affected to a moderate extent. Sorted populations from the mixed A549 and HL-1 cell samples showed gene expression patterns that corresponded to FCM data. When samples were stored before FCM sorting, the RT-qPCR analysis could still be performed with high sensitivity and quality. In summary, our results show that intracellular FCM may be performed with only minor impairment of the RT-qPCR sensitivity and quality when analyzing sorted cells; however, these effects should be considered when comparing RT-qPCR data of not fixed samples with those of fixed and permeabilized samples. © 2015 International Society for Advancement of Cytometry.
Sensitivity assessment of freshwater macroinvertebrates to pesticides using biological traits.
Ippolito, A; Todeschini, R; Vighi, M
2012-03-01
Assessing the sensitivity of different species to chemicals is one of the key points in predicting the effects of toxic compounds in the environment. Trait-based prediction methods have proved to be extremely efficient for assessing the sensitivity of macroinvertebrates toward compounds with non-specific toxicity (narcotics). Nevertheless, predicting the sensitivity of organisms toward compounds with specific toxicity is much more complex, since it depends on the mode of action of the chemical. The aim of this work was to predict the sensitivity of several freshwater macroinvertebrates toward three classes of plant protection products: organophosphates, carbamates and pyrethroids. Two databases were built: one with sensitivity data (retrieved, evaluated and selected from the U.S. Environmental Protection Agency ECOTOX database) and the other with biological traits. Aside from the "traditional" traits usually considered in ecological analysis (i.e., body size, respiration technique, feeding habits, etc.), multivariate analysis was used to relate the sensitivity of organisms to other characteristics which may be involved in the process of intoxication. Results confirmed that, besides traditional biological traits related to uptake capability (e.g., body size and body shape), some traits more related to particular metabolic characteristics or patterns have good predictive capacity for the sensitivity to these kinds of toxic substances. For example, behavioral complexity, assumed as an indicator of nervous system complexity, proved to be an important predictor of sensitivity towards these compounds. These results confirm the need for more complex traits to predict effects of highly specific substances. One key point for achieving a complete mechanistic understanding of the process is the choice of traits, whose role in the discrimination of sensitivity should be clearly interpretable, and not only statistically significant.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
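The two process-model statistics named above can be sketched directly from a weighted Jacobian, here assumed to already hold dimensionless scaled sensitivities; the toy matrix deliberately contains one nearly collinear parameter pair so the PCC behavior is visible.

```python
# Hedged sketch: composite scaled sensitivities (CSS) and parameter
# correlation coefficients (PCC) from a matrix X of scaled sensitivities
# (observations x parameters). The toy Jacobian is invented; in practice
# it comes from the calibrated model.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 4))
X[:, 3] = -0.95 * X[:, 2] + 0.05 * rng.normal(size=40)  # collinear pair

css = np.sqrt((X ** 2).mean(axis=0))      # one sensitivity per parameter
cov = np.linalg.inv(X.T @ X)              # ~ parameter covariance matrix
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)                # parameter correlation matrix

print("CSS:", np.round(css, 2))
print("PCC(3,4):", round(pcc[2, 3], 2))   # magnitude near 1 when correlated
```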
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.
Kanchanaketu, T; Sangduen, N; Toojinda, T; Hongtrakul, V
2012-04-13
Genetic analysis of 56 samples of Jatropha curcas L. collected from Thailand and other countries was performed using the methylation-sensitive amplification polymorphism (MSAP) technique. Nine primer combinations were used to generate MSAP fingerprints. When the data were interpreted as amplified fragment length polymorphism (AFLP) markers, 471 markers were scored. All 56 samples were classified into three major groups: γ-irradiated, non-toxic and toxic accessions. Genetic similarity among the samples was extremely high, ranging from 0.95 to 1.00, which indicated very low genetic diversity in this species. The MSAP fingerprint was further analyzed for DNA methylation polymorphisms. The results revealed differences in the DNA methylation level among the samples. However, the samples collected from saline areas and some species hybrids showed specific DNA methylation patterns. AFLP data were used, together with methylation-sensitive AFLP (MS-AFLP) data, to construct a phylogenetic tree, resulting in higher efficiency to distinguish the samples. This combined analysis separated samples previously grouped in the AFLP analysis. This analysis also distinguished some hybrids. Principal component analysis was also performed; the results confirmed the separation in the phylogenetic tree. Some polymorphic bands, involving both nucleotide and DNA methylation polymorphism, that differed between toxic and non-toxic samples were identified, cloned and sequenced. BLAST analysis of these fragments revealed differences in DNA methylation in some known genes and nucleotide polymorphism in chloroplast DNA. We conclude that MSAP is a powerful technique for the study of genetic diversity for organisms that have a narrow genetic base.
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.
2007-01-01
multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner
Woolgar, Alexandra; Golland, Polina; Bode, Stefan
2014-09-01
Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. ("Confounds in multivariate pattern analysis: Theory and rule representation case study", 2013, NeuroImage 77: 157-165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses where confounding activation differences in opposite directions tend to approximately average out at group level, group level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to remove the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of and sensitivity for confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at a price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. We conclude that the additional complexity of the experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach. Copyright © 2014 Elsevier Inc. All rights reserved.
PESTAN: Pesticide Analytical Model Version 4.0 User's Guide
The principal objective of this User's Guide is to provide essential information on aspects such as model conceptualization, model theory, assumptions and limitations, determination of input parameters, analysis of results, and sensitivity analysis.
Characterization of emission microscopy and liquid crystal thermography in IC fault localization
NASA Astrophysics Data System (ADS)
Lau, C. K.; Sim, K. S.
2013-05-01
This paper characterizes two fault localization techniques - Emission Microscopy (EMMI) and Liquid Crystal Thermography (LCT) - using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. Many analysts assume that both techniques are equivalent and that both detect hot spots in chips failing with shorts or leakage. As a result, analysts tend to use only LCT, since this technique involves a much simpler test setup than EMMI. The omission of EMMI as the alternative technique in fault localization often leads to incomplete analysis when LCT fails to localize any hot spot on a failing chip. This research was therefore established to characterize and compare both techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method, the backside LCT technique, was also proposed as an alternative. The research observed that both techniques successfully detected the defect locations resulting from the leakage failures. LCT was observed to be more sensitive than EMMI in the frontside analysis approach. On the other hand, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect locations and EMMI was more sensitive in detecting non-ESD defect locations. Backside LCT was proven to work as effectively as frontside LCT and is ready to serve as an alternative to backside EMMI. The research confirmed that LCT detects heat generation and EMMI detects photon emission (recombination radiation). The analysis results also suggested that the two techniques complement each other in IC fault localization. It is necessary for a failure analyst to use both techniques when one of them produces no result.
Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R
2017-11-17
This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual-manual task) and distracted (poststarting of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from the time and frequency domain analyses all showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis on yaw rate in both low and high frequency bandwidths showed consistent results, that higher variation values were observed during distracted driving when compared to nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the prestarting periods, instead of only focusing on periods with physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator of distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method in assessing driving performance compared to time domain analyses.
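A minimal sketch of the frequency-domain measure used above: relative spectral power of a yaw-rate trace in the 0-0.5 Hz band. The sampling rate, window length, and synthetic signal are assumptions.

```python
# Hedged sketch: FFT-based relative spectral power in a low-frequency band.
import numpy as np

fs = 10.0                                    # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)                 # one 60-s window
rng = np.random.default_rng(7)
yaw = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(yaw - yaw.mean())) ** 2
rel_low = power[freqs <= 0.5].sum() / power.sum()
print(f"relative power below 0.5 Hz: {rel_low:.2f}")
```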
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.
Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian
2017-01-28
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO 2 (110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
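For orientation, the degree of rate control itself can be illustrated by central finite differences on a deterministic toy model, as below; the paper's three-stage estimators exist precisely because such differences become noisy and expensive for stochastic lattice models. The mean-field CO-oxidation scheme and all rate constants here are invented.

```python
# Hedged sketch: degree of rate control X_RC,i = d(ln TOF)/d(ln k_i),
# estimated by central differences on a toy mean-field model.
import numpy as np
from scipy.optimize import fsolve

def tof(k):
    k_co, k_o2, k_rx, k_des = k
    def rhs(th):
        th_co, th_o = th
        free = 1.0 - th_co - th_o               # empty-site fraction
        r = k_rx * th_co * th_o                  # CO2 formation rate
        return [k_co * free - k_des * th_co - r,  # CO coverage balance
                2.0 * k_o2 * free ** 2 - r]       # O coverage balance
    th_co, th_o = fsolve(rhs, [0.5, 0.1])        # steady-state coverages
    return k_rx * th_co * th_o

k0 = np.array([1.0, 0.5, 10.0, 0.2])             # invented rate constants
names = ["CO adsorption", "O2 adsorption", "surface reaction", "CO desorption"]
for i, name in enumerate(names):
    dk = np.zeros_like(k0)
    dk[i] = 0.01 * k0[i]
    drc = ((np.log(tof(k0 + dk)) - np.log(tof(k0 - dk)))
           / (np.log(k0[i] + dk[i]) - np.log(k0[i] - dk[i])))
    print(f"X_RC {name}: {drc:+.3f}")
```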
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-01
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
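For concreteness, the quantity being estimated can be sketched in a few lines. Below is a minimal Python illustration (not the authors' implementation) of the naive central finite-difference estimate of the degree of rate control, X_s = (k_s/TOF) dTOF/dk_s, which the three-stage approach is designed to outperform; the function tof is a hypothetical stand-in for one kMC evaluation of the turnover frequency.

    import numpy as np

    def degree_of_rate_control(tof, k, rel_step=0.05):
        """Central-difference X_s = (k_s / TOF) * dTOF/dk_s per rate constant.

        tof: callable mapping a rate-constant vector to a turnover frequency.
        For stochastic tof estimates, correlated (coupled) trajectories, as in
        the third stage above, are needed so the difference is not buried in noise.
        """
        base = tof(k)
        drc = np.zeros(len(k))
        for s in range(len(k)):
            k_up, k_dn = k.copy(), k.copy()
            k_up[s] *= 1.0 + rel_step
            k_dn[s] *= 1.0 - rel_step
            # dk_s = 2 * rel_step * k_s, so k_s cancels out of the prefactor
            drc[s] = (tof(k_up) - tof(k_dn)) / (2.0 * rel_step * base)
        return drc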
Cha, Eunju; Kim, Sohee; Kim, Ho Jun; Lee, Kang Mi; Kim, Ki Hun; Kwon, Oh-Seung; Lee, Jaeick
2015-01-01
This study compared the sensitivity of various separation and ionization methods, including gas chromatography with an electron ionization source (GC-EI), liquid chromatography with an electrospray ionization source (LC-ESI), and liquid chromatography with a silver ion coordination ion spray source (LC-Ag(+) CIS), coupled to a mass spectrometer (MS) for steroid analysis. Chromatographic conditions, mass spectrometric transitions, and ion source parameters were optimized. The majority of steroids in GC-EI/MS/MS and LC-Ag(+) CIS/MS/MS analysis showed higher sensitivities than those obtained with the other analytical methods. The limits of detection (LODs) of 65 steroids by GC-EI/MS/MS, 68 steroids by LC-Ag(+) CIS/MS/MS, 56 steroids by GC-EI/MS, 54 steroids by LC-ESI/MS/MS, and 27 steroids by GC-ESI/MS/MS were below the cut-off value of 2.0 ng/mL. The LODs of steroids that formed protonated ions in LC-ESI/MS/MS analysis were all lower than the cut-off value. Several steroids, such as those with an unconjugated C3-hydroxyl and a C17-hydroxyl structure, showed higher sensitivities in GC-EI/MS/MS analysis relative to those obtained using the LC-based methods. Steroids containing 4,9,11-triene structures showed relatively poor sensitivities in GC-EI/MS and GC-ESI/MS/MS analysis. The results of this study provide information that may be useful for selecting suitable analytical methods for confirmatory analysis of steroids. Copyright © 2015 John Wiley & Sons, Ltd.
Improving the analysis of slug tests
McElwee, C.D.
2002-01-01
This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wigley, T L; Ammann, C M; Santer, B D
2005-04-22
Douglass and Knox [2005], hereafter referred to as DK, present an analysis of the observed cooling following the 1991 Mt. Pinatubo eruption and claim that these data imply a very low value for the climate sensitivity (equivalent to 0.6 °C equilibrium warming for a CO2 doubling). We show here that their analysis is flawed and their results are incorrect.
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs, and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as the sensitivity of a state to any of its parameters; cross-sensitivity, as the sensitivity of a state to another state's parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters was analyzed: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearances, and the permeability-surface area product of the brain. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs sharing the same common structure of the whole-body PBPK model and having similar model parameters. The results also show that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
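The second (traditional) definition above can be written as the dimensionless function S_ij(t) = (p_j / C_i(t)) dC_i(t)/dp_j. A minimal sketch of estimating it by central differences, assuming a hypothetical simulate(params) PBPK solver that returns tissue concentration time courses:

    import numpy as np

    def normalized_sensitivity(simulate, params, j, rel_step=0.01):
        """Dimensionless S_ij(t) for parameter j, by central differences."""
        p_up, p_dn = params.copy(), params.copy()
        p_up[j] *= 1.0 + rel_step
        p_dn[j] *= 1.0 - rel_step
        c_base = simulate(params)            # shape: (n_tissues, n_times)
        dc = (simulate(p_up) - simulate(p_dn)) / (2.0 * rel_step * params[j])
        return dc * params[j] / c_base       # relative change per relative change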
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Algorithm for automatic analysis of electro-oculographic data.
Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti
2013-10-25
Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and with manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The durations and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG measurements and as a standalone measurement.
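As an illustration only (the published algorithm estimates its thresholds automatically from signal features), amplitude-threshold event onset detection on a differentiated EOG trace can be sketched as:

    import numpy as np

    def detect_events(eog, fs, threshold):
        """Return sample indices where |eye velocity| first exceeds threshold.

        eog: 1-D signal (e.g. degrees); fs: sampling rate in Hz;
        threshold: velocity threshold in signal units per second.
        """
        velocity = np.gradient(eog) * fs
        above = np.abs(velocity) > threshold
        # rising edges of the boolean mask mark event onsets
        onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        return onsets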
Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups
NASA Astrophysics Data System (ADS)
Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
Sulcal depth is an important marker of brain anatomy in neuroscience and neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inferences of ROI properties from a sulcal region-focused perspective, consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs the overall sulcal depth measurements, which may result in reduced sensitivity for detecting sulcal depth changes in neurological and psychiatric disorders. To overcome such a blurring effect, we focus on sulcal fundic regions in each ROI by filtering out the gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we focused on a cortical morphological analysis of sulcal depth reduction in schizophrenia, with a comparison to a normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).
Wang, W; Zhang, M; Chen, H D; Cai, X X; Xu, M L; Lei, K Y; Niu, J H; Deng, L; Liu, J; Ge, Z J; Yu, S X; Wang, B H
2016-10-06
In this study, a methylation-sensitive amplification polymorphism analysis system was used to analyze DNA methylation levels in three cotton accessions. Two disease-sensitive near-isogenic lines, PD94042 and IL41, and one disease-resistant Gossypium mustelinum accession were exposed to Verticillium wilt to investigate molecular disease resistance mechanisms in cotton. We observed multiple different DNA methylation types across the three accessions following Verticillium wilt exposure. These included hypomethylation, hypermethylation, and other patterns. In general, the global DNA methylation level was significantly increased in the disease-resistant accession G. mustelinum following disease exposure. In contrast, there was no significant difference in the disease-sensitive accession PD94042, and a significant decrease was observed in IL41. Our results suggest that disease-resistant cotton might employ a mechanism to increase its methylation level in response to disease stress. The differing methylation patterns, together with the increase in global DNA methylation level, might play important roles in tolerance to Verticillium wilt in cotton. Through cloning and analysis of differentially methylated DNA sequences, we were also able to identify several genes that may contribute to disease resistance in cotton. Our results reveal the effect of DNA methylation on cotton disease resistance and identify genes that may play important roles, which may shed light on future molecular breeding for disease resistance in cotton.
The Einstein Observatory Extended Medium-Sensitivity Survey. I - X-ray data and analysis
NASA Technical Reports Server (NTRS)
Gioia, I. M.; Maccacaro, T.; Schild, R. E.; Wolter, A.; Stocke, J. T.
1990-01-01
This paper presents the results of the analysis of the X-ray data and the optical identification for the Einstein Observatory Extended Medium-Sensitivity Survey (EMSS). The survey consists of 835 serendipitous sources detected at or above 4 times the rms level in 1435 imaging proportional counter fields with centers located away from the Galactic plane. Their limiting sensitivities are approximately (5-300) × 10^-14 ergs cm^-2 s^-1 in the 0.3-3.5-keV energy band. A total area of 778 square deg of the high-Galactic-latitude sky has been covered. The data have been analyzed using the REV1 processing system, which takes into account the nonuniformities of the detector. The resulting EMSS catalog of X-ray sources is a flux-limited and homogeneous sample of astronomical objects that can be used for statistical studies.
NASA Technical Reports Server (NTRS)
Lockwood, H. E.
1975-01-01
A color film with a sensitivity and color balance equal to SO-368, Kodak MS Ektachrome (Estar thin base) was required for use on the Apollo-Soyuz test project (ASTP). A Wratten 2A filter was required for use with the film to reduce short wavelength effects which frequently produce a blue color balance in aerial photographs. The background regarding a special emulsion which was produced with a 2A filter equivalent as an integral part of an SO-368 film manufactured by Eastman Kodak, the cost for production of the special film, and the results of a series of tests made within PTD to certify the film for ASTP use are documented. The tests conducted and documented were physical inspection, process compatibility, effective sensitivity, color balance, cross section analysis, resolution, spectral sensitivity, consistency of results, and picture sample analysis.
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The objects of this research are the component base of control and automation devices, including annular elastic sensitive elements, together with methods for their modeling, calculation algorithms, and software packages that automate their design. The article describes the development of a computer-aided design system for elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, together with the results of static and dynamic analyses, the elastic elements are calculated using modern numerical-simulation software. In the simulation, the model was discretized into a hexahedral finite-element mesh with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.
AKAP150-mediated TRPV1 sensitization is disrupted by calcium/calmodulin
2011-01-01
Background: The transient receptor potential vanilloid type 1 (TRPV1) is expressed in nociceptive sensory neurons and is sensitive to phosphorylation. A-Kinase Anchoring Protein 79/150 (AKAP150) mediates phosphorylation of TRPV1 by Protein Kinases A and C, modulating channel activity. However, few studies have focused on the regulatory mechanisms that control AKAP150 association with TRPV1. In the present study, we identify a role for calcium/calmodulin in controlling AKAP150 association with, and sensitization of, TRPV1. Results: In trigeminal neurons, intracellular accumulation of calcium reduced AKAP150 association with TRPV1 in a manner sensitive to calmodulin antagonism. This was also observed in transfected Chinese hamster ovary (CHO) cells, providing a model for conducting molecular analysis of the association. In CHO cells, the deletion of the C-terminal calmodulin-binding site of TRPV1 resulted in greater association with AKAP150 and increased channel activity. Furthermore, the co-expression of wild-type calmodulin in CHO cells significantly reduced TRPV1 association with AKAP150, as evidenced by total internal reflection fluorescence-fluorescence resonance energy transfer (TIRF-FRET) analysis and electrophysiology. Finally, dominant-negative calmodulin co-expression increased TRPV1 association with AKAP150 and increased basal and PKA-sensitized channel activity. Conclusions: The results from these studies indicate that calcium/calmodulin interferes with the association of AKAP150 with TRPV1, potentially extending resensitization of the channel. PMID:21569553
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
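A schematic of the Monte Carlo loop described above, with a hypothetical wrapper standing in for the NETPATH scripting: each uncertain constraint gets a distribution scaled by a user-specified coefficient of variation, the model is run per sample, and the ensemble of travel times is summarized.

    import numpy as np

    def monte_carlo_travel_times(run_model, means, cv=0.10, n_sim=1000, seed=42):
        """run_model: hypothetical wrapper mapping sampled concentrations (mg/L)
        to one carbon-14 travel time; means: per-ion mean concentrations."""
        rng = np.random.default_rng(seed)
        times = []
        for _ in range(n_sim):
            sample = {ion: rng.normal(m, cv * m) for ion, m in means.items()}
            times.append(run_model(sample))
        return np.percentile(times, [5, 50, 95])

    # e.g. monte_carlo_travel_times(run_netpath, {"Ca": 45.0, "HCO3": 210.0}),
    # where run_netpath is the (hypothetical) scripted NETPATH call.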
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of a parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; and (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
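The Dice and Jaccard metrics used above to score segmentation quality are straightforward to compute for binary masks; a short reference sketch:

    import numpy as np

    def dice_jaccard(pred, truth):
        """Overlap metrics for two binary masks (assumed non-empty)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        dice = 2.0 * inter / (pred.sum() + truth.sum())
        jaccard = inter / union
        return dice, jaccard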
Compliance and stress sensitivity of spur gear teeth
NASA Technical Reports Server (NTRS)
Cornell, R. W.
1983-01-01
The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time history, interactive, closed form solution for the dynamic tooth loads for both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity for three involute tooth forms as a function of load position. The compliance analysis incorporates an improved fillet/foundation model. The stress sensitivity analysis is a modified version of the Heywood method, with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation using test, finite element, and analytic transformation results, which showed good agreement.
Rex, J H; Hanson, L H; Amantea, M A; Stevens, D A; Bennett, J E
1991-01-01
An improved bioassay for fluconazole was developed. This assay is sensitive in the clinically relevant range (2 to 40 micrograms/ml) and analyzes plasma, serum, and cerebrospinal fluid specimens; bioassay results correlate with results obtained by high-pressure liquid chromatography (HPLC). Bioassay and HPLC analyses of spiked plasma, serum, and cerebrospinal fluid samples (run as unknowns) gave good agreement with expected values. Analysis of specimens from patients gave equivalent results by both HPLC and bioassay. HPLC had a lower within-run coefficient of variation (less than 2.5% for HPLC versus less than 11% for bioassay) and a lower between-run coefficient of variation (less than 5% versus less than 12% for bioassay) and was more sensitive (lower limit of detection, 0.1 micrograms/ml [versus 2 micrograms/ml for bioassay]). The bioassay is, however, sufficiently accurate and sensitive for clinical specimens, and its relative simplicity, low sample volume requirement, and low equipment cost should make it the technique of choice for analysis of routine clinical specimens. PMID:1854166
Worakhunpiset, S; Tharnpoophasiam, P
2009-07-01
Although a multiplex PCR amplification condition for simultaneous detection of total coliform bacteria, Escherichia coli and Clostridium perfringens in water samples has been developed, high sensitivity is obtained when amplifying purified DNA, but sensitivity is low when the assay is applied to spiked water samples. An enrichment broth culture prior to PCR analysis increases the sensitivity of the test, but the specific nature of the enrichment broth can affect the PCR results. Three enrichment broths, lactose broth, reinforced clostridial medium and fluid thioglycollate broth, were compared for their influence on the sensitivity of, and time required for, the multiplex PCR assay. Fluid thioglycollate broth was the most effective, with the shortest enrichment time and lowest detection limit.
Chepurnov, A A; Dadaeva, A A; Kolesnikov, S I
2001-12-01
Pathophysiological parameters were compared in animals of differing sensitivity to Ebola virus after infection with the virus. Analysis of the results showed differences in the immune reactions underlying the distinction between Ebola-sensitive and Ebola-resistant animals. No neutrophil activation in response to Ebola virus injection was noted in Ebola-sensitive animals. Phagocytic activity of neutrophils in these animals inversely correlated with animal sensitivity to Ebola virus. Animal susceptibility to Ebola virus directly correlated with the decrease in the number of circulating T and B cells. We conclude that the immune system plays a key role in animal susceptibility and resistance to Ebola virus.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At best, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model-independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is necessary, which enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
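For contrast with MVA, the conventional bootstrap check mentioned above can be sketched as follows; index_fn is any estimator mapping an ensemble of parameter samples X and model outputs y to one sensitivity index per parameter, and the cost is n_boot re-estimations of the indexes.

    import numpy as np

    def bootstrap_index_ci(index_fn, X, y, n_boot=500, seed=0):
        """95% bootstrap confidence interval for each sensitivity index."""
        rng = np.random.default_rng(seed)
        n = len(y)
        reps = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)   # resample model runs with replacement
            reps.append(index_fn(X[idx], y[idx]))
        reps = np.asarray(reps)
        return np.percentile(reps, [2.5, 97.5], axis=0)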
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Winkle, W.; Christensen, S.W.; Kauffman, G.
1976-12-01
The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO, and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.
The Effects of the Japan Bridge Project on Third Graders' Cultural Sensitivity
ERIC Educational Resources Information Center
Meyer, Lindsay; Sherman, Lilian; MaKinster, James
2006-01-01
This study examines the effects of the Japan BRIDGE Project, a global education program, on its third grade participants. Characterization of lessons and analysis of student interviews were used to investigate the nature of the curriculum and whether or not student participants were more culturally sensitive due to participation. Results indicate…
Tosi, L L; Detsky, A S; Roye, D P; Morden, M L
1987-01-01
Using a decision analysis model, we estimated the savings that might be derived from a mass prenatal screening program aimed at detecting open neural tube defects (NTDs) in low-risk pregnancies. Our baseline analysis showed that screening v. no screening could be expected to save approximately $8 per pregnancy, given a cost of $7.50 for the maternal serum alpha-fetoprotein (MSAFP) test and a cost of $42,507 for hospital and rehabilitation services for the first 10 years of life for a child with spina bifida. When a more liberal estimate of the costs of caring for such a child was used, the savings with the screening program were more substantial. We performed extensive sensitivity analyses, which showed that the savings were somewhat sensitive to the cost of the MSAFP test and highly sensitive to the specificity (but not the sensitivity) of the test. A screening program for NTDs in low-risk pregnancies may result in substantial savings in direct health care costs if the screening protocol is followed rigorously and efficiently. PMID:2433011
NASA Astrophysics Data System (ADS)
Elshafei, Y.; Tonts, M.; Sivapalan, M.; Hipsey, M. R.
2016-06-01
It is increasingly acknowledged that effective management of water resources requires a holistic understanding of the coevolving dynamics inherent in the coupled human-hydrology system. One of the fundamental information gaps concerns the sensitivity of coupled system feedbacks to various endogenous system properties and exogenous societal contexts. This paper takes a previously calibrated sociohydrology model and applies an idealized implementation, in order to: (i) explore the sensitivity of emergent dynamics resulting from bidirectional feedbacks to assumptions regarding (a) internal system properties that control the internal dynamics of the coupled system and (b) the external sociopolitical context; and (ii) interpret the results within the context of water resource management decision making. The analysis investigates feedback behavior in three ways, (a) via a global sensitivity analysis on key parameters and assessment of relevant model outputs, (b) through a comparative analysis based on hypothetical placement of the catchment along various points on the international sociopolitical gradient, and (c) by assessing the effects of various direct management intervention scenarios. Results indicate the presence of optimum windows that might offer the greatest positive impact per unit of management effort. Results further advocate management tools that encourage an adaptive learning, community-based approach with respect to water management, which are found to enhance centralized policy measures. This paper demonstrates that it is possible to use a place-based sociohydrology model to make abstractions as to the dynamics of bidirectional feedback behavior, and provide insights as to the efficacy of water management tools under different circumstances.
Novel and Practical Scoring Systems for the Diagnosis of Thyroid Nodules
Wei, Ying; Zhou, Xinrong; Liu, Siyue; Wang, Hong; Liu, Limin; Liu, Renze; Kang, Jinsong; Hong, Kai; Wang, Daowen; Yuan, Gang
2016-01-01
Objective: The clinical management of patients with thyroid nodules that are biopsied by fine-needle aspiration cytology and yield indeterminate results remains unsettled. The BRAF V600E mutation has dubious diagnostic value due to its low sensitivity. Novel strategies are urgently needed to distinguish thyroid malignancies from benign thyroid nodules. Design: This prospective study included 504 thyroid nodules diagnosed by ultrasonography from 468 patients, and fine-needle aspiration cytology was performed under ultrasound guidance. Cytology and molecular analysis, including BRAF V600E, RET/PTC1 and RET/PTC3, were conducted simultaneously. The cytology, ultrasonography results, and mutational status were gathered and analyzed together. Predictive scoring systems were designed using a combination of diagnostic parameters for ultrasonography, cytology and genetic analysis. The utility of the scoring systems was analyzed and compared to detection using the individual methods alone or combined. Result: The sensitivity of scoring system A (ultrasonography, cytology, BRAF V600E, RET/PTC) was nearly identical to that of scoring system B (ultrasonography, cytology, BRAF V600E); these were 91.0% and 90.2%, respectively. These sensitivities were significantly higher than those obtained using FNAC, genetic analysis and US alone or combined; their sensitivities were 63.9%, 70.7% and 87.2%, respectively. Scoring system C (ultrasonography, cytology) was slightly inferior to the former two scoring systems but still had relatively high sensitivity and specificity (80.5% and 95.1%, respectively), which were significantly superior to those of cytology, ultrasonography or genetic analysis alone. In nodules with uncertain cytology, scoring systems A, B and C could elevate the malignancy detection rates to 69.7%, 69.7% and 63.6%, respectively. Conclusion: These three scoring systems are quick for clinicians to master and can provide quantified information to predict the probability of malignant nodules. Scoring system B is recommended for improving the detection rate among nodules of uncertain cytology. PMID:27654865
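The headline figures follow the standard confusion-matrix definitions; as a reminder (the counts below are placeholders, not the study's data):

    def sens_spec(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    print(sens_spec(tp=121, fn=12, tn=350, fp=18))  # placeholder counts only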
Gay, Charles W; Alappattu, Meryl J; Coronado, Rogelio A; Horn, Maggie E; Bishop, Mark D
2013-01-01
Background: Muscle-biased therapies (MBT) are commonly used to treat pain, yet several reviews suggest evidence for the clinical effectiveness of these therapies is lacking. Inadequate treatment parameters have been suggested to account for inconsistent effects across studies. Pain sensitivity may serve as an intermediate physiologic endpoint helping to establish optimal MBT treatment parameters. The purpose of this review was to summarize the current literature investigating the short-term effect of a single dose of MBT on pain sensitivity in both healthy and clinical populations, with particular attention to the specific MBT parameters of intensity and duration. Methods: A systematic search for articles meeting our prespecified criteria was conducted using Cumulative Index to Nursing and Allied Health Literature (CINAHL) and MEDLINE from the inception of each database until July 2012, in accordance with guidelines from the Preferred Reporting Items for Systematic reviews and Meta-Analysis. Relevant characteristics from studies included type, intensity, and duration of MBT and whether short-term changes in pain sensitivity and clinical pain were noted with MBT application. Study results were pooled using a random-effects model to estimate the overall effect size of a single dose of MBT on pain sensitivity as well as the effect of MBT, dependent on comparison group and population type. Results: Reports from 24 randomized controlled trials (23 articles) were included, representing 36 MBT treatment arms and 29 comparative groups, where 10 groups received active agents, 11 received sham/inert treatments, and eight received no treatment. MBT demonstrated a favorable and consistent ability to modulate pain sensitivity. Short-term modulation of pain sensitivity was associated with short-term beneficial effects on clinical pain. Intensity of MBT, but not duration, was linked with change in pain sensitivity. A meta-analysis was conducted on 17 studies that assessed the effect of MBT on pressure pain thresholds. The results suggest that MBT had a favorable effect on pressure pain thresholds when compared with no-treatment and sham/inert groups, and effects comparable with those of other active treatments. Conclusion: The evidence supports the use of pain sensitivity measures by future research to help elucidate optimal therapeutic parameters for MBT as an intermediate physiologic marker. PMID:23403507
The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.
Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing
2017-07-01
This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-disc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine if publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess the diagnostic efficacy of the above modalities in cases in which a high degree of between-study heterogeneity was noted with respect to their diagnostic efficacies. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively, and sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy with respect to detecting early and invasive lung cancer and that the results of the study were stable. Narrow-band imaging was superior to white-light bronchoscopy with respect to detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ significantly.
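A minimal sketch of one common way such pooled proportions with confidence intervals are produced, random-effects (DerSimonian-Laird) pooling of per-study sensitivities on the logit scale; this is illustrative and not necessarily the exact routine Meta-disc applies.

    import numpy as np

    def pool_random_effects(events, totals):
        """events: true positives per study; totals: diseased patients per study."""
        p = events / totals
        theta = np.log(p / (1 - p))                   # logit per study
        var = 1.0 / events + 1.0 / (totals - events)  # approximate logit variance
        w = 1.0 / var
        theta_fixed = np.sum(w * theta) / np.sum(w)
        q = np.sum(w * (theta - theta_fixed) ** 2)
        tau2 = max(0.0, (q - (len(p) - 1)) /
                   (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (var + tau2)                     # random-effects weights
        pooled = np.sum(w_re * theta) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        inv = lambda x: 1.0 / (1.0 + np.exp(-x))      # back to a proportion
        return inv(pooled), (inv(pooled - 1.96 * se), inv(pooled + 1.96 * se))

    # e.g. pool_random_effects(np.array([50, 80, 45]), np.array([58, 90, 52]))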
Convergence Estimates for Multidisciplinary Analysis and Optimization
NASA Technical Reports Server (NTRS)
Arian, Eyal
1997-01-01
A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems both multidisciplinary and single discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system, thus it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with the measure for the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstration of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
Kocer, Derya; Sarıguzel, Fatma M; Karakukcu, Cıgdem
2014-08-01
The microscopic analysis of urine is essential for the diagnosis of patients with urinary tract infections. Quantitative urine culture is the 'gold standard' method for definitive diagnosis of urinary tract infections, but it is labor-intensive, time consuming, and does not provide same-day results. The aim of this study was to evaluate the analytical and diagnostic performance of the FUS200 (Changchun Dirui Industry, China), a new urine sedimentation analyzer, in comparison with urine culture as the reference method. We evaluated 1000 urine samples submitted for culture and urine analysis with a preliminary diagnosis of urinary tract infection. Cut-off values for the FUS200 were determined by comparing the results with urine cultures. The cut-off values determined by the receiver operating characteristic (ROC) curve technique, sensitivity, and specificity were calculated for bacteria and white blood cells (WBCs). Among the 1000 urine specimens submitted for culture, 637 cultures (63.7%) were negative and 363 (36.3%) were positive. The best cut-off values obtained from ROC analysis were 16/μL for bacteriuria (sensitivity: 82.3%, specificity: 58%) and 34/μL for WBCs (sensitivity: 72.3%, specificity: 65.2%). The areas under the curve (AUC) for the bacteria and WBC counts were 0.71 (95% CI: 0.67-0.74) and 0.72 (95% CI: 0.69-0.76), respectively. The most important requirement of a rapid diagnostic screening test is sensitivity, and in this respect the bacterial recognition and quantification performed by the FUS200 analyzer showed unsatisfactory sensitivity. After further technical improvements in particle recognition and laboratory personnel training, the FUS200 might show better results.
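Cut-off selection from an ROC curve is often done by maximizing the Youden index (sensitivity + specificity - 1); a sketch under that assumption, with scores as analyzer particle counts and labels as binary culture results:

    import numpy as np

    def youden_cutoff(scores, labels):
        """Return the cut-off maximizing sensitivity + specificity - 1."""
        best_cut, best_j = None, -1.0
        for c in np.unique(scores):
            pred = scores >= c                   # positive if count >= cut-off
            sens = np.mean(pred[labels == 1])
            spec = np.mean(~pred[labels == 0])
            j = sens + spec - 1.0
            if j > best_j:
                best_cut, best_j = c, j
        return best_cut, best_j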
Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y
2014-06-01
Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection of tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study aims to compare temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10th-percentile hazardous concentration (HC10) is derived from each SSD, and then a temperate-to-tropic HC10 ratio is computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivity towards all test chemicals except cadmium, although such differences are small, with the HC10 ratios ranging from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol) only. Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe that there is a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
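A sketch of the HC10 derivation under a log-normal SSD assumption (the toxicity values below are illustrative, not the study's data); the temperate-to-tropic ratio is then simply the quotient of two such HC10s.

    import numpy as np
    from scipy import stats

    # one acute toxicity value (e.g. LC50, ug/L) per species
    lc50 = np.array([12.0, 35.0, 4.8, 90.0, 22.0, 7.5, 150.0, 41.0])
    log_vals = np.log10(lc50)
    mu, sigma = log_vals.mean(), log_vals.std(ddof=1)
    # HC10 = concentration hazardous to 10% of species under the fitted SSD
    hc10 = 10 ** stats.norm.ppf(0.10, loc=mu, scale=sigma)
    print(f"HC10 = {hc10:.2f} ug/L")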
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
Weissberger, Gali H.; Strong, Jessica V.; Stefanidis, Kayla B.; Summers, Mathew J.; Bondi, Mark W.; Stricker, Nikki H.
2018-01-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer’s dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer’s disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers. PMID:28940127
Clinical Evaluation of a Loop-Mediated Amplification Kit for Diagnosis of Imported Malaria
Polley, Spencer D.; González, Iveth J.; Mohamed, Deqa; Daly, Rosemarie; Bowers, Kathy; Watson, Julie; Mewse, Emma; Armstrong, Margaret; Gray, Christen; Perkins, Mark D.; Bell, David; Kanda, Hidetoshi; Tomita, Norihiro; Kubota, Yutaka; Mori, Yasuyoshi; Chiodini, Peter L.; Sutherland, Colin J.
2013-01-01
Background. Diagnosis of malaria relies on parasite detection by microscopy or antigen detection; both fail to detect low-density infections. New tests providing rapid, sensitive diagnosis with minimal need for training would enhance both malaria diagnosis and malaria control activities. We determined the diagnostic accuracy of a new loop-mediated amplification (LAMP) kit in febrile returned travelers. Methods. The kit was evaluated in sequential blood samples from returned travelers sent for pathogen testing to a specialist parasitology laboratory. Microscopy was performed, and then malaria LAMP was performed using Plasmodium genus and Plasmodium falciparum–specific tests in parallel. Nested polymerase chain reaction (PCR) was performed on all samples as the reference standard. Primary outcome measures for diagnostic accuracy were sensitivity and specificity of LAMP results, compared with those of nested PCR. Results. A total of 705 samples were tested in the primary analysis. Sensitivity and specificity were 98.4% and 98.1%, respectively, for the LAMP P. falciparum primers and 97.0% and 99.2%, respectively, for the Plasmodium genus primers. Post hoc repeat PCR analysis of all 15 tests with discrepant results resolved 4 results in favor of LAMP, suggesting that the primary analysis had underestimated diagnostic accuracy. Conclusions. Malaria LAMP had a diagnostic accuracy similar to that of nested PCR, with a greatly reduced time to result, and was superior to expert microscopy. PMID:23633403
Turnes, Juan; Domínguez-Hernández, Raquel; Casado, Miguel Ángel
To evaluate the cost-effectiveness of a strategy based on direct-acting antivirals (DAAs) following the marketing of simeprevir and sofosbuvir (post-DAA) versus a pre-direct-acting antiviral strategy (pre-DAA) in patients with chronic hepatitis C, from the perspective of the Spanish National Health System. A decision tree combined with a Markov model was used to estimate the direct health costs (€, 2016) and health outcomes (quality-adjusted life years, QALYs) throughout the patient's life, with an annual discount rate of 3%. The sustained virological response, percentage of patients treated or not treated in each strategy, clinical characteristics of the patients, annual likelihood of transition, costs of treating and managing the disease, and utilities were obtained from the literature. The cost-effectiveness analysis was expressed as an incremental cost-effectiveness ratio (incremental cost per QALY gained). A deterministic sensitivity analysis and a probabilistic sensitivity analysis were performed. The post-DAA strategy showed higher health costs per patient (€30,944 vs. €23,707) than the pre-DAA strategy. However, it was associated with an increase of QALYs gained (15.79 vs. 12.83), showing an incremental cost-effectiveness ratio of €2,439 per QALY. The deterministic sensitivity analysis and the probabilistic sensitivity analysis showed the robustness of the results, with the post-DAA strategy being cost-effective in 99% of cases compared to the pre-DAA strategy. Compared to the pre-DAA strategy, the post-DAA strategy is efficient for the treatment of chronic hepatitis C in Spain, resulting in a much lower cost per QALY than the efficiency threshold used in Spain (€30,000 per QALY). Copyright © 2017 Elsevier España, S.L.U., AEEH y AEG. All rights reserved.
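Two of the quantities in this analysis are simple to reproduce: discounting at 3% per year and the incremental cost-effectiveness ratio. Using the per-patient costs and QALYs reported above:

    def discounted_total(per_year, rate=0.03):
        """Sum a yearly stream of costs or QALYs discounted at `rate`."""
        return sum(x / (1.0 + rate) ** t for t, x in enumerate(per_year))

    print(round(discounted_total([1.0] * 10), 2))  # 10 years of 1 QALY/yr -> 8.79

    # ICER = (cost_post - cost_pre) / (QALY_post - QALY_pre), from the abstract:
    icer = (30944 - 23707) / (15.79 - 12.83)
    print(round(icer))  # ~2445 EUR/QALY, matching the reported 2,439 up to rounding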
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
Investigating and understanding fouling in a planar setup using ultrasonic methods.
Wallhäusser, E; Hussein, M A; Becker, T
2012-09-01
Fouling is an unwanted deposit on heat transfer surfaces and occurs regularly in foodstuff heat exchangers. Fouling causes high costs because heat exchangers must be cleaned and cleaning success cannot easily be monitored; cleaning cycles in the foodstuff industry are therefore usually run longer than necessary, adding further cost. In this paper, a setup is described with which it is possible, first, to produce dairy protein fouling similar to that found in industrial heat exchangers and, second, to detect the presence and absence of such fouling using an ultrasonic measuring method. The developed setup resembles a planar heat exchanger in which fouling can be produced and cleaned reproducibly. Fouling presence, absence, and cleaning progress can be monitored using an ultrasonic detection unit. The setup is described theoretically with electrical and mechanical lumped circuits to derive the wave equation and the transfer function used in a sensitivity analysis. The sensitivity analysis identified the influencing quantities and showed that fouling is measurable. First experimental results are also compared with results from the sensitivity analysis.
Colorado River basin sensitivity to disturbance impacts
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that account for climate impacts of this warming are often provided by modeling tools which rely on uncertain inputs; to fully understand impacts on streamflow, sensitivity analysis can help determine how models respond to changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include runoff/evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, the streamflow impact of 1°C of warming was equivalent to that of a 30% loss of forest cover, while a 10% precipitation loss was equivalent to a 90% forest cover decline. Scenarios combining multiple disturbances led to unexpected results in which changes could either magnify or diminish extremes, such as low and peak flows and streamflow timing, depending on the strength and direction of the forcing. These results indicate the importance of understanding model sensitivities under disturbance impacts in order to manage these shifts, plan for future water resource changes, and determine how the impacts will affect the sustainability and adaptability of food-energy-water systems.
Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J
2016-01-01
(1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I² statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I² point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
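The Bayesian nomograms mentioned above rest on simple likelihood-ratio arithmetic. A hedged sketch using the pooled prospective-subgroup estimates; the 25% pre-test probability of malignancy is an illustrative assumption, not a study value:

```python
# Post-test probability of malignancy from pooled sensitivity/specificity,
# the same arithmetic that underlies a Bayesian nomogram.
sens, spec = 0.882, 0.995           # pooled prospective-subgroup estimates

lr_pos = sens / (1 - spec)          # positive likelihood ratio (~176)
lr_neg = (1 - sens) / spec          # negative likelihood ratio (~0.12)

pretest_p = 0.25                    # hypothetical pre-test probability
pre_odds = pretest_p / (1 - pretest_p)
post_odds = pre_odds * lr_pos       # odds after a malignant FNA result
post_p = post_odds / (1 + post_odds)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, post-test P = {post_p:.3f}")
```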
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
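A minimal sketch of the core DELSA idea, not the authors' code: derivative-based first-order indices are computed at many sampled parameter sets and weighted by an assumed prior variance per parameter, so importance can be mapped across the parameter space. The toy model and all numbers are illustrative:

```python
import numpy as np

def toy_model(theta):
    # stand-in nonlinear response; in the paper this is a hydrologic model run
    k, s = theta
    return s * (1.0 - np.exp(-k))

def delsa_first_order(model, thetas, var_prior, h=1e-6):
    # First-order DELSA indices at each sampled parameter set: squared local
    # gradients weighted by prior parameter variances, normalized to sum to 1.
    n, p = thetas.shape
    indices = np.empty((n, p))
    for i, theta in enumerate(thetas):
        grads = np.empty(p)
        for j in range(p):
            step = np.zeros(p)
            step[j] = h
            grads[j] = (model(theta + step) - model(theta - step)) / (2 * h)
        contrib = grads ** 2 * var_prior
        indices[i] = contrib / contrib.sum()
    return indices

rng = np.random.default_rng(0)
thetas = rng.uniform([0.1, 0.5], [2.0, 3.0], size=(100, 2))  # (k, s) samples
print(delsa_first_order(toy_model, thetas, np.array([0.25, 0.4])).mean(axis=0))
```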
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Validation of insulin sensitivity and secretion indices derived from the liquid meal tolerance test.
Maki, Kevin C; Kelley, Kathleen M; Lawless, Andrea L; Hubacher, Rachel L; Schild, Arianne L; Dicklin, Mary R; Rains, Tia M
2011-06-01
A liquid meal tolerance test (LMTT) has been proposed as a useful alternative to more labor-intensive methods of assessing insulin sensitivity and secretion. This substudy, conducted at the conclusion of a randomized, double-blind crossover trial, compared insulin sensitivity indices from a LMTT (Matsuda insulin sensitivity index [MISI] and LMTT disposition index [LMTT-DI]) with indices derived from minimal model analysis of results from the insulin-modified intravenous glucose tolerance test (IVGTT) (insulin sensitivity index [S(I)] and disposition index [DI]). Participants included men (n = 16) and women (n = 8) without diabetes but with increased abdominal adiposity (waist circumference ≥102 cm and ≥89 cm, respectively) and mean age of 48.9 years. The correlation between S(I) and the MISI was 0.776 (P < 0.0001). The respective associations between S(I) and MISI with waist circumference (r = -0.445 and -0.554, both P < 0.05) and body mass index were similar (r = -0.500 and -0.539, P < 0.05). The correlation between DI and LMTT-DI was 0.604 (P = 0.002). These results indicate that indices of insulin sensitivity and secretion derived from the LMTT correlate well with those from the insulin-modified IVGTT with minimal model analysis, suggesting that they may be useful for application in clinical and population studies of glucose homeostasis.
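For readers unfamiliar with the MISI, a hedged sketch of the standard Matsuda-DeFronzo calculation from meal-test samples; the glucose and insulin values below are illustrative, not study data, and units assume mg/dL and μU/mL:

```python
import numpy as np

def matsuda_isi(glucose, insulin):
    # Standard Matsuda-DeFronzo formula: 10000 / sqrt(G0 * I0 * Gmean * Imean),
    # where G0/I0 are fasting values and Gmean/Imean are test-period means.
    g0, i0 = glucose[0], insulin[0]
    g_mean, i_mean = np.mean(glucose), np.mean(insulin)
    return 10000.0 / np.sqrt(g0 * i0 * g_mean * i_mean)

glucose = np.array([95, 160, 140, 120, 105])   # mg/dL at sampling times
insulin = np.array([8, 60, 50, 35, 20])        # uU/mL at the same times
print(round(matsuda_isi(glucose, insulin), 2))
```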
Hou, Lan-Gong; Zou, Song-Bing; Xiao, Hong-Lang; Yang, Yong-Gang
2013-01-01
The standardized FAO56 Penman-Monteith model, widely regarded as the most reliable method under both humid and arid climatic conditions, provides reference evapotranspiration (ETo) estimates for planning and efficient use of agricultural water resources. Sensitivity analysis is important for understanding the relative importance of climatic variables to the variation of reference evapotranspiration. In this study, a non-dimensional relative sensitivity coefficient was employed to predict responses of ETo to perturbations of four climatic variables in the Ejina oasis, northwest China. A 20-year historical dataset of daily air temperature, wind speed, relative humidity and daily sunshine duration in the Ejina oasis was used in the analysis. Results showed that daily sensitivity coefficients exhibited large fluctuations during the growing season, and that shortwave radiation was in general the most sensitive variable for the Ejina oasis, followed by air temperature, wind speed and relative humidity. According to this study, the response of ETo to perturbations of air temperature, wind speed, relative humidity and shortwave radiation can be usefully predicted from their sensitivity coefficients.
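A minimal sketch of the non-dimensional relative sensitivity coefficient used in studies of this kind, S_v = (ΔETo/ETo)/(Δv/v), estimated by perturbing one climate variable at a time. The toy_eto function below is a stand-in, not the FAO56 equation:

```python
def relative_sensitivity(eto_model, variables, name, rel_step=0.01):
    # Non-dimensional coefficient: fractional change in ETo per fractional
    # change in one input variable, all others held fixed.
    base = eto_model(**variables)
    perturbed = dict(variables)
    perturbed[name] = variables[name] * (1.0 + rel_step)
    return ((eto_model(**perturbed) - base) / base) / rel_step

def toy_eto(temp, wind, rh, rad):
    # illustrative placeholder, NOT the FAO56 Penman-Monteith equation
    return 0.05 * temp + 0.3 * rad + 0.1 * wind * (1.0 - rh)

vars_today = {"temp": 25.0, "wind": 2.0, "rh": 0.35, "rad": 20.0}
for v in vars_today:
    print(v, round(relative_sensitivity(toy_eto, vars_today, v), 3))
```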
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo
2015-04-01
Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Cost estimation was carried out from a societal perspective, ie, including direct costs (eg, treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3290 (sensitivity analysis, 3290 to 8080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59,300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with costs having a high probability of exceeding €150 billion per year. These results emphasize the advantages of controlling EDC exposure.
Dunphy, C H; Polski, J M; Evans, H L; Gardner, L J
2001-08-01
Immunophenotyping of bone marrow (BM) specimens with acute myelogenous leukemia (AML) may be performed by flow cytometric (FC) or immunohistochemical (IH) techniques. Some markers (CD34, CD15, and CD117) are available for both techniques. Myeloperoxidase (MPO) analysis may be performed by enzyme cytochemical (EC) or IH techniques. To determine the reliability of these markers and MPO by these techniques, we designed a study to compare the results of analyses of these markers and MPO by FC (CD34, CD15, and CD117), EC (MPO), and IH (CD34, CD15, CD117, and MPO) techniques. Twenty-nine AMLs formed the basis of the study. These AMLs all had been immunophenotyped previously by FC analysis; 27 also had had EC analysis performed. Of the AMLs, 29 had BM core biopsies and 26 had BM clots that could be evaluated. The paraffin blocks of the 29 BM core biopsies and 26 BM clots were stained for CD34, CD117, MPO, and CD15. These results were compared with results by FC analysis (CD34, CD15, and CD117) and EC analysis (MPO). Immunodetection of CD34 expression in AML had a similar sensitivity by FC and IH techniques. Immunodetection of CD15 and CD117 had a higher sensitivity by FC analysis than by IH analysis. Detection of MPO by IH analysis was more sensitive than by EC analysis. There was no correlation of French-American-British (FAB) subtype of AML with CD34 or CD117 expression. Expression of CD15 was associated with AMLs with a monocytic component. Myeloperoxidase reactivity by IH analysis was observed in AMLs originally FAB subtyped as M0. CD34 can be equally detected by FC and IH techniques. CD15 and CD117 are better detected by FC analysis and MPO is better detected by IH analysis.
[Health economics analysis of specific immunotherapy in allergic rhinitis accompanied with asthma].
Chen, Jianjun; Xiang, Jisheng; Wang, Yanjun; Shi, Qiumei; Tan, Huifang; Kong, Weijia
2013-09-01
To investigate the cost-effectiveness of standardized specific immunotherapy (SIT) for allergic rhinitis patients accompanied with asthma (ARAS) in China. Forty ARAS patients sensitized to house dust mite (HDM) were administered SIT (SIT group) or medication alone (control group). Alutard Dermatophagoides pteronyssinus vaccine from the ALK company was used for immunotherapy. Symptom-control medication was used according to the ARIA and GINA guidelines. Cost-effectiveness ratio (CER) and incremental cost-effectiveness ratio (ICER) analyses were conducted. Effectiveness was measured in terms of symptom scores, quality of life, and objective improvement of rhinitis and asthma. Sensitivity analysis was conducted to verify the stability of the results. The cost of the SIT group for 1 year (6578 yuan) was higher than that of the control group (1733.3 yuan), while the CER and ICER of the SIT group were significantly better than those of the control group in all items: 1686.7 yuan in the SIT group versus 3466.6 yuan in the control group for nasal symptom scores, 4698.6 yuan versus 5777.8 yuan for asthma symptom scores, and 3462.1 yuan versus 8666.7 yuan. A sensitivity analysis with the price set 10 percent higher or lower showed the same results. The cost-effectiveness of specific immunotherapy (SIT) for mite-sensitized ARAS patients was better than that of medication alone.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Sensitivity of charge transport measurements to local inhomogeneities
NASA Astrophysics Data System (ADS)
Koon, Daniel; Wang, Fei; Hjorth Petersen, Dirch; Hansen, Ole
2012-02-01
We derive analytic expressions for the sensitivity of resistive and Hall measurements to local variations in a specimen's material properties in the combined linear limit of both small magnetic fields and small perturbations, presenting exact, algebraic expressions both for four-point probe measurements on an infinite plane and for symmetric, circular van der Pauw discs. We then generalize the results to obtain corrections to the sensitivities both for finite magnetic fields and for finite perturbations. Calculated functions match published results and computer simulations, provide an intuitive, visual explanation for experimental misassignment of carrier type in n-type ZnO, and agree with published experimental results for holes in a uniform material. These results simplify calculation and plotting of the sensitivities on an N×N grid from a problem of order N^5 to one of order N^3 in the arbitrary case and of order N^2 in the handful of cases that can be solved exactly, putting a powerful tool for inhomogeneity analysis in the hands of the researcher: calculation of the sensitivities requires little more than the solution of Laplace's equation on the specimen geometry.
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has been slow to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
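Of the methods surveyed above, LHS-PRCC is the most compact to sketch. A hedged Python illustration on a toy SIR-type outbreak model, which is a stand-in rather than the paper's cholera or schistosomiasis models:

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    # Partial rank correlation: rank-transform, then correlate the residuals
    # of each parameter and the output after regressing out the other columns.
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    coeffs = []
    for j in range(R.shape[1]):
        A = np.column_stack([np.delete(R, j, axis=1), np.ones(len(ry))])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

def peak_infected(beta, gamma, i0=1e-3, days=200):
    # Euler-stepped SIR; output is the epidemic peak (a scalar model response).
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):
        s, i = s - beta * s * i, i + beta * s * i - gamma * i
        peak = max(peak, i)
    return peak

sampler = qmc.LatinHypercube(d=2, seed=1)
params = qmc.scale(sampler.random(500), [0.1, 0.05], [1.0, 0.5])  # beta, gamma
y = np.array([peak_infected(b, g) for b, g in params])
print("PRCC (beta, gamma):", prcc(params, y))
```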
Choi, William; Tong, Xiuli; Cain, Kate
2016-08-01
This 1-year longitudinal study examined the role of Cantonese lexical tone sensitivity in predicting English reading comprehension and the pathways underlying their relation. Multiple measures of Cantonese lexical tone sensitivity, English lexical stress sensitivity, Cantonese segmental phonological awareness, general auditory sensitivity, English word reading, and English reading comprehension were administered to 133 Cantonese-English unbalanced bilingual second graders. Structural equation modeling analysis identified transfer of Cantonese lexical tone sensitivity to English reading comprehension. This transfer was realized through a direct pathway via English stress sensitivity and also an indirect pathway via English word reading. These results suggest that prosodic sensitivity is an important factor influencing English reading comprehension and that it needs to be incorporated into theoretical accounts of reading comprehension across languages. Copyright © 2016 Elsevier Inc. All rights reserved.
Attachment Insecurity Predicts Punishment Sensitivity in Anorexia Nervosa.
Keating, Charlotte; Castle, David J; Newton, Richard; Huang, Chia; Rossell, Susan L
2016-10-01
Individuals with anorexia nervosa (AN) experience insecure attachment. We investigated whether insecure attachment is associated with punishment and reward sensitivity in women with AN. Women with AN (n = 24) and comparison women (CW; n = 26) completed the Eating Disorder Examination Questionnaire, the Depression Anxiety Stress Scale, the Attachment Style Questionnaire, and the Sensitivity to Punishment/Sensitivity to Reward Questionnaire. Participants with AN returned higher ratings for insecure attachment (anxious and avoidant) experiences and greater sensitivity to punishment (p = 0.001) than CW. In AN, sensitivity to punishment was positively correlated with anxious attachment and negative emotionality but not eating disorder symptoms. Regression analysis revealed that anxious attachment independently predicted punishment sensitivity in AN. Anxious attachment experiences are related to punishment sensitivity in AN, independent of negative emotionality and eating disorder symptoms. Results support ongoing investigation of the contribution of attachment experiences in treatment and recovery.
Cifuentes, Ricardo A; Murillo-Rojas, Juan; Avella-Vargas, Esperanza
2016-03-03
In the search to prevent hemorrhages associated with anticoagulant therapy, a major goal is to validate predictors of sensitivity to warfarin. However, previous studies in Colombia that included polymorphisms in the VKORC1 and CYP2C9 genes as predictors reported different algorithm performances to explain dose variations, and did not evaluate the prediction of sensitivity to warfarin. To determine the accuracy of the pharmacogenetic analysis, which includes the CYP2C9 *2 and *3 and VKORC1 1639G>A polymorphisms in predicting patients' sensitivity to warfarin at the Hospital Militar Central, a reference center for patients born in different parts of Colombia. Demographic and clinical data were obtained from 130 patients with stable doses of warfarin for more than two months. Next, their genotypes were obtained through a melting curve analysis. After verifying the Hardy-Weinberg equilibrium of the genotypes from the polymorphisms, a statistical analysis was done, which included multivariate and predictive approaches. A pharmacogenetic model that explained 52.8% of dose variation (p<0.001) was built, which was only 4% above the performance resulting from the same data using the International Warfarin Pharmacogenetics Consortium algorithm. The model predicting the sensitivity achieved an accuracy of 77.8% and included age (p=0.003), polymorphisms *2 and *3 (p=0.002) and polymorphism 1639G>A (p<0.001) as predictors. These results in a mixed population support the prediction of sensitivity to warfarin based on polymorphisms in VKORC1 and CYP2C9 as a valid approach in Colombian patients.
Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen
2011-11-01
Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt to use matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) for detecting the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm are described. In addition, microbiologically influenced corrosion of aluminum exposed to seawater by a marine fungus was observed, and the fungus was identified using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis of biofilms could dramatically speed up biofilm studies and provide new insights, owing to its simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for studying biofilms from natural water without the need for tedious culturing steps or complicated sample pretreatment procedures. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Alonso-Contes, C.; Gerber, S.; Bliznyuk, N.; Duerr, I.
2017-12-01
Wetlands contribute approximately 20 to 40% of global methane emissions. We built a methane model for tropical and subtropical forests that allows inundated conditions, following the approaches used in more complex global biogeochemical emission models (LPJWhyMe and CLM4Me). The model was designed to replace model formulations with field and remotely sensed data for two essential drivers, plant productivity and hydrology, which allows us to focus directly on the central processes of methane production, consumption and transport. One of our long-term goals is to make the model available to scientists interested in including methane modeling in their location of study, and sensitivity analysis results help focus field data collection efforts. Here, we present results from a pilot global sensitivity analysis of the model aimed at determining which parameters and processes contribute most to the model's uncertainty in methane emissions. Results show that parameters related to water table behavior, carbon input (in the form of plant productivity) and rooting depth affect simulated methane emissions the most. Current efforts include repeating the sensitivity analysis on methane emission outputs from an updated model that incorporates a soil heat flux routine, to determine the extent to which the soil temperature parameters affect CH4 emissions. We are currently collecting field data (Summer 2017) for comparison among three different landscapes located in the Ordway-Swisher Biological Station in Melrose, FL, with soil moisture and CH4 emission data from four different wetland types. Having data from four wetland types allows calibration of the model to diverse soil, water and vegetation characteristics.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package—Flexible Modeling Framework (FME)—and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.
ERIC Educational Resources Information Center
Rich, Joseph R.; Boudreau, John W.
1987-01-01
Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…
Local sensitivity of per-recruit fishing mortality reference points.
Cadigan, N G; Wang, S
2016-12-01
We study the sensitivity of fishery management per-recruit harvest rates, which may be part of a quantitative harvest strategy designed to achieve some objective for catch or population size. We use a local influence sensitivity analysis to derive equations that describe how these reference harvest rates are affected by perturbations to productivity processes. These equations give a basic theoretical understanding of sensitivity that can be used to predict the likely impacts of future changes in productivity. Our results indicate that per-recruit reference harvest rates are more sensitive to perturbations when the equilibrium catch or population size per recruit, as functions of the harvest rate, have less curvature near the reference point. Overall, our results suggest that per-recruit reference points will, with some exceptions, usually increase if (1) growth rates increase, (2) natural mortality rates increase, or (3) fishery selectivity shifts to older ages.
Sensitivity Enhancement of FBG-Based Strain Sensor.
Li, Ruiya; Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Li, Tianliang; Mao, Jian
2018-05-17
A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by bonding the FBG to a substrate with a lever structure, a mechanical configuration that amplifies the strain reaching the FBG. Because this configuration has high stiffness, the sensor also achieves a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, consistent with the theoretical analysis, which is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits excellent strain-sensitivity enhancement over a wide frequency range and can be used for small-amplitude micro-strain measurement in harsh industrial environments.
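A quick consistency check of the figures quoted above: since the lever multiplies the strain reaching the grating, overall sensitivity should equal the mechanical gain times the bare-FBG sensitivity, and the implied bare value (~1.2 pm/με) matches the typical figure for a 1550 nm FBG:

```python
# Implied bare-FBG sensitivity from the reported gain and overall sensitivity.
amplification = 5.2      # reported mechanical gain of the lever structure
overall = 6.2            # reported sensitivity, pm per microstrain
bare = overall / amplification
print(f"implied bare-FBG sensitivity: {bare:.2f} pm/ue")  # ~1.19, typical at 1550 nm
```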
A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves
NASA Astrophysics Data System (ADS)
Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.
2012-04-01
The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200,000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a second-order expansion of the model outputs in a neighborhood of the reference parameter values. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
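A minimal sketch, not the authors' code, contrasting the two index types the abstract compares: a local, derivative-based index at the reference parameter values versus a crude global, variance-based first-order index from Monte Carlo sampling. The two-parameter model is an illustrative stand-in for the ice sheet model:

```python
import numpy as np

def f(p):
    # cheap stand-in for one scalar output of the ice sheet model
    return p[0] ** 2 + 0.5 * p[0] * p[1] + np.sin(p[1])

ref = np.array([1.0, 0.8])       # reference parameter values
span = np.array([0.2, 0.4])      # assumed parameter variability
h = 1e-6

# Local indices: one-at-a-time derivatives scaled by parameter variability.
local = np.empty(2)
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    local[j] = abs((f(ref + e) - f(ref - e)) / (2 * h)) * span[j]
print("local (normalized):", local / local.sum())

# Global first-order indices: Var(E[y|p_j]) / Var(y), conditional means
# estimated by binning Monte Carlo samples of each parameter.
rng = np.random.default_rng(0)
P = ref + rng.uniform(-1.0, 1.0, size=(50000, 2)) * span
y = f(P.T)
for j in range(2):
    edges = np.quantile(P[:, j], np.linspace(0, 1, 21)[1:-1])
    idx = np.digitize(P[:, j], edges)
    cond = np.array([y[idx == b].mean() for b in range(20)])
    w = np.array([(idx == b).mean() for b in range(20)])
    print(f"global S{j + 1}:", (w * (cond - y.mean()) ** 2).sum() / y.var())
```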
NASA Technical Reports Server (NTRS)
1971-01-01
The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.
Experiential training for enhancing intercultural sensitivity.
Jain, Sachin
2013-01-01
This project aims to enhance intercultural sensitivity using cross-cultural movies and focused group discussions with invited guests. The treatment and control groups each consisted of 9 Caucasian participants. The researcher conducted 8 group sessions with the participants of the treatment group. Pre- and post-intervention data were collected on the Intercultural Sensitivity Scale. Results show a significant increase in the scores of the treatment group and no significant difference between pre and post scores in the control group. Further analysis on the five dimensions of the Intercultural Sensitivity Scale was also conducted.
NASA Astrophysics Data System (ADS)
Chen, Libin; Yang, Zhifeng; Liu, Haifei
2017-12-01
Inter-basin water transfers carrying large amounts of nitrogen pose serious threats to human health, biodiversity, and air and water quality in the recipient area. Danjiangkou Reservoir, the source reservoir for China's South-to-North Water Diversion Middle Route Project, suffers from total nitrogen pollution that threatens the water transfer to a number of metropolises, including the capital, Beijing. To locate the main source of nitrogen pollution into the reservoir, especially near the Taocha canal head, where the intake of the water transfer begins, we constructed a 3-D water quality model. We then used an inflow sensitivity analysis method to assess how significantly inflow from each tributary contributes to total nitrogen pollution and affects water quality. The results indicated that the Han River was the most significant river, with a sensitivity index of 0.340, followed by the Dan River with a sensitivity index of 0.089, while the Guanshan River and the Lang River were not significant, with sensitivity indices of 0.002 and 0.001, respectively. This result implies that, for sources of total nitrogen pollution at the Taocha canal head of the Danjiangkou Reservoir, the concentration and amount of nitrogen inflow outweigh the geographical position of the tributary.
High frequency QRS ECG predicts ischemic defects during myocardial perfusion imaging
NASA Technical Reports Server (NTRS)
Rahman, Atiar
2006-01-01
Background: Changes in high frequency QRS components of the electrocardiogram (HF QRS ECG) (150-250 Hz) are more sensitive than changes in conventional ST segments for detecting myocardial ischemia. We investigated the accuracy of 12-lead HF QRS ECG in detecting ischemia during adenosine tetrofosmin myocardial perfusion imaging (MPI). Methods and Results: 12-lead HF QRS ECG recordings were obtained from 45 patients before and during adenosine technetium-99m tetrofosmin MPI tests. Before the adenosine infusions, recordings of HF QRS were analyzed according to a morphological score that incorporated the number, type and location of reduced amplitude zones (RAZs) present in the 12 leads. During the adenosine infusions, recordings of HF QRS were analyzed according to the maximum percentage changes (in both the positive and negative directions) that occurred in root mean square (RMS) voltage amplitudes within the 12 leads. The best set of prospective HF QRS criteria had a sensitivity of 94% and a specificity of 83% for correctly identifying the MPI result. The sensitivity of simultaneous ST segment changes (18%) was significantly lower than that of any individual HF QRS criterion (P<0.001). Conclusions: Analysis of 12-lead HF QRS ECG is highly sensitive and specific for detecting ischemic perfusion defects during adenosine MPI stress tests and significantly more sensitive than analysis of conventional ST segments.
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
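A hedged sketch of the correlation-based screening step described above: given sampled inputs and one response, compute Pearson and Spearman coefficients per input. The parameter names and data are illustrative, not the benchmark's actual 17 inputs (scipy is assumed):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 3))             # 300 samples of 3 hypothetical inputs
# response with a linear, a nonlinear-monotonic, and a null input
y = 2.0 * X[:, 0] + np.exp(0.5 * X[:, 1]) + 0.1 * rng.normal(size=300)

for j, name in enumerate(["gap_conductance", "fuel_conductivity", "inlet_temp"]):
    r_p, _ = pearsonr(X[:, j], y)         # linear association
    r_s, _ = spearmanr(X[:, j], y)        # monotonic (rank) association
    print(f"{name:18s} Pearson {r_p:+.2f}  Spearman {r_s:+.2f}")
```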
Zhang, Yang; Shen, Jing; Li, Yu
2018-01-01
Assessing and quantifying atmospheric vulnerability is a key issue in urban environmental protection and management. This paper integrated the Analytical Hierarchy Process (AHP), fuzzy synthesis evaluation and Geographic Information System (GIS) spatial analysis into an Exposure-Sensitivity-Adaptive capacity (ESA) framework to quantitatively assess atmospheric environment vulnerability in the Beijing-Tianjin-Hebei (BTH) region with spatial and temporal comparisons. The elaboration of the relationships between atmospheric environment vulnerability and the indices of exposure, sensitivity, and adaptive capacity enables analysis of atmospheric environment vulnerability. Our findings indicate that the atmospheric environment vulnerability of 13 cities in the BTH region exhibits obvious spatial heterogeneity, which is caused by regional diversity in exposure, sensitivity, and adaptive capacity indices. The results of the atmospheric environment vulnerability assessment and the cause analysis can provide guidance for picking out key control regions and recognizing vulnerable indicators for study sites. The framework developed in this paper can also be replicated at different spatial and temporal scales using context-specific datasets to support environmental management. PMID:29342852
Zhao, Yueyuan; Zhang, Xuefeng; Zhu, Fengcai; Jin, Hui; Wang, Bei
2016-08-02
Objective: To estimate the cost-effectiveness of hepatitis E vaccination among pregnant women in epidemic regions. Methods: A decision tree model was constructed to evaluate the cost-effectiveness of 3 hepatitis E virus vaccination strategies from a societal perspective. The model parameters were estimated on the basis of published studies and experts' experience. Sensitivity analysis was used to evaluate the uncertainties of the model. Results: Vaccination was more economically effective on the basis of the incremental cost-effectiveness ratio (ICER < 3 times China's per capita gross domestic product per quality-adjusted life year); moreover, screening and vaccination had higher QALYs and lower costs compared with universal vaccination. No parameters significantly impacted the ICER in one-way sensitivity analysis, and probabilistic sensitivity analysis also showed screening and vaccination to be the dominant strategy. Conclusion: Screening and vaccination is the most economical strategy for pregnant women in epidemic regions; however, further studies are necessary to confirm the efficacy and safety of the hepatitis E vaccines.
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
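ADIFOR itself transforms Fortran source; as a hedged, modern stand-in, the sketch below uses JAX to contrast exact forward-mode AD derivatives with finite differences and their step-size sensitivity. The rotor-response function is a toy, not the VII code:

```python
import jax
import jax.numpy as jnp

def blade_response(x):
    # toy stand-in for one output of a comprehensive rotor analysis
    return jnp.sin(x[0]) * jnp.exp(0.5 * x[1]) + x[0] * x[1] ** 2

x0 = jnp.array([0.3, 1.2])
exact = jax.jacfwd(blade_response)(x0)   # forward-mode AD: exact, no step size

def finite_diff(f, x, h):
    # one-sided finite differences, the approach AD replaces
    return jnp.stack([(f(x + h * jnp.eye(len(x))[i]) - f(x)) / h
                      for i in range(len(x))])

print("AD:        ", exact)
print("FD, h=1e-3:", finite_diff(blade_response, x0, 1e-3))
# In float32, h=1e-7 falls below round-off resolution and the estimate degrades,
# illustrating the step-size problem the abstract mentions.
print("FD, h=1e-7:", finite_diff(blade_response, x0, 1e-7))
```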
Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee
2007-05-08
A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under the optimal flow velocity, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm⁻¹ in its SERS spectrum. The limit of detection, using the SERS microfluidic sensor, was found to be below the 1-2 ppb level, comparable to the result of LC-MS detection. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.
Janssen, Ellen M; Jerome, Gerald J; Dalcin, Arlene T; Gennusa, Joseph V; Goldsholl, Stacy; Frick, Kevin D; Wang, Nae-Yuh; Appel, Lawrence J; Daumit, Gail L
2017-06-01
In the ACHIEVE randomized controlled trial, an 18-month behavioral intervention accomplished weight loss in persons with serious mental illness who attended community psychiatric rehabilitation programs. This analysis estimates costs for delivering the intervention during the study. It also estimates expected costs to implement the intervention more widely in a range of community mental health programs. Using empirical data, costs were calculated from the perspective of a community psychiatric rehabilitation program delivering the intervention. Personnel and travel costs were calculated using time sheet data. Rent and supply costs were calculated using rent per square foot and intervention records. A univariate sensitivity analysis and an expert-informed sensitivity analysis were conducted. With 144 participants receiving the intervention and a mean weight loss of 3.4 kg, costs of $95 per participant per month and $501 per kilogram lost in the trial were calculated. In univariate sensitivity analysis, costs ranged from $402 to $725 per kilogram lost. Through expert-informed sensitivity analysis, it was estimated that rehabilitation programs could implement the intervention for $68 to $85 per client per month. Costs of implementing the ACHIEVE intervention were in the range of other intensive behavioral weight loss interventions. Wider implementation of efficacious lifestyle interventions in community mental health settings will require adequate funding mechanisms. © 2017 The Obesity Society.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-06-30
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in chemical and allied industries based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures that minimize the probability of failure. A case study is analysed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant, where the most significant danger is a fuel spill. FTA indicates that the most likely event is a fuel spill in the tank-truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory.
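A minimal sketch of the FTA computation follows, assuming independent basic events and hypothetical probabilities (the paper's actual fault tree and event data for the Valencia terminal are not reproduced):

```python
# Minimal fault-tree sketch under independence: OR gate is
# 1 - prod(1 - p_i), AND gate is prod(p_i). All values hypothetical.
from math import prod

def gate_or(ps):
    return 1.0 - prod(1.0 - p for p in ps)

def gate_and(ps):
    return prod(ps)

p_hose_rupture   = 1e-3   # hypothetical basic-event probabilities
p_operator_error = 5e-3   # (human factor)
p_overfill       = 2e-3
p_alarm_fails    = 1e-2

# Top event: fuel spill in the tank-truck loading area
p_spill = gate_or([p_hose_rupture,
                   gate_and([p_overfill, p_alarm_fails]),
                   p_operator_error])
print(f"P(top event) = {p_spill:.2e}")
```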
NASA Technical Reports Server (NTRS)
Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo
1995-01-01
Premixed combustors, which are being considered for low NOx engines, are susceptible to instabilities due to feedback between pressure perturbations and combustion. This feedback can cause damaging mechanical vibrations of the system as well as degrade the emissions characteristics and combustion efficiency. In a lean combustor instabilities can also lead to blowout. A model was developed to perform linear combustion-acoustic stability analysis using detailed chemical kinetic mechanisms. The Lewis Kinetics and Sensitivity Analysis Code, LSENS, was used to calculate the sensitivities of the heat release rate to perturbations in density and temperature. In the present work, an assumption was made that the mean flow velocity was small relative to the speed of sound. Results of this model showed the regions of growth of perturbations to be most sensitive to the reflectivity of the boundary when reflectivities were close to unity.
Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles
NASA Astrophysics Data System (ADS)
Kaur, Ramanpreet; Sharma, Sapna
2018-06-01
The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the proportion of high-sensitivity vehicles present on the road. In order to investigate the propagating behavior of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton solution is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability increases with the fraction parameter. The theoretical findings are verified via direct numerical simulation.
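For orientation, Nagatani-type lattice hydrodynamic models, of which the above is an extension, are commonly written as a discrete continuity equation plus a delayed flux relaxation toward an optimal velocity function V:

\[ \partial_t \rho_j + \rho_0\,(\rho_j v_j - \rho_{j-1} v_{j-1}) = 0, \qquad \rho_j(t+\tau)\,v_j(t+\tau) = \rho_0\, V\!\big(\rho_{j+1}(t)\big), \]

where \(\rho_j\) and \(v_j\) are the density and velocity on lattice site j, \(\rho_0\) is the average density, and the sensitivity \(a = 1/\tau\) is the parameter that differs between the two vehicle classes. This is our summary of the standard baseline model; the exact form of the optimal current difference term used in the paper is not reproduced here.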
Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2007-01-01
We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution, and count rate. The results show that, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared with pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.
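For reference, the single-channel optimal-filter energy resolution that such analyses generalize is usually quoted as (our summary of the standard result, not a formula taken from the paper):

\[ \Delta E_{\mathrm{FWHM}} = 2\sqrt{2\ln 2}\;\left( \int_0^\infty \frac{4\,|s(f)|^2}{N(f)}\, df \right)^{-1/2}, \]

where s(f) is the detector responsivity per unit energy and N(f) is the noise power spectral density; the two-channel, position-sensitive case replaces the scalar ratio with a matrix expression that accounts for the noise correlations computed by the FEM.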
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John
2012-01-01
BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808
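A minimal sketch of the mediation test, assuming simple linear models and a percentile bootstrap for the indirect effect a·b (the study's exact covariates and bootstrap variant may differ); all data below are simulated placeholders:

```python
# Nonparametric bootstrap of the indirect effect: stress -> pain
# sensitivity (a), pain sensitivity -> headache controlling for stress (b).
import numpy as np

rng = np.random.default_rng(0)
n = 53
stress = rng.normal(size=n)
pain = 0.5 * stress + rng.normal(size=n)              # mediator (simulated)
headache = 0.4 * pain + 0.2 * stress + rng.normal(size=n)

def indirect(s, m, y):
    a = np.polyfit(s, m, 1)[0]                        # slope of m ~ s
    X = np.column_stack([np.ones_like(s), m, s])      # y ~ 1 + m + s
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]       # coefficient on m
    return a * b

boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                       # resample cases
    boot[i] = indirect(stress[idx], pain[idx], headache[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
```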
Beltrame, Anna; Guerriero, Massimo; Angheben, Andrea; Gobbi, Federico; Requena-Mendez, Ana; Zammarchi, Lorenzo; Formenti, Fabio; Perandin, Francesca; Bisoffi, Zeno
2017-01-01
Background Schistosomiasis is a neglected infection affecting millions of people, mostly living in sub-Saharan Africa. Morbidity and mortality due to chronic infection are relevant, although schistosomiasis is often clinically silent. Different diagnostic tests have been implemented in order to improve screening and diagnosis, which traditionally rely on parasitological tests with low sensitivity. The aim of this study was to evaluate the accuracy of different tests for the screening of schistosomiasis in African migrants in a non-endemic setting. Methodology/Principal findings A retrospective study was conducted on 373 patients screened at the Centre for Tropical Diseases (CTD) in Negrar, Verona, Italy. Biological samples were tested with: stool/urine microscopy, Circulating Cathodic Antigen (CCA) dipstick test, ELISA, Western blot (WB), immune-chromatographic test (ICT). Test accuracy and predictive values of the immunological tests were assessed primarily on the basis of the results of microscopy (primary reference standard): ICT and WB were the tests with the highest sensitivity (94% and 92%, respectively), with a high NPV (98%). CCA showed the highest specificity (93%), but low sensitivity (48%). The analysis was also conducted using a composite reference standard, CRS (patients classified as infected in case of positive microscopy and/or at least 2 concordant positive immunological tests), and Latent Class Analysis (LCA). The latter two models demonstrated excellent agreement (Cohen’s kappa: 0.92) in classifying the results. In fact, they both confirmed ICT as the test with the highest sensitivity (96%) and NPV (97%); moreover, PPV was reasonably good (78% and 72% according to CRS and LCA, respectively). ELISA was the most specific immunological test (over 99%). The ICT appears to be a suitable screening test, even when used alone. Conclusions The rapid test ICT was the most sensitive test, with the potential to be used as a single screening test for African migrants. PMID:28582412
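The accuracy measures reported throughout this abstract derive from a 2x2 table against the reference standard. A generic helper, with hypothetical counts chosen only to roughly echo the reported ICT figures:

```python
# Generic 2x2 diagnostic-accuracy helper (not the study's data).
def accuracy_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)   # sensitivity: detected among infected
    spec = tn / (tn + fp)   # specificity: negative among uninfected
    ppv  = tp / (tp + fp)   # positive predictive value
    npv  = tn / (tn + fn)   # negative predictive value
    return sens, spec, ppv, npv

# Hypothetical counts for an ICT-like test vs. the reference standard
sens, spec, ppv, npv = accuracy_metrics(tp=47, fp=13, fn=3, tn=150)
print(f"sens={sens:.0%} spec={spec:.0%} ppv={ppv:.0%} npv={npv:.0%}")
```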
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
Spectral sensitivity characteristics simulation for silicon p-i-n photodiode
NASA Astrophysics Data System (ADS)
Urchuk, S. U.; Legotin, S. A.; Osipov, U. V.; Elnikov, D. S.; Didenko, S. I.; Astahov, V. P.; Rabinovich, O. I.; Yaromskiy, V. P.; Kuzmina, K. A.
2015-11-01
In this paper, simulation results for the spectral sensitivity characteristics of silicon p-i-n photodiodes are presented. The influence of the semiconductor material characteristics (doping level, carrier lifetime, surface recombination velocity) and of the device construction and operation modes on the characteristics of the photosensitive structures was analysed in order to optimize them.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
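The variance-reduction idea behind such couplings can be seen in miniature with common random numbers, the baseline the paper improves upon; the toy model below is ours, not the paper's KMC:

```python
# Estimate d/dtheta E[f(X_theta)] by a finite difference, using
# independent draws vs. common random numbers (the simplest coupling).
import numpy as np

rng = np.random.default_rng(1)
theta, h, n = 1.0, 1e-2, 100_000

def f(u, th):
    return np.exp(th * u)        # toy stochastic observable

u1, u2 = rng.normal(size=n), rng.normal(size=n)
indep   = (f(u1, theta + h) - f(u2, theta)) / h   # independent samples
coupled = (f(u1, theta + h) - f(u1, theta)) / h   # common random numbers

print(f"independent: mean = {indep.mean():7.3f}, var = {indep.var():.3e}")
print(f"coupled:     mean = {coupled.mean():7.3f}, var = {coupled.var():.3e}")
```

Both estimators target the same derivative, but the coupled one has a variance smaller by orders of magnitude, which is the same mechanism the goal-oriented coupling exploits for spatial KMC.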
Variation of a test’s sensitivity and specificity with disease prevalence
Leeflang, Mariska M.G.; Rutjes, Anne W.S.; Reitsma, Johannes B.; Hooft, Lotty; Bossuyt, Patrick M.M.
2013-01-01
Background: Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. Methods: We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Results: Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. Interpretation: The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation. PMID:23798453
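A sketch of the final pooling step, a fixed-effect inverse-variance combination of per-review effects of prevalence; the effect sizes and standard errors below are placeholders, not the paper's estimates:

```python
# Inverse-variance pooling: weights are 1/se^2; placeholder inputs.
import numpy as np

effects = np.array([0.12, -0.05, 0.30, 0.08])   # hypothetical per-review slopes
ses     = np.array([0.10,  0.08, 0.20, 0.05])   # hypothetical standard errors

w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```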
Santos, Marcelo R Dos; Sayegh, Ana L C; Armani, Rafael; Costa-Hong, Valéria; Souza, Francis R de; Toschi-Dias, Edgar; Bortolotto, Luiz A; Yonamine, Mauricio; Negrão, Carlos E; Alves, Maria-Janieire N N
2018-05-21
Misuse of anabolic androgenic steroids by athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, their abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure, and the mechanisms underlying these alterations are still unknown. We therefore tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high-frequency activity was lower, while low-frequency activity was higher, in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of large vessels. Simple linear regression analysis showed significant correlations between mean blood pressure and both baroreflex sensitivity and pulse wave velocity. Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users, who also showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population.
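A sketch of the sequence method for spontaneous baroreflex sensitivity, using the commonly cited thresholds (1 mmHg, 5 ms, at least three beats); the study's exact criteria may differ, and the data below are fabricated:

```python
# Sequence method: find runs of >= 3 beats where systolic pressure (SBP,
# mmHg) and the following R-R interval (ms) rise or fall together, then
# average the regression slopes (ms/mmHg).
import numpy as np

def brs_sequences(sbp, rri, min_beats=3):
    d_sbp, d_rri = np.diff(sbp), np.diff(rri)
    # direction of each concordant beat-to-beat step: +1, -1, or 0
    step = np.where((np.abs(d_sbp) >= 1.0) & (np.abs(d_rri) >= 5.0)
                    & (np.sign(d_sbp) == np.sign(d_rri)),
                    np.sign(d_sbp), 0).astype(int)
    slopes, start = [], 0
    for i in range(1, len(step) + 1):
        if i == len(step) or step[i] != step[start]:   # run ends here
            if step[start] != 0 and i - start >= min_beats - 1:
                seg = slice(start, i + 1)              # beats spanned by run
                slopes.append(np.polyfit(sbp[seg], rri[seg], 1)[0])
            start = i
    return float(np.mean(slopes)) if slopes else float("nan")

sbp = np.array([118., 120., 122., 124., 119., 116., 113., 118.])
rri = np.array([820., 832., 845., 858., 824., 801., 780., 815.])
print(f"BRS = {brs_sequences(sbp, rri):.2f} ms/mmHg")
```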
Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer
NASA Astrophysics Data System (ADS)
Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.
2018-03-01
Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors, 0.53 mm on average, may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across subjects (close to +1), i.e. regions with high sensitivity have, on average, more error in the Jacobian determinant.
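The ventilation surrogate in question is the voxelwise Jacobian determinant det(I + ∇u) of the registration displacement field u. A sketch on a synthetic field, assuming unit voxel spacing:

```python
# Voxelwise Jacobian determinant of a displacement field u (inhale ->
# exhale); values > 1 indicate local expansion. The field is synthetic.
import numpy as np

u = np.random.default_rng(2).normal(scale=0.05, size=(3, 32, 32, 32))

J = np.empty((32, 32, 32, 3, 3))
for i in range(3):                       # partials du_i/dx_j on the grid
    grads = np.gradient(u[i])
    for j in range(3):
        J[..., i, j] = grads[j]
J += np.eye(3)                           # I + grad(u)

jac_det = np.linalg.det(J)               # one determinant per voxel
print(f"mean {jac_det.mean():.3f}, min {jac_det.min():.3f}")
```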
Almeida, Suzana C; George, Steven Z; Leite, Raquel D V; Oliveira, Anamaria S; Chaves, Thais C
2018-05-17
We aimed to empirically derive psychosocial and pain sensitivity subgroups using cluster analysis within a sample of individuals with chronic musculoskeletal pain (CMP) and to investigate derived subgroups for differences in pain and disability outcomes. Eighty female participants with CMP answered psychosocial and disability scales and were assessed for pressure pain sensitivity. A cluster analysis was used to derive subgroups, and analysis of variance (ANOVA) was used to investigate differences between subgroups. Psychosocial factors (kinesiophobia, pain catastrophizing, anxiety, and depression) and overall pressure pain threshold (PPT) were entered into the cluster analysis. Three subgroups were empirically derived: cluster 1 (high pain sensitivity and high psychosocial distress; n = 12) characterized by low overall PPT and high psychosocial scores; cluster 2 (high pain sensitivity and intermediate psychosocial distress; n = 39) characterized by low overall PPT and intermediate psychosocial scores; and cluster 3 (low pain sensitivity and low psychosocial distress; n = 29) characterized by high overall PPT and low psychosocial scores compared to the other subgroups. Cluster 1 showed higher values for mean pain intensity (F (2,77) = 10.58, p < 0.001) compared with cluster 3, and cluster 1 showed higher values for disability (F (2,77) = 3.81, p = 0.03) compared with both clusters 2 and 3. Only cluster 1 was distinct from cluster 3 according to both pain and disability outcomes. Pain catastrophizing, depression, and anxiety were the psychosocial variables that best differentiated the subgroups. Overall, these results call attention to the importance of considering pain sensitivity and psychosocial variables to obtain a more comprehensive characterization of CMP patients' subtypes.
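The abstract does not name the clustering algorithm; as one plausible reconstruction, a k-means sketch over the standardized inputs (four psychosocial scores plus overall PPT) followed by a one-way ANOVA across the derived subgroups:

```python
# Cluster-then-compare sketch with placeholder data; k-means is an
# assumption, not necessarily the study's procedure.
import numpy as np
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 5))            # placeholder feature matrix
pain = rng.normal(size=80)              # placeholder outcome

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
groups = [pain[labels == k] for k in range(3)]
print(f_oneway(*groups))                # ANOVA across derived subgroups
```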
Aeroassisted orbit transfer vehicle trajectory analysis
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Suit, William T.
1988-01-01
The emphasis in this study was on the use of multiple-pass trajectories for aerobraking. However, for comparison, single-pass trajectories, trajectories using ballutes, and trajectories corrupted by atmospheric anomalies were run. A two-pass trajectory was chosen to determine the relation between sensitivity to errors and payload to orbit. Trajectories that used only aerodynamic forces for maneuvering could put more weight into the target orbits but were very sensitive to variations from the planned trajectories. Using some thrust control resulted in less payload to orbit but greatly reduced the sensitivity to variations from nominal trajectories. When compared with the non-thrusting trajectories investigated, the judicious use of thrusting resulted in multiple-pass trajectories that gave 97 percent of the payload to orbit with almost none of the sensitivity to variations from the nominal.
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest, the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
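A miniature version of such a numerical experiment: perturb each rate by a known random factor, rerun the model, and rank the rates by how strongly the observable co-varies with each perturbation. The two-rate "model" below is a toy, not the author's spectral-synthesis code:

```python
# Rank rate coefficients by their influence on a toy observable.
import numpy as np

rng = np.random.default_rng(4)

def observable(k1, k2):
    return k1 / (k1 + k2)        # toy line-flux-like quantity

base = np.array([1.0, 2.0])
factors = rng.lognormal(sigma=0.2, size=(500, 2))   # known perturbations
out = np.array([observable(*(base * f)) for f in factors])

for i, name in enumerate(("k1", "k2")):
    r = np.corrcoef(np.log(factors[:, i]), out)[0, 1]
    print(f"{name}: correlation with output = {r:+.2f}")
```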
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F
2009-06-01
Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.
Development and evaluation of a microdevice for amino acid biomarker detection and analysis on Mars.
Skelley, Alison M; Scherer, James R; Aubrey, Andrew D; Grover, William H; Ivester, Robin H C; Ehrenfreund, Pascale; Grunthaner, Frank J; Bada, Jeffrey L; Mathies, Richard A
2005-01-25
The Mars Organic Analyzer (MOA), a microfabricated capillary electrophoresis (CE) instrument for sensitive amino acid biomarker analysis, has been developed and evaluated. The microdevice consists of a four-wafer sandwich combining glass CE separation channels, microfabricated pneumatic membrane valves and pumps, and a nanoliter fluidic network. The portable MOA instrument integrates high voltage CE power supplies, pneumatic controls, and fluorescence detection optics necessary for field operation. The amino acid concentration sensitivities range from micromolar to 0.1 nM, corresponding to part-per-trillion sensitivity. The MOA was first used in the lab to analyze soil extracts from the Atacama Desert, Chile, detecting amino acids ranging from 10-600 parts per billion. Field tests of the MOA in the Panoche Valley, CA, successfully detected amino acids at 70 parts per trillion to 100 parts per billion in jarosite, a sulfate-rich mineral associated with liquid water that was recently detected on Mars. These results demonstrate the feasibility of using the MOA to perform sensitive in situ amino acid biomarker analysis on soil samples representative of a Mars-like environment.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomials kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
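A sketch of the meta-model construction, passing a convex mix of a polynomial kernel (global behavior) and a Gaussian RBF kernel (local behavior) to an off-the-shelf SVR; the paper's closed-form extraction of Sobol indices from the SVR coefficients is not reproduced here, and the weight, kernel parameters, and test function are placeholders:

```python
# Mixed-kernel surrogate: k = w * polynomial + (1 - w) * RBF.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=1.0):
    return w * polynomial_kernel(X, Y, degree=degree) \
         + (1.0 - w) * rbf_kernel(X, Y, gamma=gamma)

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2]   # test function

model = SVR(kernel=mixed_kernel).fit(X, y)   # callable kernel -> Gram matrix
print("training R^2:", model.score(X, y))
```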
Characterization of screen-printed dye-sensitized nanocrystalline TiO2 solar cells
NASA Astrophysics Data System (ADS)
Gupta, Tapan K.; Cirignano, Leonard J.; Shah, Kanai S.; Moy, Larry P.; Kelly, David J.; Squillante, Michael R.; Entine, Gerald; Smestad, Greg P.
1999-10-01
Titanium dioxide (TiO2) films have been deposited on SnO2 coated glass substrates by screen-printing. Film morphology and structure have been characterized by scanning electron microscopy, x-ray diffraction and BET analysis. Dye-sensitized TiO2 photoelectrochemical cells have been assembled and characterized. Cells sensitized with anthocyanin and a ruthenium complex have been investigated. A 0.77 cm2 ruthenium dye sensitized cell with 6.1% power conversion efficiency under Air Mass (AM1.5) conditions was obtained. Results obtained with a pure anthocyanin dye and dye extracted from blackberries were compared. Finally, a natural gel was found to improve the stability of anthocyanin sensitized cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Bhat, Kabekode Ghanasham
2017-07-18
We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
Davis, Tyler; LaRocque, Karen F.; Mumford, Jeanette; Norman, Kenneth A.; Wagner, Anthony D.; Poldrack, Russell A.
2014-01-01
Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930
An approach to measure parameter sensitivity in watershed ...
Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. The high and medium flows are impacted by most of the parameters, while the low flow regime is highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate and land use changes on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
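A sketch of the sensitivity index as we read it: rerun the model with one parameter perturbed and compute the NRMSE between the baseline and perturbed flow series. Normalizing by the baseline range is one common convention; the paper's exact normalization may differ:

```python
# NRMSE between baseline and perturbed flow series (placeholder data).
import numpy as np

def nrmse(q_base, q_pert):
    rmse = np.sqrt(np.mean((q_pert - q_base) ** 2))
    return rmse / (q_base.max() - q_base.min())   # range normalization

q_base = np.array([5.0, 8.0, 20.0, 3.0, 2.5, 15.0])   # baseline flows (fake)
q_pert = q_base * 1.08                                # +8% response (fake)
print(f"NRMSE = {nrmse(q_base, q_pert):.3f}")
```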
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
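For context, the classical results this work builds on can be stated compactly. For a distinct eigenvalue of the generalized problem \(K\phi = \lambda M\phi\) (stiffness K, mass M) with a mass-normalized mode \(\phi\), the derivative with respect to a design parameter p is

\[ \frac{\partial \lambda}{\partial p} = \phi^{T}\!\left(\frac{\partial K}{\partial p} - \lambda \frac{\partial M}{\partial p}\right)\!\phi , \]

while for an eigenvalue repeated m times with modes \(\Phi = [\phi_1 \cdots \phi_m]\), the first derivatives are the eigenvalues \(\lambda'\) of the m-by-m subproblem

\[ \left[\Phi^{T}\!\left(\frac{\partial K}{\partial p} - \lambda \frac{\partial M}{\partial p}\right)\!\Phi\right] c = \lambda' c , \]

whose eigenvectors c also select the differentiable mode combinations \(\Phi c\). This is our summary of the standard repeated-eigenvalue result; the paper's single-parameter reparameterization and its treatment of repeated eigenvalue derivatives go beyond it.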
NASA Astrophysics Data System (ADS)
Hrubesova, E.; Lahuta, H.; Mohyla, M.; Quang, T. B.; Phi, N. D.
2018-04-01
The paper focuses on a sensitivity analysis of the behaviour of the subsoil-foundation system with respect to varying properties of a fibre-concrete slab, which result in different relative stiffnesses of the cooperating system. The slab and its properties strongly influence how the external load is transferred, but the character of the subsoil cannot be neglected either, because it determines the stress-strain behaviour of the whole system and consequently the bearing capacity of the structure. The sensitivity analysis was carried out on experimental results, which include both the stress values in the soil below the foundation structure and the settlements of slabs characterized by different quantities of fibres. Flat GEOKON dynamometers were used for the stress measurements below the observed slab, the strains inside the slab were registered by tensometers, and the settlements were monitored geodetically. The paper focuses on the comparison of soil stresses below the slab for different quantities of fibres in the structure. The results obtained from the experimental stand can contribute to more objective knowledge of soil-slab interaction, to the evaluation of the real carrying capacity of the slab, to the calibration of corresponding numerical models, to the optimization of the quantity of fibres in the slab, and finally to a safer and more economical slab design.
Iguchi, Hiroyoshi; Wada, Tadashi; Matsushita, Naoki; Oishi, Masahiro; Teranishi, Yuichi; Yamane, Hideo
2014-07-01
The accuracy and sensitivity of fine-needle aspiration cytology (FNAC) in this analysis were not satisfactory, and the false-negative rate seemed to be higher than for parotid tumours. The possibility of low-grade malignancy should be considered in the surgical treatment of accessory parotid gland (APG) tumours, even if the preoperative results of FNAC suggest that the tumour is benign. Little is known about the usefulness of FNAC in the preoperative evaluation of APG tumours, probably due to the paucity of APG tumour cases. We examined the usefulness of FNAC in the detection of malignant APG tumours. We conducted a retrospective analysis of 3 cases from our hospital, along with 18 previously reported Japanese cases. We compared the preoperative FNAC results with postoperative histopathological diagnoses of APG tumours and evaluated the accuracy, sensitivity, specificity and false-negative rates of FNAC in detecting malignant APG tumours. There were four false-negative cases (19.0%), three of mucoepidermoid carcinomas and one of malignant lymphoma. One false-positive result was noted in the case of a myoepithelioma, which was cytologically diagnosed as suspected adenoid cystic carcinoma. The accuracy, sensitivity and specificity of FNAC in detecting malignant tumours were 76.2%, 60.0% and 90.9%, respectively.
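The reported rates are mutually consistent: for the 21 cases they correspond to an inferred (not explicitly stated) 2x2 table with TP = 6, FN = 4, FP = 1, TN = 10, since

\[ \mathrm{Sens} = \tfrac{6}{10} = 60.0\%, \quad \mathrm{Spec} = \tfrac{10}{11} = 90.9\%, \quad \mathrm{Acc} = \tfrac{16}{21} = 76.2\%, \quad \tfrac{4}{21} = 19.0\%\ \text{false negatives}. \]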
Nakabayashi, Ryo; Tsugawa, Hiroshi; Kitajima, Mariko; Takayama, Hiromitsu; Saito, Kazuki
2015-01-01
In metabolomics, the analysis of product ions in tandem mass spectrometry (MS/MS) is noteworthy for chemically assigning structural information. However, the development of relevant analytical methods is less advanced. Here, we developed a method to boost sensitivity in liquid chromatography–Fourier transform ion cyclotron resonance–tandem mass spectrometry analysis (MS/MS boost analysis). To verify the MS/MS boost analysis, both quercetin and uniformly labeled 13C quercetin were analyzed, revealing that the product ions originate not from the instrument but from the analyzed compounds, resulting in sensitive product ions. Next, we applied this method to the analysis of monoterpene indole alkaloids (MIAs). The comparative analyses of MIAs having an indole basic skeleton (ajmalicine, catharanthine, hirsuteine, and hirsutine) and an oxindole skeleton (formosanine, isoformosanine, pteropodine, isopteropodine, rhynchophylline, isorhynchophylline, and mitraphylline) identified 86 and 73 common monoisotopic ions, respectively. The comparative analyses of the three pairs of stereoisomers showed more than 170 common monoisotopic ions in each pair. This method was also applied to the targeted analysis of MIAs in Catharanthus roseus and Uncaria rhynchophylla to profile indole and oxindole compounds using the product ions. This analysis is suitable for chemically assigning features of the metabolite groups, which contributes to targeted metabolome analysis. PMID:26734034
2010-01-01
Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Whelen, A Christian; Bankowski, Matthew J; Furuya, Glenn; Honda, Stacey; Ueki, Robert; Chan, Amelia; Higa, Karen; Kumashiro, Diane; Moore, Nathaniel; Lee, Roland; Koyamatsu, Terrie; Effler, Paul V
2010-01-01
We integrated multicenter, real-time (RTi) reverse transcription polymerase chain reaction (RT-PCR) screening into a statewide laboratory algorithm for influenza surveillance and response. Each of three sites developed its own testing strategy and was challenged with one randomized and blinded panel of 50 specimens previously tested for respiratory viruses. Following testing, each participating laboratory reported its results to the Hawaii State Department of Health, State Laboratories Division for evaluation and possible discrepant analysis. Two of three laboratories reported a 100% sensitivity and specificity, resulting in a 100% positive predictive value and a 100% negative predictive value (NPV) for influenza type A. The third laboratory showed a 71% sensitivity for influenza type A (83% NPV) with 100% specificity. All three laboratories were 100% sensitive and specific for the detection of influenza type B. Discrepant analysis indicated that the lack of sensitivity experienced by the third laboratory may have been due to the analyte-specific reagent probe used by that laboratory. Use of a newer version of the product with a secondary panel of 20 specimens resulted in a sensitivity and specificity of 100%. All three laboratories successfully verified their ability to conduct clinical testing for influenza using diverse nucleic acid extraction and RTi RT-PCR platforms. Successful completion of the verification by all collaborating laboratories paved the way for the integration of those facilities into a statewide laboratory algorithm for influenza surveillance and response.
Analysis of the NAEG model of transuranic radionuclide transport and dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercher, J.R.; Anspaugh, L.R.
We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract, which receives 95% of its dose via ingestion. The air pathway accounts for 100% of the dose to the lung, upper respiratory tract, and thoracic lymph nodes. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ; peeled vegetables, 20%; accidental soil ingestion, 5%; ingestion of beef liver, 4%; and beef muscle, 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes.
Moghtaderi, Mozhgan; Hosseini Teshnizi, Saeed; Farjadian, Shirin
2017-01-01
Various allergens are implicated in the pathogenesis of allergic diseases in different regions. This study attempted to identify the most common allergens among patients with allergies based on the results of skin prick tests in different parts of Iran. Relevant studies conducted from 2000 to 2016 were identified from the MEDLINE database. Six common groups of allergen types, including animal, cockroach, food, fungus, house dust mite, and pollen were considered. Subgroup analysis was performed to determine the prevalence of each type of allergen. The Egger test was used to assess publication bias. We included 44 studies in this meta-analysis. The overall prevalence of positive skin test results for at least one allergen was estimated to be 59% in patients with allergies in various parts of Iran. The number of patients was 11,646 (56% male and 44% female), with a mean age of 17.46±11.12 years. The most common allergen sources were pollen (47.0%), mites (35.2%), and food (15.3%). The prevalence of sensitization to food and cockroach allergens among children was greater than among adults. Pollen is the most common allergen sensitization in cities of Iran with a warm and dry climate; however, sensitization to house dust mites is predominant in northern and southern coastal areas of Iran.
Feng, Sheng; Shi, Jun; Parrott, Neil; Hu, Pei; Weber, Cornelia; Martin-Facklam, Meret; Saito, Tomohisa; Peck, Richard
2016-07-01
We propose a strategy for studying ethnopharmacology by conducting sequential physiologically based pharmacokinetic (PBPK) prediction (a 'bottom-up' approach) and population pharmacokinetic (popPK) confirmation (a 'top-down' approach), or in reverse order, depending on whether the purpose is ethnic effect assessment for a new molecular entity under development or a tool for ethnic sensitivity prediction for a given pathway. The strategy is exemplified with bitopertin. A PBPK model was built using Simcyp(®) to simulate the pharmacokinetics of bitopertin and to predict the ethnic sensitivity in clearance, given pharmacokinetic data in just one ethnicity. Subsequently, a popPK model was built using NONMEM(®) to assess the effect of ethnicity on clearance, using human data from multiple ethnic groups. A comparison was made to confirm the PBPK-based ethnic sensitivity prediction, using the results of the popPK analysis. PBPK modelling predicted that the bitopertin geometric mean clearance values after 20 mg oral administration in Caucasians would be 1.32-fold and 1.27-fold higher than the values in Chinese and Japanese, respectively. The ratios of typical clearance in Caucasians to the values in Chinese and Japanese estimated by popPK analysis were 1.20 and 1.17, respectively. The popPK analysis results were similar to the PBPK modelling results. As a general framework, we propose that PBPK modelling should be considered to predict ethnic sensitivity of pharmacokinetics prior to any human data and/or with data in only one ethnicity. In some cases, this will be sufficient to guide initial dose selection in different ethnicities. After clinical trials in different ethnicities, popPK analysis can be used to confirm ethnic differences and to support dose justification and labelling. PBPK modelling prediction and popPK analysis confirmation can complement each other to assess ethnic differences in pharmacokinetics at different drug development stages.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing with global climate change. Many studies have investigated the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command of the software, correct input parameters, and an optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must be carried out to select a suitable cell count for the final mesh, ensuring accurate results with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profiles and compared between the models. The results showed that the medium grid scheme can be used and produces results as accurate as the finer grid schemes, since the differences in CP values were insignificant.
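As a quick illustration of the grid-independence check described above, the sketch below compares hypothetical CP profiles sampled at the same stations for three grid schemes; the values and the 5% tolerance are invented for illustration.

```python
import numpy as np

# Hypothetical pressure-coefficient profiles sampled at the same stations
# along the wall/roof for three grid schemes (coarse -> medium -> fine).
cp_coarse = np.array([0.78, 0.65, -0.42, -0.95, -0.60])
cp_medium = np.array([0.80, 0.66, -0.44, -0.98, -0.62])
cp_fine   = np.array([0.80, 0.67, -0.44, -0.99, -0.62])

def max_rel_diff(a, b):
    """Maximum relative difference between two Cp profiles."""
    return np.max(np.abs(a - b) / np.maximum(np.abs(b), 1e-12))

# If medium vs. fine differ by less than a chosen tolerance (say 5%),
# the medium grid is judged sufficient for the production runs.
print(f"coarse vs medium: {max_rel_diff(cp_coarse, cp_medium):.3f}")
print(f"medium vs fine:   {max_rel_diff(cp_medium, cp_fine):.3f}")
```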
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
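The reliability calculation described above can be sketched in a few lines: a Monte Carlo estimate of the buckling failure probability driven by a stand-in second-order response surface. The distributions and coefficients below are illustrative placeholders, not the report's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical random variables (normalized units): applied axial load P and
# fiber-direction modulus E1, the two inputs the report found most influential.
P  = rng.normal(1.00, 0.10, n)    # load, coefficient of variation 10%
E1 = rng.normal(1.00, 0.05, n)    # modulus, coefficient of variation 5%

# Stand-in second-order response surface for buckling strength; the
# coefficients are illustrative, not fitted to any real data.
strength = 0.4 + 0.8 * E1 + 0.1 * E1**2

g = strength - P                  # limit state: failure when g < 0
pf = np.mean(g < 0.0)
beta = np.mean(g) / np.std(g)     # crude reliability-index estimate
print(f"P(failure) ~ {pf:.2e}, beta ~ {beta:.2f}")
```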
Reinforcement Sensitivity and Social Anxiety in Combat Veterans
Kimbrel, Nathan A.; Meyer, Eric C.; DeBeer, Bryann B.; Mitchell, John T.; Kimbrel, Azure D.; Nelson-Gray, Rosemery O.; Morissette, Sandra B.
2017-01-01
Objective The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Method Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. Results As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. Conclusion The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity. PMID:28966424
El-Osta, Hazem; Jani, Pushan; Mansour, Ali; Rascoe, Philip; Jafri, Syed
2018-04-23
An accurate assessment of the mediastinal lymph node status is essential in the staging and treatment planning of potentially resectable non-small cell lung cancer (NSCLC). We performed this meta-analysis to evaluate the role of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) in detecting occult mediastinal disease in NSCLC with no radiologic mediastinal involvement. The PubMed, Embase, and Cochrane libraries were searched for studies describing the role of EBUS-TBNA in lung cancer patients with radiologically negative mediastinum. The individual and pooled sensitivity, prevalence, negative predictive value (NPV), and diagnostic odds ratio (DOR) were calculated using the random effects model. Metaregression analysis, heterogeneity, and publication bias were also assessed. A total of 13 studies that met the inclusion criteria were included in the meta-analysis. The pooled effect sizes of the different diagnostic parameters were estimated as follows: prevalence, 12.8% (95% confidence interval [CI], 10.4%-15.7%); sensitivity, 49.5% (95% CI, 36.4%-62.6%); NPV, 93.0% (95% CI, 90.3%-95.0%); and log DOR, 5.069 (95% CI, 4.212-5.925). Significant heterogeneity was noticeable for the sensitivity, disease prevalence, and NPV, but was not observed for log DOR. Publication bias was detected for sensitivity, NPV, and log DOR, but not for prevalence. Bivariate meta-regression analysis showed no significant association between the pooled parameters and the type of anesthesia, the imaging used to define a negative mediastinum, rapid on-site test usage, or the presence of bias by the QUADAS-2 tool. Interestingly, we observed greater sensitivity, NPV, and log DOR for studies published prior to 2010 and for prospective multicenter studies. Among NSCLC patients with a radiologically normal mediastinum, the prevalence of mediastinal disease is 12.8% and the sensitivity of EBUS-TBNA is 49.5%. Despite the low sensitivity, the resulting NPV of 93.0% for EBUS-TBNA suggests that mediastinal metastasis is uncommon in such patients.
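For orientation, here is a minimal random-effects pooling sketch on the logit scale (DerSimonian-Laird), with fabricated per-study counts; the paper's actual bivariate random-effects model is more sophisticated than this simplification.

```python
import numpy as np

# Hypothetical per-study 2x2 counts for a diagnostic test:
# tp = true positives, fn = false negatives (disease-positive patients).
tp = np.array([12, 30, 8, 20])
fn = np.array([10, 25, 12, 18])

# Logit-transformed study sensitivities with within-study variances
# (0.5 continuity correction), pooled with DerSimonian-Laird weights.
p = (tp + 0.5) / (tp + fn + 1.0)
y = np.log(p / (1 - p))
v = 1 / (tp + 0.5) + 1 / (fn + 0.5)

w = 1 / v
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                      # random-effects weights
pooled_logit = np.sum(w_re * y) / np.sum(w_re)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity ~ {pooled_sens:.3f}, tau^2 ~ {tau2:.3f}")
```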
Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J
2015-06-15
Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
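A minimal sketch of the proposed prediction-interval step, under the usual normal random-effects assumptions; the summary numbers (mu, se, tau2, k) are hypothetical, not taken from the paper's examples.

```python
import numpy as np
from scipy import stats

# Hypothetical random-effects meta-analysis summary on the logit scale:
mu, se, tau2, k = 1.40, 0.15, 0.20, 12   # mean, its SE, heterogeneity, #studies

# Approximate 95% prediction interval for the logit-sensitivity in a new
# population (Higgins-style, t distribution with k-2 degrees of freedom).
t = stats.t.ppf(0.975, df=k - 2)
half = t * np.sqrt(tau2 + se**2)
lo, hi = mu - half, mu + half

expit = lambda x: 1 / (1 + np.exp(-x))
print(f"95% prediction interval for sensitivity: "
      f"({expit(lo):.2f}, {expit(hi):.2f})")

# Probability that the true sensitivity exceeds 80% in a new population,
# treating the new-population effect as Normal(mu, tau2 + se^2) (a rough sketch).
z = (np.log(0.8 / 0.2) - mu) / np.sqrt(tau2 + se**2)
print(f"P(sensitivity > 0.80) ~ {1 - stats.norm.cdf(z):.2f}")
```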
Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H
2017-12-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for fixing some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
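A sketch of how an Extended-FAST screening of this kind is set up, assuming the Python SALib package; the factor names, bounds, and the toy stand-in for the SS-WWTP-RWB response are invented.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Hypothetical stand-in for the integrated SS-WWTP-RWB model: three factors
# mapped to a peak in-river SMX concentration (illustrative function only).
problem = {
    "num_vars": 3,
    "names": ["sewer_washoff", "wwtp_sorption", "river_decay"],
    "bounds": [[0.1, 1.0], [0.01, 0.5], [0.001, 0.1]],
}

X = fast_sampler.sample(problem, 1000)                 # FAST design
Y = X[:, 0] * (1 - X[:, 1]) * np.exp(-X[:, 2] * 24.0)  # toy response

Si = fast.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total {st:.2f}")
```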
NASA Astrophysics Data System (ADS)
Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.
2012-12-01
Air quality studies based on chemical transport models have provided many important results that advance our knowledge of air pollution phenomena; however, discrepancies between modeling results and observation data remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several reasons for this issue can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of the anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reaction constants by combining DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. DDM is an efficient and accurate way of performing sensitivity analysis to model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the reaction constant producing HNO3 (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, and indicated a +29.0% increment to this reaction. This estimate is in good agreement with the CB4, CB5 and SAPRC07 values. For the NO2 photolysis rate, a 49.4% reduction was found. This result indicates that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.
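The inversion step can be illustrated compactly: assuming the response matrix G comes from DDM sensitivity runs, the fractional adjustments to the uncertain reaction constants follow from linear least squares. All numbers below are placeholders, not the study's values.

```python
import numpy as np

# Sketch of the Green's-function inversion: G[i, j] = d(ozone at site i) /
# d(fractional scaling of reaction j), as delivered by DDM sensitivity runs.
G = np.array([[8.0, -3.0],
              [6.5, -2.0],
              [7.2, -2.5]])          # ppb per unit fractional change (toy)
d = np.array([2.4, 1.7, 2.1])        # observed-minus-modeled ozone (ppb, toy)

# Linear least squares for the fractional adjustments to the two reaction
# constants (e.g., HNO3 formation and NO2 photolysis in the abstract).
x, residuals, rank, sv = np.linalg.lstsq(G, d, rcond=None)
print("fractional adjustments:", np.round(x, 3))
```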
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the use of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I², was explored. The 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences could be found as the I² of sensitivity or specificity became larger, especially when I²>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance is, the more likely simple pooling has a larger deviation. It is necessary to increase the knowledge level of statistical methods and software for meta-analyses of DTA data.
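The "simple pooling" at issue can be shown in a few lines; the 2x2 counts below are fabricated. Summing cells across studies ignores between-study heterogeneity and threshold effects, which is why its results diverge from the bivariate model as I² grows.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: tp, fn, tn, fp.
tp = np.array([45, 30, 60]); fn = np.array([5, 20, 15])
tn = np.array([80, 70, 90]); fp = np.array([10, 15, 5])

# "Simple pooling" (as in most of the surveyed reviews): sum the cells
# across studies and compute one sensitivity/specificity. This treats all
# studies as one big study and ignores between-study variability.
sens_simple = tp.sum() / (tp.sum() + fn.sum())
spec_simple = tn.sum() / (tn.sum() + fp.sum())
print(f"simple pooled sensitivity {sens_simple:.3f}, specificity {spec_simple:.3f}")
```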
Logistic Map for Cancellable Biometrics
NASA Astrophysics Data System (ADS)
Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr
2017-08-01
This paper presents the design and implementation of a secured biometric template protection system that transforms the biometric template using binary chaotic signals and three different key streams to obtain another form of the template. Its efficiency is demonstrated by the results, and its security is investigated through analyses including key space analysis, information entropy and key sensitivity analysis.
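A minimal sketch of the underlying idea, assuming a simple XOR scheme rather than the paper's exact transform: a logistic-map keystream, keyed by (x0, r), scrambles a binary template, and issuing a new key re-issues (cancels) the template. Key sensitivity follows from the map's chaotic dependence on x0 and r.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Binary keystream from the logistic map x <- r*x*(1-x), thresholded
    at 0.5. (x0, r) acts as the secret key; a sketch, not the paper's
    exact scheme."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

template = np.random.randint(0, 2, 256).astype(np.uint8)  # toy binary template
ks = logistic_keystream(x0=0.3141, r=3.99, n=template.size)
protected = template ^ ks            # cancellable: re-issue with a new key
restored  = protected ^ ks
assert np.array_equal(restored, template)
print("protected template:", protected[:16], "...")
```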
Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism
Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F
2017-01-01
Objective The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Methods Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). Results The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). Conclusion In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. Trial registration number NCT00986154. PMID:28689179
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
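Of the global methods listed, the partial rank correlation coefficient is the easiest to sketch from scratch; the sampled parameters and model output below are invented for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each input column of X with
    output y, controlling for the other inputs (a common GSA measure)."""
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    yr = rankdata(y)
    out = []
    for j in range(Xr.shape[1]):
        others = np.delete(Xr, j, axis=1)
        A = np.column_stack([np.ones(len(yr)), others])
        # Residuals of x_j and y after regressing out the other inputs.
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))                          # toy sampled parameters
y = 2 * X[:, 0] - X[:, 1]**2 + 0.1 * rng.normal(size=500)  # toy model output
print("PRCC:", np.round(prcc(X, y), 2))
```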
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 to October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥ 10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges of correct classification per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514
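The bounding idea generalizes beyond this study; here is a minimal sketch with fabricated predictions: count the missing-affected cases as all correctly classified for the upper bound and all misclassified for the lower bound.

```python
import numpy as np

# Sketch of a missing-data sensitivity analysis for a binary classifier:
# bound the correct-classification rate by assuming the cases with missing
# predictors are all classified right (upper) or all wrong (lower).
pred_known  = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # predictions, complete cases
truth_known = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # outcomes, complete cases
n_missing = 4                                      # cases undecidable due to missingness

correct_known = np.sum(pred_known == truth_known)
n_total = len(truth_known) + n_missing

upper = (correct_known + n_missing) / n_total      # every missing case right
lower = correct_known / n_total                    # every missing case wrong
print(f"correct classification between {lower:.0%} and {upper:.0%}")
```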
Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W
2009-01-01
Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e. resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice between several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities including the risk of death from complications or disease progression associated with individual therapy options were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this assumption is valid on condition that the mortality associated with the hepatectomy first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation based outcome.
ERIC Educational Resources Information Center
Kleppinger, E. W.; And Others
1984-01-01
Although determination of phosphorus is important in biology, physiology, and environmental science, traditional gravimetric and colorimetric methods are cumbersome and lack the requisite sensitivity. Therefore, a derivative activation analysis method is suggested. Background information, procedures, and results are provided. (JN)
Separation and Analysis of Citral Isomers.
ERIC Educational Resources Information Center
Sacks, Jeff; And Others
1983-01-01
Provides background information, procedures, and results of an experiment designed to introduce undergraduates to the technique of steam distillation as a means of isolating thermally sensitive compounds. Chromatographic techniques (HPLC) and mass spectrometric analysis are used in the experiment, which requires three laboratory periods. (JN)
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
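A minimal sketch of the metamodeling recipe, with a toy stand-in for the cancer model: sample standardized inputs, simulate, regress, and read the intercept as the base-case outcome and the slopes as relative parameter influence. All coefficients below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Toy probabilistic sensitivity analysis: standardized inputs Z drive a
# simple nonlinear function standing in for the decision model's output.
Z = rng.normal(size=(n, 3))                       # standardized parameters
net_benefit = (5.0 + 1.2 * Z[:, 0] - 0.4 * Z[:, 1] + 0.05 * Z[:, 2]**2
               + 0.1 * rng.normal(size=n))

# Linear regression metamodel: outcome on standardized inputs.
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, net_benefit, rcond=None)

# Intercept ~ base-case outcome; slope magnitudes rank parameter influence,
# which is exactly the use the paper proposes for metamodeling.
print(f"base case ~ {coef[0]:.2f}; influences: {np.round(coef[1:], 2)}")
```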
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
First status report on regional ground-water flow modeling for the Paradox Basin, Utah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, R.W.
1984-05-01
Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study is summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined-scale model which incorporated only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.
Kondo, Takashi; Kobayashi, Daisuke; Mochizuki, Maki; Asanuma, Kouichi; Takahashi, Satoshi
2017-01-01
Background Recently developed reagents for the highly sensitive measurement of cardiac troponin I are useful for early diagnosis of acute coronary syndrome. However, differences in measured values between these new reagents and previously used reagents have not been well studied. In this study, we aimed to compare the values between ARCHITECT High-Sensitive Troponin I ST (newly developed reagents), ARCHITECT Troponin I ST and STACIA CLEIA cardiac troponin I (two previously developed reagent kits). Methods Gel filtration high-performance liquid chromatography was used to analyse the causes of differences in measured values. Results The measured values differed between ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cardiac troponin I reagents (r = 0.82). Cross-reactivity tests using plasma with added skeletal-muscle troponin I resulted in higher reactivity (2.17-3.03%) for the STACIA CLEIA cardiac troponin I reagents compared with that for the ARCHITECT High-Sensitive Troponin I ST reagents (less than 0.014%). In addition, analysis of three representative samples using gel filtration high-performance liquid chromatography revealed reagent-specific differences in the reactivity against each cardiac troponin I complex; this could explain the differences in values observed for some of the samples. Conclusion The newly developed ARCHITECT High-Sensitive Troponin I ST reagents were not affected by the presence of skeletal-muscle troponin I in the blood and may be useful for routine examinations.
Liu, Sha; Zhang, Fuquan; Wang, Xijin; Shugart, Yin Yao; Zhao, Yingying; Li, Xinrong; Liu, Zhifen; Sun, Ning; Yang, Chunxia; Zhang, Kerang; Yue, Weihua; Yu, Xin; Xu, Yong
2017-11-10
There is an increasing interest in searching biomarkers for schizophrenia (SZ) diagnosis, which overcomes the drawbacks inherent with the subjective diagnostic methods. MicroRNA (miRNA) fingerprints have been explored for disease diagnosis. We performed a meta-analysis to examine miRNA diagnostic value for SZ and further validated the meta-analysis results. Using following terms: schizophrenia/SZ, microRNA/miRNA, diagnosis, sensitivity and specificity, we searched databases restricted to English language and reviewed all articles published from January 1990 to October 2016. All extracted data were statistically analyzed and the results were further validated with peripheral blood mononuclear cells (PBMNCs) isolated from patients and healthy controls using RT-qPCR and receiver operating characteristic (ROC) analysis. A total of 6 studies involving 330 patients and 202 healthy controls were included for meta-analysis. The pooled sensitivity, specificity and diagnostic odds ratio were 0.81 (95% CI: 0.75-0.86), 0.81 (95% CI: 0.72-0.88) and 18 (95% CI: 9-34), respectively; the positive and negative likelihood ratio was 4.3 and 0.24 respectively; the area under the curve in summary ROC was 0.87 (95% CI: 0.84-0.90). Validation revealed that miR-181b-5p, miR-21-5p, miR-195-5p, miR-137, miR-346 and miR-34a-5p in PBMNCs had high diagnostic sensitivity and specificity in the context of schizophrenia. In conclusion, blood-derived miRNAs might be promising biomarkers for SZ diagnosis.
[A cost-effectiveness analysis on universal infant rotavirus vaccination strategy in China].
Sun, S L; Gao, Y Q; Yin, J; Zhuang, G H
2016-02-01
To evaluate the cost-effectiveness of the current universal infant rotavirus vaccination strategy in China. By constructing a decision tree-Markov model, we simulated the rotavirus diarrhea associated costs and health outcomes for newborns in 2012 under different vaccination programs: a no-vaccination group, a Rotarix vaccination group and a RotaTeq vaccination group. We determined the optimal program based on the comparison between the incremental cost-effectiveness ratio (ICER) and China's 2012 per capita gross domestic product (GDP). Compared with the non-vaccination group, the Rotarix vaccination and RotaTeq vaccination groups had to pay 3,760 Yuan and 7,578 Yuan (both less than 2012 GDP per capita) to avert one disability adjusted life year (DALY) lost, respectively. Results from sensitivity analysis indicated that both results were robust. Compared with the Rotarix vaccination program, the RotaTeq vaccination program had to pay an extra 81,068 Yuan (between 1 and 3 times GDP per capita) to avert one DALY lost. Data from the sensitivity analysis indicated that this result was not robust. From the perspective of health economics, both the two-dose Rotarix vaccine and the three-dose RotaTeq vaccine programs were highly cost-effective when compared to the non-vaccination program. It would be appropriate to integrate rotavirus vaccine into the routine immunization program. Considering the large extra cost of the RotaTeq vaccination program, the lack of robustness of this result in the sensitivity analysis, and the fact that the RotaTeq vaccine requires one more dose than the Rotarix vaccine and thus appears more difficult to implement, it is better to choose the Rotarix vaccine at the current stage.
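The decision rule used here (comparing ICERs against 1x and 3x GDP per capita per DALY averted) is easy to sketch; all costs and effects below are fabricated, not the study's figures.

```python
# Minimal ICER sketch with made-up costs/effects; the thresholds follow the
# WHO-style rule used in the abstract (1x and 3x GDP per capita per DALY).
GDP_PER_CAPITA = 38_420          # hypothetical 2012 CNY value

strategies = {                   # (total cost CNY, DALYs lost) per child, toy numbers
    "no vaccination": (120.0, 0.0100),
    "Rotarix":        (350.0, 0.0039),
    "RotaTeq":        (600.0, 0.0032),
}

base_cost, base_daly = strategies["no vaccination"]
for name in ("Rotarix", "RotaTeq"):
    cost, daly = strategies[name]
    icer = (cost - base_cost) / (base_daly - daly)   # CNY per DALY averted
    if icer < GDP_PER_CAPITA:
        verdict = "highly cost-effective"
    elif icer < 3 * GDP_PER_CAPITA:
        verdict = "cost-effective"
    else:
        verdict = "not cost-effective"
    print(f"{name}: ICER ~ {icer:,.0f} CNY/DALY -> {verdict}")
```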
Gough, H; Luke, G A; Beeley, J A; Geddes, D A
1996-02-01
The aim of this project was to develop an analytical procedure with the required level of sensitivity for the determination of glucose concentrations in small volumes of unstimulated fasting whole saliva. The technique involves high-performance ion-exchange chromatography at high pH and pulsed amperometric detection. It has a high level of reproducibility, a sensitivity as low as 0.1 μmol/l and requires only 50 μl samples (sensitivity = 0.002 pmol). Inhibition of glucose metabolism, by procedures such as collection into 0.1% (w/v) sodium fluoride, was shown to be essential if accurate results are to be obtained. Collection on to ice followed by storage at -20 °C was shown to be unsuitable and resulted in glucose loss by degradation. There were inter- and intraindividual variations in the glucose concentration in unstimulated mixed saliva (range: 0.02-0.4 mmol/l). The procedure can be used for the analysis of other salivary carbohydrates and for monitoring the clearance of dietary carbohydrates from the mouth.
A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma
NASA Technical Reports Server (NTRS)
Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.
2004-01-01
A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
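A Monte Carlo sensitivity screening of this flavor can be sketched with a toy steady-state balance; the rate coefficients, perturbation ranges, and the source/loss balance below are invented placeholders, not the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Perturb uncertain rate coefficients with log-uniform factors, evaluate a
# toy steady-state CF2 density, and rank inputs by correlation with the output.
k_diss = 1e-9 * np.exp(rng.uniform(-1, 1, n))    # e-impact dissociation of CF2
k_wall = 0.05 * np.exp(rng.uniform(-1, 1, n))    # CF2 surface recombination prob.
source = 1e15 * np.exp(rng.uniform(-0.3, 0.3, n))

n_cf2 = source / (k_diss * 1e10 + k_wall * 1e3)  # toy balance: source / loss

for name, k in [("e-impact dissociation", k_diss),
                ("surface recombination", k_wall),
                ("source strength", source)]:
    r = np.corrcoef(np.log(k), np.log(n_cf2))[0, 1]
    print(f"{name}: output correlation {r:+.2f}")
```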
NASA Technical Reports Server (NTRS)
Butler, C. F.
1979-01-01
A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. Aerosol dispersion processes that were included are: transport, sedimentation, gas-phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
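A sketch of the Morris screening step described above, assuming the Python SALib package; the parameter names, bounds, and the stand-in QALY response are invented.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Morris screening in the spirit of the paper's first step: flag which
# uncertain parameters matter before spending effort calibrating them.
problem = {
    "num_vars": 3,
    "names": ["stroke_incidence", "htn_control_rate", "case_fatality"],
    "bounds": [[0.002, 0.01], [0.3, 0.8], [0.05, 0.25]],
}

X = morris_sample.sample(problem, N=100, num_levels=4)
# Toy stand-in for the simulation model's QALY output.
Y = 1000 * X[:, 0] * (1 - 0.5 * X[:, 1]) + 200 * X[:, 2]

Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name}: mu* = {mu_star:.2f}")   # larger mu* -> more influential
```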
Burmeister, T; Maurer, J; Aivado, M; Elmaagacli, A H; Grünebach, F; Held, K R; Hess, G; Hochhaus, A; Höppner, W; Lentes, K U; Lübbert, M; Schäfer, K L; Schafhausen, P; Schmidt, C A; Schüler, F; Seeger, K; Seelig, R; Thiede, C; Viehmann, S; Weber, C; Wilhelm, S; Christmann, A; Clement, J H; Ebener, U; Enczmann, J; Leo, R; Schleuning, M; Schoch, R; Thiel, E
2000-10-01
Here we describe the results of an interlaboratory test for RT-PCR-based BCR/ABL analysis. The test was organized in two parts. The number of participating laboratories in the first and second parts was 27 and 20, respectively. In the first part, samples containing various concentrations of plasmids with the e1a2, b2a2 or b3a2 BCR/ABL transcripts were analyzed by PCR. In the second part of the test, cell samples containing various concentrations of BCR/ABL-positive cells were analyzed by RT-PCR. Overall PCR sensitivity was sufficient in approximately 90% of the tests, but a significant number of false positive results were obtained. There were significant differences in sensitivity in the cell-based analysis between the various participants. The results are discussed, and proposals are made regarding the choice of primers, controls, and conditions for RNA extraction and reverse transcription.
Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-09-20
These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J
2015-06-01
The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield pooled sensitivity and specificity, which were respectively 0.83 and 0.82. In the detection of central and peripheral tears, magnetic resonance imaging had respectively a pooled sensitivity of 0.90 and 0.88 and a pooled specificity of 0.97 and 0.97. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Gambacorta, A.; Barnet, C.; Sun, F.; Goldberg, M.
2009-12-01
We investigate the water vapor component of the greenhouse effect in the tropical region using data from the Atmospheric InfraRed Sounder (AIRS). Differently from previous studies, which have relied on the assumption of a constant lapse rate and performed coarse-layer or total-column sensitivity analyses, we exploit AIRS's high vertical resolution to measure the sensitivity of the greenhouse effect to water vapor along the vertical column. We employ a "partial radiative perturbation" methodology and discriminate between two different dynamic regimes, convective and non-convective. This analysis provides useful insights into the occurrence and strength of the water vapor greenhouse effect and its sensitivity to spatial variations of surface temperature. By comparison with the clear-sky computations conducted in previous works, we attempt to constrain an estimate of the cloud contribution to the greenhouse effect. Our results compare well with the current literature, falling in the upper range of the existing global circulation model estimates. We value the results of this analysis as a useful reference to help discriminate among model simulations and improve our capability to make predictions about the future of our climate.
van Delft, Ivanka; Finkenauer, Catrin; Tybur, Joshua M; Lamers-Winkelman, Francien
2016-06-01
Nonoffending mothers of sexually abused children often exhibit high levels of posttraumatic stress (PTS) symptoms. Emerging evidence suggests that trait-like individual differences in sensitivity to disgust play a role in the development of PTS symptoms. One such individual difference, disgust sensitivity, has not been examined as far as we are aware among victims of secondary traumatic stress. The current study examined associations between disgust sensitivity and PTS symptoms among mothers of sexually abused children (N = 72). Mothers completed the Impact of Event Scale-Revised and the Three Domain Disgust Scale (Tybur, Lieberman, & Griskevicius, 2009). More than one third of mothers scored above a suggested cutoff (mean score = 1.5) for high levels of PTS symptoms. Hierarchical linear regression analysis results indicated that sexual disgust sensitivity (β = .39, p = .002) was associated with PTS symptoms (R(2) = .18). An interaction analysis showed that sexual disgust sensitivity was associated with maternal PTS symptoms only when the perpetrator was not biologically related to the child (β = -.32, p = .047; R(2) = .28). Our findings suggested that sexual disgust sensitivity may be a risk factor for developing PTS symptoms among mothers of sexually abused children. Copyright © 2016 International Society for Traumatic Stress Studies.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among nonlinear parameters- Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices to mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters was both reliable as well as sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
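The nonlinear Poincaré descriptors reported above reduce to two standard deviations of the lag-1 R-R scatter plot; a minimal sketch with a synthetic R-R series (the real study derived these from task ECG).

```python
import numpy as np

def poincare_sd1_sd2(rri_ms):
    """Poincaré descriptors from successive R-R intervals (ms): SD1 captures
    short-term variability, SD2 longer-term variability."""
    x, y = rri_ms[:-1], rri_ms[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)   # spread across identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)   # spread along identity line
    return sd1, sd2

rng = np.random.default_rng(4)
# Synthetic R-R series (ms): slow drift plus beat-to-beat noise.
rri = 800 + rng.normal(0, 25, 300).cumsum() * 0.05 + rng.normal(0, 20, 300)
sd1, sd2 = poincare_sd1_sd2(rri)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
```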
Validation of a Brazilian version of the moral sensitivity questionnaire.
Dalla Nora, Carlise R; Zoboli, Elma Lcp; Vieira, Margarida M
2017-01-01
Moral sensitivity has been identified as a foundational component of ethical action. Diminished or absent moral sensitivity can result in deficient care. In this context, assessing moral sensitivity is imperative for designing interventions to facilitate ethical practice and ensure that nurses make appropriate decisions. The main purpose of this study was to validate a scale for examining the moral sensitivity of Brazilian nurses. A pre-existing scale, the Moral Sensitivity Questionnaire, which was developed by Lützén, was used after the deletion of three items. The reliability and validity of the scale were examined using Cronbach's alpha and factor analysis, respectively. Participants and research context: Overall, 316 nurses from Rio Grande do Sul, Brazil, participated in the study. Ethical considerations: This study was approved by the Ethics Committee of Research of the Nursing School of the University of São Paulo. The Moral Sensitivity Questionnaire contained 27 items that were distributed across four dimensions: interpersonal orientation, professional knowledge, moral conflict and moral meaning. The questionnaire accounted for 55.8% of the total variance, with Cronbach's alpha of 0.82. The mean score for moral sensitivity was 4.45 (out of 7). The results of this study were compared with studies from other countries to examine the structure and implications of the moral sensitivity of nurses in Brazil. The Moral Sensitivity Questionnaire is an appropriate tool for examining the moral sensitivity of Brazilian nurses.
Stability and sensitivity of ABR flow control protocols
NASA Astrophysics Data System (ADS)
Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong
1998-10-01
This tutorial paper surveys the important issues in stability and sensitivity analysis of ABR flow control in ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variations of available bandwidth with delayed feedback control, misbehaving components, and interactions between higher-layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be the fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful from comparisons of the optimization results with parametric studies.
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An
2016-03-01
A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed; it is fabricated by a femtosecond laser transversal-scanning inscription method followed by chemical etching. A rectangular cavity structure is formed at part of the fiber core and cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range of 1.3371-1.3407. The experimental results are consistent with the theoretical analysis.
A new sensitivity analysis for structural optimization of composite rotor blades
NASA Technical Reports Server (NTRS)
Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An
1993-01-01
This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability and response characteristics of a rotor blade in hover and forward flight. The formulation is denoted by the term semianalytical approach, because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semianalytical approach for calculating sensitivity derivatives, compared to a pure finite difference approach.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where full-likelihood methods cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism; we call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. We propose a novel approach in this paper to investigate the plausibility of each missing-data mechanism assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
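As an illustration of the core comparison step, the following minimal Python sketch scores candidate simulated datasets against observed data using K-nearest-neighbour distances. It is not the authors' implementation; the data, the discrepancy function, and all names are invented for illustration, and a smaller distance is read here as higher plausibility of the generating mechanism.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_discrepancy(observed, simulated, k=5):
    """Mean distance from each observed point to its k nearest simulated points."""
    nn = NearestNeighbors(n_neighbors=k).fit(simulated)
    dist, _ = nn.kneighbors(observed)
    return dist.mean()

rng = np.random.default_rng(0)
observed = rng.normal(0.0, 1.0, size=(200, 1))
# Two candidate MNAR mechanisms, summarised here as different simulated datasets
sim_a = rng.normal(0.0, 1.0, size=(200, 1))   # plausible mechanism
sim_b = rng.normal(1.5, 1.0, size=(200, 1))   # implausible mechanism

for name, sim in [("model A", sim_a), ("model B", sim_b)]:
    print(name, knn_discrepancy(observed, sim))  # smaller = more plausible
```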
Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon
2018-05-18
We studied which weather variables most affect consequence analysis for chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA), used in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using dummy-variable regression in the Statistical Package for the Social Sciences (SPSS). The analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was more sensitive to air temperature than to wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more influential in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, when using the ALOHA tool instead of the KORA tool in rural conditions, users should take care not to introduce differences in impact distance through input errors in the weather variables, the most sensitive of which is atmospheric stability.
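For readers unfamiliar with the dummy-regression step, the sketch below reproduces the idea in Python with statsmodels rather than SPSS. The data frame, variable names, and values are hypothetical; in a real analysis each row would be an OCA scenario run, and the categorical stability class enters the regression as dummy variables via C().

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scenario runs: one row per OCA simulation (values invented)
df = pd.DataFrame({
    "impact_distance": [410, 980, 1650, 700, 1320, 560, 890, 480, 1510],
    "temperature":     [10, 20, 30, 10, 20, 30, 10, 20, 30],
    "wind_speed":      [1.5, 3.0, 5.0, 5.0, 1.5, 3.0, 3.0, 5.0, 1.5],
    "stability":       ["D", "E", "F", "E", "F", "D", "F", "D", "E"],
})

# Atmospheric stability is categorical, so C() expands it into dummy variables
model = smf.ols("impact_distance ~ temperature + wind_speed + C(stability)",
                data=df).fit()
print(model.params)  # coefficient magnitudes indicate sensitivity per variable
```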
NASA Astrophysics Data System (ADS)
Králik, Juraj
2017-07-01
The paper presents a probabilistic and sensitivity analysis of the efficiency of the damping devices protecting a nuclear power plant cover against the drop of a TK C30 nuclear-fuel container. A three-dimensional finite element idealization of the nuclear power plant structure is used. A steel pipe damper system is proposed to dissipate the kinetic energy of the container's free fall. Experimental results on the behaviour of the basic shock-damper element under impact loads are presented. The Newmark integration method is used to solve the dynamic equations. The sensitivity and probabilistic analysis of the damping devices was carried out in the AntHILL and ANSYS software.
Diagnostic features of Alzheimer's disease extracted from PET sinograms
NASA Astrophysics Data System (ADS)
Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.
2002-01-01
Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task due to the poor signal-to-noise ratio, and as a consequence very few techniques can be applied successfully. We use a new global analysis technique known as Trace transform triple features, which can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD patients and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results when compared with the clinical diagnosis. Selecting the five best features produced an overall accuracy of 93% with a sensitivity of 94% and a specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al. (1992) using regional metabolic activity.
Receiver operating characteristic analysis of age-related changes in lineup performance.
Humphries, Joyce E; Flowe, Heather D
2015-04-01
In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults than for young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed.
An easily implemented static condensation method for structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented, with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method and forward-difference and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
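The forward- and central-difference schemes mentioned above are straightforward to sketch. The following Python snippet is illustrative only: response() is a hypothetical stand-in for a (condensed) finite element solve, not the structural model used in the paper.

```python
import numpy as np

def response(k_joint):
    """Placeholder structural response, e.g. a deflection u(k).
    In the paper this would come from a (condensed) FE solve."""
    return 1.0 / (k_joint + 2.0)

def forward_diff(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h          # one extra solve per parameter

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # two extra solves, higher accuracy

k = 10.0
exact = -1.0 / (k + 2.0) ** 2             # analytic derivative for this toy case
print(forward_diff(response, k), central_diff(response, k), exact)
```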
Factor structure and construct validity of the Anxiety Sensitivity Index among island Puerto Ricans.
Cintrón, Jennifer A; Carter, Michele M; Suchday, Sonia; Sbrocco, Tracy; Gray, James
2005-01-01
The factor structure and the convergent and discriminant validity of the Anxiety Sensitivity Index (ASI) were examined in a sample of 275 island Puerto Ricans. Results from a confirmatory factor analysis (CFA) comparing our data to factor solutions commonly reported as representative of European American and Spanish populations indicated a poor fit. A subsequent exploratory factor analysis (EFA) indicated that a two-factor solution (Factor 1, Anxiety Sensitivity; Factor 2, Emotional Concerns) provided the best fit. Correlations between the ASI and anxiety measures were moderately high, providing evidence of convergent validity, while correlations between the ASI and the BDI were significantly lower, providing evidence of discriminant validity. Scores on all measures were positively correlated with acculturation, suggesting that those who ascribe to more traditional Hispanic culture report elevated anxiety.
Meta-analysis for explaining the variance in public transport demand elasticities in Europe
DOT National Transportation Integrated Search
1998-01-01
Results from past studies on transport demand elasticities show a large variance. This paper assesses key factors that influence the sensitivity of public transport users to transport costs in Europe, by carrying out a comparative analysis of the dif...
MOVES regional level sensitivity analysis
DOT National Transportation Integrated Search
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...
Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold
2013-10-01
The chemistry of porphyrinoids depends greatly on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques (electrospray ionization, atmospheric pressure chemical ionization, and atmospheric pressure photoionization) was tested for several porphyrinoids and their metallocomplexes. Electrospray ionization proved to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles, and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply much relevant information on the methodology of porphyrinoid analysis by mass spectrometry, which can be useful in designing future MS or liquid chromatography-MS experiments.
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low-temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The influence of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from the HTC literature. A Sobol analysis was subsequently conducted to identify the parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability of the random forest models are superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions.
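As a hedged illustration of the Sobol step, the sketch below uses the SALib package on an invented surrogate for hydrochar yield. The variable names, bounds, and the surrogate function are assumptions for demonstration, not values from the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["temperature_C", "time_h", "solids_ratio"],   # illustrative inputs
    "bounds": [[180, 280], [0.5, 24], [0.05, 0.3]],
}

def hydrochar_yield(x):
    t, h, s = x
    return 0.8 - 0.001 * (t - 180) - 0.005 * h + 0.4 * s    # toy surrogate model

param_values = saltelli.sample(problem, 1024)               # Saltelli sampling
y = np.apply_along_axis(hydrochar_yield, 1, param_values)
si = sobol.analyze(problem, y)
print(si["S1"], si["ST"])   # first-order and total-order sensitivity indices
```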
Analysis of Transition-Sensitized Turbulent Transport Equations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.
2005-01-01
The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominant dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.
Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can
2016-09-01
There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspectives of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness of up to 1 year of treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life-years (QALYs) gained and incremental cost over a lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate overall survival to the lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of patient-level insurance data in China. To assess the impact of parameter uncertainty on the cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using Monte Carlo simulation. The therapy of clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life-years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 China Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations fell below the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to the price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with clopidogrel plus ASA for up to 1 year for patients with NSTEACS is cost effective in the local context of China from a public payer's perspective. Funding: Sanofi China.
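The mechanics of such a Markov cohort evaluation can be sketched compactly. The states, transition probabilities, costs, utilities, and horizon below are invented placeholders, not the CURE-derived inputs of the study; the point is only the discounted accumulation of costs and QALYs as the cohort moves through the transition matrix.

```python
import numpy as np

# Hypothetical 3-state annual Markov cohort model (all numbers illustrative)
states = ["event_free", "post_event", "dead"]
P = np.array([                 # row-stochastic transition matrix per cycle
    [0.90, 0.07, 0.03],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])
cost = np.array([4000.0, 9000.0, 0.0])    # CNY per cycle in each state
utility = np.array([0.85, 0.70, 0.0])     # QALY weight per cycle
disc = 0.035                              # annual discount rate

cohort = np.array([1.0, 0.0, 0.0])        # everyone starts event-free
total_cost = total_qaly = 0.0
for t in range(40):                       # approximate lifetime horizon
    total_cost += (cohort @ cost) / (1 + disc) ** t
    total_qaly += (cohort @ utility) / (1 + disc) ** t
    cohort = cohort @ P                   # advance the cohort one cycle
print(total_cost, total_qaly)
```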
Alsaqa'aby, Mai F; Vaidya, Varun; Khreis, Noura; Khairallah, Thamer Al; Al-Jedai, Ahmed H
2017-01-01
Promising clinical and humanistic outcomes are associated with the use of new oral agents in the treatment of relapsing-remitting multiple sclerosis (RRMS). This is the first cost-effectiveness study comparing these medications in Saudi Arabia. We aimed to compare the cost-effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and interferon (IFN)-β1a products (Avonex and Rebif) as first-line therapies in the treatment of patients with RRMS from a Saudi payer perspective. Design: cohort simulation model (Markov model). Setting: tertiary care hospital. A hypothetical cohort of 1000 Saudi patients with RRMS was assumed to enter a Markov model with a time horizon of 20 years and an annual cycle length. The model was developed based on the expanded disability status scale (EDSS) to evaluate the cost-effectiveness of the five disease-modifying drugs (DMDs) from a healthcare system perspective. Data on EDSS progression and relapse rates were obtained from the literature; cost data were obtained from King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia. Results were expressed as incremental cost-effectiveness ratios (ICERs) and net monetary benefits (NMBs) in Saudi Riyals and converted to equivalent $US. The base-case willingness-to-pay (WTP) threshold was assumed to be $100,000 (SAR375,000). One-way sensitivity analysis and probabilistic sensitivity analysis were conducted to test the robustness of the model. Main outcome measures: ICERs and NMBs. The base-case analysis showed Rebif as the optimal therapy at a WTP threshold of $100,000. Avonex had the lowest ICER value, $337,282/QALY, when compared to Rebif. One-way sensitivity analysis demonstrated that the results were sensitive to the utility weights of health states three and four and to the cost of Rebif. None of the DMDs were found to be cost-effective in the treatment of RRMS at a WTP threshold of $100,000 in this analysis; the DMDs would only be cost-effective at a WTP above $300,000. The current analysis did not reflect the Saudi population preference in the valuation of health states and did not consider the societal perspective in terms of cost.
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
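Of the measures compared above, Higuchi's fractal dimension is easy to sketch. The following Python function is a generic textbook-style implementation on synthetic data, assuming the standard curve-length normalisation; it is not the study's optimized, fMRI-specific pipeline.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's estimate of the fractal dimension of a 1-D time series."""
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)               # subsample x[m], x[m+k], ...
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # standard length normalisation
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # L(k) ~ k**(-D), so the slope of log L(k) against log(1/k) estimates D
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(1)
print(higuchi_fd(rng.standard_normal(512)))        # close to 2 for white noise
```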
AKAP150-mediated TRPV1 sensitization is disrupted by calcium/calmodulin.
Chaudhury, Sraboni; Bal, Manjot; Belugin, Sergei; Shapiro, Mark S; Jeske, Nathaniel A
2011-05-14
The transient receptor potential vanilloid type 1 (TRPV1) channel is expressed in nociceptive sensory neurons and is sensitive to phosphorylation. A-Kinase Anchoring Protein 79/150 (AKAP150) mediates phosphorylation of TRPV1 by Protein Kinases A and C, modulating channel activity. However, few studies have focused on the regulatory mechanisms that control AKAP150 association with TRPV1. In the present study, we identify a role for calcium/calmodulin in controlling AKAP150 association with, and sensitization of, TRPV1. In trigeminal neurons, intracellular accumulation of calcium reduced AKAP150 association with TRPV1 in a manner sensitive to calmodulin antagonism. This was also observed in transfected Chinese hamster ovary (CHO) cells, providing a model for molecular analysis of the association. In CHO cells, deletion of the C-terminal calmodulin-binding site of TRPV1 resulted in greater association with AKAP150 and increased channel activity. Furthermore, co-expression of wild-type calmodulin in CHO cells significantly reduced TRPV1 association with AKAP150, as evidenced by total internal reflection fluorescence-fluorescence resonance energy transfer (TIRF-FRET) analysis and electrophysiology. Finally, dominant-negative calmodulin co-expression increased TRPV1 association with AKAP150 and increased basal and PKA-sensitized channel activity. The results from these studies indicate that calcium/calmodulin interferes with the association of AKAP150 with TRPV1, potentially extending resensitization of the channel.
A Pro-active Real-time Forecasting and Decision Support System for Daily Management of Marine Works
NASA Astrophysics Data System (ADS)
Bollen, Mark; Leyssen, Gert; Smets, Steven; De Wachter, Tom
2016-04-01
Marine works involving turbidity-generating activities (e.g. dredging, dredge-spoil placement) can generate environmental stress in and around a project area in the form of sediment plumes causing light reduction and sedimentation. If these works are situated near sensitive habitats such as sea-grass beds or coral reefs, near sensitive human activities such as aquaculture farms or water intakes, or if contaminants are present in the water or soil, environmental scrutiny is advised. Environmental regulations can impose limitations on these activities in the form of turbidity thresholds, spill budgets, and contaminant levels. Breaching environmental regulations can result in increased monitoring, adaptation of the works planning and production rates, and ultimately in a (temporary) stop of activities, all of which entail time and cost impacts for a contractor and/or client. Sediment plume behaviour is governed by the dredging process, soil properties, and ambient conditions (currents, water depth) and can be modelled. Usually this is done during the preparatory EIA phase of a project, to estimate environmental impact based on climatic scenarios. An operational forecasting tool is developed to adapt marine work schedules to real-time circumstances and thus avoid exceedance of critical threshold levels at sensitive areas. The forecasting system is based on a Python workflow manager with a MySQL database and a Django front-end web tool for user interaction and visualisation of the model results. The core consists of a numerical hydrodynamic model with a sediment transport module (Mike21 from DHI). This model is driven by space- and time-varying wind fields and wave boundary conditions, and by turbidity inputs (suspended sediment source terms) based on marine works production rates and soil properties. The resulting threshold analysis allows the operator to identify potential impact at the sensitive areas and instigate an adaptation of the marine work schedule if needed. In order to use this toolbox in real-time situations and facilitate forecasting of impacts of planned dredge works, the following operational online functionalities are implemented:
• Automated fetching and preparation of the input data, including 7-day forecast wind and wave fields, real-time measurements, and user-defined turbidity inputs based on scheduled marine works.
• Automated forecast generation, with user-configurable scenarios running in parallel.
• Export and conversion of the model results, time series and maps, into a standardized format (netCDF).
• Automatic analysis and processing of model results, including the calculation of indicator turbidity values and the exceedance analysis of threshold levels at the different sensitive areas; data assimilation with the real-time on-site turbidity measurements is implemented in this threshold analysis.
• Pre-programmed generation of animated sediment plumes, specific charts and PDF reports to allow rapid interpretation of the model results by the operators, facilitating decision making in operational planning.
The performed marine works, resulting from the marine work schedule proposed by the forecasting system, are evaluated by a threshold analysis on the validated turbidity measurements at the sensitive sites. This feedback loop allows a check of the system in order to evaluate forecast and model uncertainties.
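The exceedance analysis at sensitive sites reduces to a simple check of forecast time series against site-specific limits. The sketch below is a toy Python illustration, assuming hypothetical site names, thresholds, and turbidity values; the operational system described above wraps this kind of check in its workflow manager.

```python
import pandas as pd

# Illustrative turbidity thresholds per sensitive site (mg/L, invented)
thresholds = {"seagrass_bed": 25.0, "water_intake": 50.0}

# Toy forecast time series of suspended-sediment concentration at each site
forecast = pd.DataFrame({
    "seagrass_bed": [12.0, 28.0, 31.0, 18.0],
    "water_intake": [40.0, 45.0, 61.0, 38.0],
}, index=pd.date_range("2016-04-01", periods=4, freq="6h"))

for site, limit in thresholds.items():
    exceed = forecast[site] > limit
    if exceed.any():
        # In the operational tool this would trigger a schedule adaptation
        print(f"{site}: threshold {limit} mg/L exceeded at",
              list(forecast.index[exceed]))
```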
Factors influencing the robustness of P-value measurements in CT texture prognosis studies
NASA Astrophysics Data System (ADS)
McQuaid, Sarah; Scuffham, James; Alobaidli, Sheaka; Prakash, Vineet; Ezhil, Veni; Nisbet, Andrew; South, Christopher; Evans, Philip
2017-07-01
Several studies have recently reported on the value of CT texture analysis in predicting survival, although the topic remains controversial, with further validation needed to consolidate the evidence base. The aim of this study was to investigate the effect of varying the input parameters in the Kaplan-Meier analysis, to determine whether the resulting P-value can be considered a robust indicator of a parameter's prognostic potential. A retrospective analysis of the CT-based normalised entropy of 51 patients with lung cancer was performed, and overall survival data for these patients were collected. A normalised entropy cut-off was chosen to split the patient cohort into two groups, and log-rank testing was performed to assess the survival difference between the two groups. This was repeated for varying normalised entropy cut-offs and varying follow-up periods. Our findings were also compared with previously published results to assess the robustness of this parameter in a multi-centre patient cohort. The P-value was found to be highly sensitive to the choice of cut-off value, with small changes in cut-off producing substantial changes in P. The P-value was also sensitive to the follow-up period, with particularly noisy results at short follow-up periods. Using conditions matched to previously published results, a P-value of 0.162 was obtained. Survival analysis results can be highly sensitive to the choice of texture cut-off value used to dichotomise patients, which should be taken into account when performing such studies to avoid reporting false-positive results. Short follow-up periods also produce unstable results and should therefore be avoided to ensure reproducibility. Previously published findings indicating the prognostic value of normalised entropy were not replicated here, but further studies with larger patient numbers would be required to determine the cause of the different outcomes.
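The cut-off sweep at the heart of this study can be sketched with the lifelines package: for each candidate entropy cut-off, split the cohort and run a log-rank test. The data below are synthetic and the quantile grid is arbitrary; with random data the P-values fluctuate across cut-offs, which is exactly the instability the study warns about.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
entropy = rng.uniform(0.6, 1.0, 51)             # normalised entropy per patient
time = rng.exponential(500, 51)                 # survival time in days (toy)
event = rng.integers(0, 2, 51).astype(bool)     # death observed?

for cut in np.quantile(entropy, [0.3, 0.4, 0.5, 0.6, 0.7]):
    lo, hi = entropy <= cut, entropy > cut      # dichotomise the cohort
    res = logrank_test(time[lo], time[hi],
                       event_observed_A=event[lo], event_observed_B=event[hi])
    print(f"cut={cut:.3f}  p={res.p_value:.3f}")
```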
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkes, Olga, E-mail: o.parkes@ucl.ac.uk; Lettieri, Paola, E-mail: p.lettieri@ucl.ac.uk; Bogle, I. David L.
Highlights: • Application of LCA in planning integrated waste management systems. • Environmental valuation of 3 legacy scenarios for the Olympic Park. • Hot-spot analysis highlights the importance of energy and materials recovery. • Most environmental savings are achieved through materials recycling. • Sensitivity analysis shows importance of waste composition and recycling rates. - Abstract: This paper presents the results of the life cycle assessment (LCA) of 10 integrated waste management systems (IWMSs) for 3 potential post-event site design scenarios of the London Olympic Park. The aim of the LCA study is to evaluate direct and indirect emissions resulting from various treatment options of municipal solid waste (MSW) annually generated on site, together with avoided emissions resulting from energy, materials and nutrients recovery. IWMSs are modelled using GaBi v6.0 Product Sustainability software and results are presented based on the CML (v.Nov-10) characterisation method. The results show that IWMSs with advanced thermal treatment (ATT) and incineration with energy recovery have a lower Global Warming Potential (GWP) than IWMSs where landfill is the primary waste treatment process. This is due to the higher direct emissions and lower avoided emissions of the landfill process compared to the thermal treatment processes. LCA results demonstrate that significant environmental savings are achieved through substitution of virgin materials with recycled ones. The results of the sensitivity analysis carried out for IWMS 1 show that increasing the recycling rate by 5%, 10% and 15% compared to the baseline scenario can reduce GWP by 8%, 17% and 25%, respectively. Sensitivity analysis also shows how changes in waste composition affect the overall result of the system. The outcomes of such assessments provide decision-makers with fundamental information regarding the environmental impacts of different waste treatment options, necessary for sustainable waste management planning.
Puttarajappa, Chethan; Wijkstrom, Martin; Ganoza, Armando; Lopez, Roberto; Tevar, Amit
2018-01-01
Background Recent studies have reported a significant decrease in wound problems and hospital stay in obese patients undergoing renal transplantation by robotic-assisted minimally invasive techniques, with no difference in graft function. Objective Due to the lack of cost-benefit studies on the use of robotic-assisted renal transplantation versus the open surgical procedure, the primary aim of our study is to develop a Markov model to analyze the cost-benefit of robotic surgery versus open traditional surgery in obese patients in need of a renal transplant. Methods Electronic searches will be conducted to identify studies comparing open renal transplantation versus robotic-assisted renal transplantation. Costs associated with the two surgical techniques will incorporate the expenses of the resources used for the operations. A decision analysis model will be developed to simulate a randomized controlled trial comparing three interventional arms: (1) continuation of renal replacement therapy for patients who are considered non-suitable candidates for renal transplantation due to obesity, (2) transplant recipients undergoing open transplant surgery, and (3) transplant patients undergoing robotic-assisted renal transplantation. TreeAge Pro 2017 R1 (TreeAge Software, Williamstown, MA, USA) will be used to create a Markov model, and microsimulation will be used to compare costs and benefits for the two competing surgical interventions. Results The model will simulate a randomized controlled trial of adult obese patients affected by end-stage renal disease undergoing renal transplantation. The absorbing state of the model will be patients' death from any cause. By choosing death as the absorbing state, we will be able to simulate the population of renal transplant recipients from the day of their randomization to transplant surgery or continuation on renal replacement therapy until their death, and to perform sensitivity analysis around patients' age at the time of randomization to determine whether age is a critical variable for cost-benefit or cost-effectiveness analysis comparing renal replacement therapy, robotic-assisted surgery, and open renal transplant surgery. Conclusions After running the model, one of the three competing strategies will emerge as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results of the model, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters, with the main intent of assessing whether the winning strategy is sensitive to rigorous and plausible variations of those values. PMID:29519780
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTMs): one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage-zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate information to resolve effective storage-zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to the transport parameters and the effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. As in conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limbs of the concentration breakthrough curve.
NASA Astrophysics Data System (ADS)
Sakamoto, Yuri; Uemura, Kohei; Ikuta, Takashi; Maehashi, Kenzo
2018-04-01
We have succeeded in fabricating a hydrogen gas sensor based on palladium-modified graphene field-effect transistors (FETs). A negative voltage shift in the transfer characteristics was observed upon exposure to hydrogen gas, which is explained by a change in the work function. The dependence of the voltage shift on hydrogen concentration was investigated using graphene FETs with palladium deposited by three different evaporation processes. The results indicate that the hydrogen detection sensitivity of the palladium-modified graphene FETs is strongly dependent on the palladium configuration. Therefore, the palladium-modified graphene FET is a candidate for breath analysis.
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs has limited sensitivity and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern of change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing good discrimination between the gait phenotypes of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was instead included as an explanatory variable, resulting in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched the gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study of the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified a significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and a more comprehensible data representation. Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
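A minimal sketch of genotype classification with LDA and leave-one-out cross-validation, in the spirit of Model 1/Model 3, might look as follows in Python with scikit-learn. The feature matrices are random placeholders, not accelerometric data, and the eighth column standing in for age is an assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
# 7 accelerometric parameters per record plus one extra column standing in for age
X_healthy = rng.normal(0.0, 1.0, size=(40, 8))
X_grmd = rng.normal(0.7, 1.0, size=(40, 8))
X = np.vstack([X_healthy, X_grmd])
y = np.array([0] * 40 + [1] * 40)          # 0 = healthy, 1 = GRMD

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("leave-one-out misclassification rate:", 1.0 - scores.mean())
```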
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to the resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by over-smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...
2017-01-24
Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
Resonance ionization for analytical spectroscopy
Hurst, George S.; Payne, Marvin G.; Wagner, Edward B.
1976-01-01
This invention relates to a method for the sensitive and selective analysis of an atomic or molecular component of a gas. According to this method, the desired neutral component is ionized by one or more resonance photon absorptions, and the resultant ions are measured in a sensitive counter. Numerous energy pathways are described for accomplishing the ionization including the use of one or two tunable pulsed dye lasers.
Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa
2017-06-27
In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on the outbreak using social network analysis together with global sensitivity and uncertainty analyses. Results for degree (χ²=17.6619, P<0.0001) and betweenness (χ²=21.4186, P<0.0001) centrality suggested that the selection of sampling subjects differed between traditional epidemiological methods and the newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties and variations of the outbreak over space and time. We conclude that the newer approaches were significantly more efficient for managing and controlling infectious disease outbreaks, as well as saving time and public health resources, and could be widely applied to similar local outbreaks.
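The centrality and clique computations referred to above are standard social network analysis operations. A toy networkx sketch, with invented case identifiers and contacts, is shown below; connected_components recovers the kind of independent transmission networks described in the abstract.

```python
import networkx as nx

# Toy contact network: edges are invented epidemiological links between cases
edges = [("case1", "case2"), ("case1", "case3"), ("case3", "case4"),
         ("case5", "case6"), ("case6", "case7")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)               # who has the most contacts
betweenness = nx.betweenness_centrality(G)     # who bridges transmission chains
cliques = list(nx.find_cliques(G))             # tightly connected case clusters
components = list(nx.connected_components(G))  # independent transmission networks
print(degree, betweenness, components, sep="\n")
```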
Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.
2012-01-01
A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level, about 100-fold more sensitive than traditional CE with ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSDs of migration time and peak area at both low and high concentrations of unsaturated disaccharides were all less than 2.7% and 3.2%, respectively, demonstrating that the method is reproducible. The analysis was successfully applied to cultured Chinese hamster ovary cell samples for the determination of GAG disaccharides. The current method simplifies the GAG extraction steps and reduces the inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate that results from separate analyses of a single sample. PMID:22609076
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but it presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve (Me = 0.90); however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most the sensitivity was around, or even lower than, a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested for the prediction of dementia from several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
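A compact way to reproduce this kind of classifier comparison is cross-validated AUC and sensitivity (recall) in scikit-learn. The sketch below uses synthetic data as a stand-in for the 10 neuropsychological predictors and compares only two of the ten classifiers studied.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 10 neuropsychological test scores and a binary outcome
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    sens = cross_val_score(clf, X, y, cv=5, scoring="recall")  # sensitivity
    print(f"{name}: AUC={auc.mean():.2f}  sensitivity={sens.mean():.2f}")
```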
Cudahy, Patrick G.T; Schumacher, Samuel G.; Steingart, Karen R.; Pai, Madhukar; Denkinger, Claudia M.
2017-01-01
Only 25% of multidrug-resistant tuberculosis (MDR-TB) cases are currently diagnosed. Line probe assays (LPAs) enable rapid drug-susceptibility testing for rifampicin (RIF) and isoniazid (INH) resistance and Mycobacterium tuberculosis detection. Genotype MTBDRplusV1 was WHO-endorsed in 2008 but newer LPAs have since been developed. This systematic review evaluated three LPAs: Hain Genotype MTBDRplusV1, MTBDRplusV2 and Nipro NTM+MDRTB. Study quality was assessed with QUADAS-2. Bivariate random-effects meta-analyses were performed for direct and indirect testing. Results for RIF and INH resistance were compared to phenotypic and composite (incorporating sequencing) reference standards. M. tuberculosis detection results were compared to culture. 74 unique studies were included. For RIF resistance (21 225 samples), pooled sensitivity and specificity (with 95% confidence intervals) were 96.7% (95.6–97.5%) and 98.8% (98.2–99.2%). For INH resistance (20 954 samples), pooled sensitivity and specificity were 90.2% (88.2–91.9%) and 99.2% (98.7–99.5%). Results were similar for direct and indirect testing and across LPAs. Using a composite reference standard, specificity increased marginally. For M. tuberculosis detection (3451 samples), pooled sensitivity was 94% (89.4–99.4%) for smear-positive specimens and 44% (20.2–71.7%) for smear-negative specimens. In patients with pulmonary TB, LPAs have high sensitivity and specificity for RIF resistance and high specificity and good sensitivity for INH resistance. This meta-analysis provides evidence for policy and practice. PMID:28100546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, L.; Lanza, R.C.
1999-12-01
The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The position of the nuclear elements is determined by the location of the gamma emitters. Among existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA) neutrons are used with very low efficiency, while in Fast Neutron Analysis (FNA) the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency of both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. As compared to PFNA, the improvement in neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement in gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system depends also on the nature and distribution of background events, and this comparison may somewhat reduce the effective sensitivity of CAFNA. The authors have performed analysis, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized by using CAFNA.
Pirro, Valentina; Hattab, Eyas M.; Cohen-Gadol, Aaron A.; Cooks, R. Graham
2016-01-01
Desorption electrospray ionization mass spectrometry (DESI-MS) imaging was used to analyze unmodified human brain tissue sections from 39 subjects sequentially in the positive and negative ionization modes. Acquisition of both MS polarities allowed a more complete analysis of the human brain tumor lipidome, as some phospholipids ionize preferentially in the positive mode and others in the negative mode. Normal brain parenchyma, comprised of grey matter and white matter, was differentiated from glioma using positive and negative ion mode DESI-MS lipid profiles with the aid of principal component analysis combined with linear discriminant analysis. Principal component-linear discriminant analysis of the positive mode lipid profiles distinguished grey matter, white matter, and glioma with an average sensitivity of 93.2% and specificity of 96.6%, while the negative mode lipid profiles gave an average sensitivity of 94.1% and specificity of 97.4%. The positive and negative mode lipid profiles provided complementary information. Principal component-linear discriminant analysis of the combined positive and negative mode lipid profiles, via data fusion, resulted in approximately the same average sensitivity (94.7%) and specificity (97.6%) as the positive and negative modes used individually; however, combining the modes improved the sensitivity and specificity for all classes (grey matter, white matter, and glioma) beyond 90%. Further principal component analysis using the fused data resulted in the subgrouping of glioma into two groups associated with grey and white matter, respectively, a separation not apparent in the principal component analysis scores plots of the separate positive and negative mode data. The interrelationship of tumor cell percentage and the lipid profiles is discussed, along with how such a measure could be used to assess residual tumor at surgical margins. PMID:27658243
Kedia, Saurabh; Sharma, Raju; Sreenivas, Vishnubhatla; Madhusudhan, Kumble Seetharama; Sharma, Vishal; Bopanna, Sawan; Pratap Mouli, Venigalla; Dhingra, Rajan; Yadav, Dawesh Prakash; Makharia, Govind; Ahuja, Vineet
2017-04-01
Abdominal computed tomography (CT) can noninvasively image the entire gastrointestinal tract and assess extraintestinal features that are important in differentiating Crohn's disease (CD) and intestinal tuberculosis (ITB). The present meta-analysis pooled the results of all studies on the role of abdominal CT in differentiating between CD and ITB. We searched PubMed and Embase for all publications in English that analyzed the features differentiating between CD and ITB on abdominal CT. The features included comb sign, necrotic lymph nodes, asymmetric bowel wall thickening, skip lesions, fibrofatty proliferation, mural stratification, ileocaecal area, long segment, and left colonic involvements. Sensitivity, specificity, positive and negative likelihood ratios, and diagnostic odds ratio (DOR) were calculated for all the features. A symmetric receiver operating characteristic curve was plotted for features present in >3 studies. Heterogeneity and publication bias were assessed, and sensitivity analysis was performed by excluding studies that compared features on conventional abdominal CT instead of CT enterography (CTE). We included 6 studies (4 CTE, 1 conventional abdominal CT, and 1 CTE+conventional abdominal CT) involving 417 and 195 patients with CD and ITB, respectively. Necrotic lymph nodes had the highest diagnostic accuracy (sensitivity, 23%; specificity, 100%; DOR, 30.2) for ITB diagnosis, and comb sign (sensitivity, 82%; specificity, 81%; DOR, 21.5) followed by skip lesions (sensitivity, 86%; specificity, 74%; DOR, 16.5) had the highest diagnostic accuracy for CD diagnosis. On sensitivity analysis, the diagnostic accuracy of the other features, excluding asymmetric bowel wall thickening, remained similar. Necrotic lymph nodes and comb sign on abdominal CT had the best diagnostic accuracy in differentiating CD and ITB.
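The accuracy measures quoted above all derive from a 2x2 table. As a worked example with invented counts (not the meta-analysis data), sensitivity, specificity, likelihood ratios, and the DOR can be computed as follows.

```python
# Invented 2x2 counts for a hypothetical CT feature vs the reference diagnosis
tp, fn = 45, 150   # diseased patients with / without the feature
fp, tn = 2, 415    # non-diseased patients with / without the feature

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio
dor = lr_pos / lr_neg                        # equivalently (tp*tn)/(fp*fn)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"LR+={lr_pos:.1f} LR-={lr_neg:.2f} DOR={dor:.1f}")
```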
Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?
Ershadi, Saba; Shayanfar, Ali
2018-03-22
The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different methods, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various methods with the LLOQ shows a considerable difference. This significant difference between the LOD and LOQ calculated by the various methods and the LLOQ should be considered when evaluating the sensitivity of spectroscopic methods.
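For reference, the most widely used convention (e.g., ICH guidance) estimates LOD = 3.3σ/S and LOQ = 10σ/S from the standard deviation σ of blank responses and the calibration slope S; other conventions use different multipliers or residual-based σ estimates, which is precisely the source of the discrepancies reported above. A minimal sketch with purely illustrative numbers (not the paper's data):

```python
def lod_loq(sigma_blank, slope, k_lod=3.3, k_loq=10.0):
    """LOD and LOQ from the blank standard deviation and calibration slope,
    following the common ICH convention (3.3*sigma/S and 10*sigma/S)."""
    return k_lod * sigma_blank / slope, k_loq * sigma_blank / slope

# Illustrative values: blank SD in signal units, slope in signal per (ug/mL).
lod, loq = lod_loq(sigma_blank=0.8, slope=120.0)
print(f"LOD = {lod:.4f} ug/mL, LOQ = {loq:.4f} ug/mL")
```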
2016-01-01
Introduction Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. Goal To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. Method A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. Results A total of 1,269 summaries were obtained from the electronic databases and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests’ use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests’ sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and the specificity was 99%. Conclusion The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity, with a sensitivity of 87% and specificity of 99%, when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis when there is suspicion of fetal toxoplasmosis. PMID:27055272
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and in turn influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity studies of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower resolution surfaces in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
NASA Astrophysics Data System (ADS)
Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah
2015-04-01
Sensitivity analysis (SA) is an integral and important validatory check of a computer simulation model before it is used to perform any kind of analysis. In the present work, we present the results from a SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model utilising a cutting-edge and robust Global Sensitivity Analysis (GSA) approach, based on the use of the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance and Vegetation Height. The external CO2 concentration in the leaf and the O3 concentration in the air also exhibited significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in light of the rapidly expanding use of this model worldwide, which also includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere). Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.
Barrett, K. E.; Ellis, K. D.; Glass, C. R.; ...
2015-12-01
The goal of the Accident Tolerant Fuel (ATF) program is to develop the next generation of Light Water Reactor (LWR) fuels with improved performance, reliability, and safety characteristics during normal operations and accident conditions and with reduced waste generation. An irradiation test series has been defined to assess the performance of proposed ATF concepts under normal LWR operating conditions. The Phase I ATF irradiation test series is planned to be performed as a series of drop-in capsule tests to be irradiated in the Advanced Test Reactor (ATR) operated by the Idaho National Laboratory (INL). Design, analysis, and fabrication processes for ATR drop-in capsule experiment preparation are presented in this paper to demonstrate the importance of special design considerations, parameter sensitivity analysis, and precise fabrication and inspection techniques for the innovative materials used in ATF experiment assemblies. A Taylor Series Method sensitivity analysis approach was used to identify the most critical variables in cladding and rodlet stress, temperature, and pressure calculations for design analyses. The results showed that internal rodlet pressure calculations are most sensitive to the fission gas release rate uncertainty, while temperature calculations are most sensitive to cladding I.D. and O.D. dimensional uncertainty. The analysis showed that stress calculations are most sensitive to rodlet internal pressure uncertainties; however, the results also indicated that the uncertainties in inside radius, outside radius, and internal pressure were all magnified as they propagate through the stress equation. This study demonstrates the importance for ATF concept development teams to provide the fabricators as much information as possible about the material properties and behavior observed in prototype testing, mock-up fabrication and assembly, and chemical and mechanical testing of the materials that may have been performed in the concept development phase. Special handling, machining, welding, and inspection requirements for materials, if known, should also be communicated to the experiment fabrication and inspection team.
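The Taylor Series Method referred to above is first-order uncertainty propagation: each input's contribution to the output variance is the squared partial derivative times the input variance. The sketch below implements the generic recipe with central finite differences; the thin-wall hoop-stress stand-in model and all numbers are hypothetical and are not the ATR design calculations.

```python
import numpy as np

def taylor_uncertainty(f, x0, sigmas, rel_step=1e-6):
    """First-order Taylor Series Method:
    var(y) ~ sum_i (df/dx_i)^2 * sigma_i^2, derivatives by central differences.
    Returns the nominal output, its propagated std, and per-input variance shares."""
    x0 = np.asarray(x0, dtype=float)
    contrib = []
    for i, s in enumerate(sigmas):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)
        contrib.append((dfdx * s) ** 2)
    var_y = sum(contrib)
    return f(x0), np.sqrt(var_y), [c / var_y for c in contrib]

# Stand-in model: thin-wall hoop stress = p * r / t (NOT the ATR design code).
hoop = lambda x: x[0] * x[1] / x[2]  # x = [pressure Pa, radius m, thickness m]
y0, sy, shares = taylor_uncertainty(hoop, [5e6, 4.8e-3, 0.6e-3],
                                    [0.5e6, 25e-6, 25e-6])
print(y0, sy, shares)  # variance shares show which input uncertainty dominates
```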
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to the system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny are generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we perform a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.
Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael
2012-07-01
To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided context and direction for the diagnosis of early-stage ovarian cancer.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
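A minimal GSA/UA sketch in the spirit of the framework described above, using the SALib package's Saltelli sampling and Sobol' analysis on a toy habitat-suitability model; the Gaussian suitability curves, input names, and bounds are invented for illustration and are not the published Vallisneria or Halodule models.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for an HSI model: suitability curves for salinity, depth and
# light, multiplied into a single habitat suitability index in [0, 1].
def hsi(x):
    salinity, depth, light = x
    s_sal = np.exp(-((salinity - 12.0) / 8.0) ** 2)
    s_dep = np.exp(-((depth - 0.8) / 0.5) ** 2)
    s_lig = 1.0 - np.exp(-3.0 * light)
    return s_sal * s_dep * s_lig

problem = {
    "num_vars": 3,
    "names": ["salinity", "depth", "light"],
    "bounds": [[0.0, 35.0], [0.1, 2.5], [0.0, 1.0]],
}
X = saltelli.sample(problem, 1024)             # Saltelli sampling design
Y = np.apply_along_axis(hsi, 1, X)

print(np.percentile(Y, [5, 50, 95]))           # UA: spread of HSI scores
Si = sobol.analyze(problem, Y)                 # GSA: Sobol' indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))
```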
Haley, Nicholas J.; Siepker, Chris; Walter, W. David; Thomsen, Bruce V.; Greenlee, Justin J.; Lehmkuhl, Aaron D.; Richt, Jürgen A.
2016-01-01
Chronic wasting disease (CWD), a transmissible spongiform encephalopathy of cervids, was first documented nearly 50 years ago in Colorado and Wyoming and has since spread to cervids in 23 states, two Canadian provinces, and the Republic of Korea. The expansion of this disease makes the development of sensitive diagnostic assays and antemortem sampling techniques crucial for the mitigation of its spread; this is especially true in cases of relocation/reintroduction of farmed or free-ranging deer and elk or surveillance studies of private or protected herds, where depopulation is contraindicated. This study sought to evaluate the sensitivity of the real-time quaking-induced conversion (RT-QuIC) assay by using recto-anal mucosa-associated lymphoid tissue (RAMALT) biopsy specimens and nasal brush samples collected antemortem from farmed white-tailed deer (n = 409). Antemortem findings were then compared to results from ante- and postmortem samples (RAMALT, brainstem, and medial retropharyngeal lymph nodes) evaluated by using the current gold standard in vitro assay, immunohistochemistry (IHC) analysis. We hypothesized that the sensitivity of RT-QuIC would be comparable to IHC analysis in antemortem tissues and would correlate with both the genotype and the stage of clinical disease. Our results showed that RAMALT testing by RT-QuIC assay had the highest sensitivity (69.8%) compared to that of postmortem testing, with a specificity of >93.9%. These data suggest that RT-QuIC, like IHC analysis, is an effective assay for detection of PrPCWD in rectal biopsy specimens and other antemortem samples and, with further research to identify more sensitive tissues, bodily fluids, or experimental conditions, has potential for large-scale and rapid automated testing for CWD diagnosis.
Co-morbid pain and opioid addiction: Long term effect of opioid maintenance on acute pain
Wachholtz, Amy; Gonzalez, Gerardo
2014-01-01
Introduction Medication assisted treatment for opioid dependence alters the pain experience. This study evaluated changes in pain sensitivity and tolerance with opioid treatments, and the duration of this effect after treatment cessation. Method 120 individuals with chronic pain were recruited in 4 groups (n=30): 1) methadone for opioid addiction; 2) buprenorphine for opioid addiction; 3) a history of opioid maintenance treatment for opioid addiction but with prolonged abstinence (M=121 weeks; SD=23.3); and 4) opioid naïve controls. Participants completed a psychological assessment and a cold water task including time to first pain (sensitivity) and time to stopping the pain task (tolerance). Data analysis used survival analyses. Results A Kaplan-Meier-Cox survival analysis showed group differences for both pain sensitivity (Log rank=15.50; p<.001) and tolerance (Log rank=20.11; p<.001). Current or historical use of opioid maintenance resulted in differing pain sensitivity compared to opioid naïve controls (p’s<.01). However, tolerance to pain was better among those with a history of opioid maintenance compared to active methadone patients (p<.05), with the highest tolerance found among opioid naïve control group participants (p’s<.001). Correlations within the prolonged abstinent group indicated that pain tolerance improved significantly as the length of opioid abstinence increased (R=.37; p<.05), but the duration of abstinence did not alter sensitivity (ns). Conclusion Among individuals with a history of prolonged opioid maintenance, there appear to be long-term differences in pain sensitivity that do not resolve with discontinuation of opioid maintenance. Although pain sensitivity does not change, pain tolerance does improve after opioid maintenance cessation. Implications for treating co-morbid opioid addiction and pain (acute and chronic) are discussed. PMID:25456326
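The survival machinery described here (Kaplan-Meier curves with log-rank comparisons of time-to-event pain measures) can be sketched with the lifelines package; the tolerance times below are simulated stand-ins, since the study data are not reproduced here, and the 120 s task ceiling is a hypothetical value illustrating how non-withdrawal is handled as censoring.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Simulated cold-water tolerance times in seconds (NOT the study's data),
# censored at a hypothetical 120 s task ceiling.
methadone = np.minimum(rng.exponential(35.0, 30), 120.0)
controls = np.minimum(rng.exponential(90.0, 30), 120.0)
obs_m = methadone < 120.0   # False = still tolerating at ceiling (censored)
obs_c = controls < 120.0

km = KaplanMeierFitter().fit(methadone, event_observed=obs_m, label="methadone")
result = logrank_test(methadone, controls,
                      event_observed_A=obs_m, event_observed_B=obs_c)
print(result.test_statistic, result.p_value)
```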
Plumb, Andrew A; Halligan, Steve; Pendsé, Douglas A; Taylor, Stuart A; Mallett, Susan
2014-05-01
CT colonography (CTC) is recommended after positive faecal occult blood testing (FOBt) when colonoscopy is incomplete or infeasible. We aimed to estimate the sensitivity and specificity of CTC for colorectal cancer and adenomatous polyps following positive FOBt via systematic review. The MEDLINE, EMBASE, AMED and Cochrane Library databases were searched for CTC studies reporting sensitivity and specificity for colorectal cancer and adenomatous polyps. Included subjects had tested FOBt-positive by guaiac or immunochemical methods. Per-patient detection rates were summarized via forest plots. Meta-analysis of sensitivity and specificity was conducted using a bivariate random effects model and the average operating point calculated. Of 538 articles considered, 5 met inclusion criteria, describing results from 622 patients. Research study quality was good. CTC had a high per-patient average sensitivity of 88.8 % (95 % CI 83.6 to 92.5 %) for ≥6 mm adenomas or colorectal cancer, with low between-study heterogeneity. Specificity was both more heterogeneous and lower, at an average of 75.4 % (95 % CI 58.6 to 86.8 %). Few studies have investigated CTC in FOBt-positive individuals. CTC is sensitive at a ≥6 mm threshold but specificity is lower and variable. Despite the limited data, these results suggest that CTC may adequately substitute for colonoscopy when the latter is undesirable. • FOBt is the most common mass screening test for colorectal cancer. • Few studies evaluate CT colonography after positive FOBt. • CTC is approximately 89 % sensitive for ≥6 mm adenomas/cancer in this setting. • Specificity is lower, at approximately 75 %, and more variable. • CT colonography is a good alternative when colonoscopy is undesirable.
Quality theory paper writing for medical examinations.
Shukla, Samarth; Acharya, Sourya; Acharya, Neema; Shrivastava, Tripti; Kale, Anita
2014-04-01
Aim & Objectives: To develop tactful paper-writing skills through delivery and depiction of the expressions required for standard or superior essay writing; to understand the relevance and tact of theoretical expression in exam paper writing; to learn the indices of a standard or quality theory/essay answer (SAQ/LAQ); and to apply the knowledge and skill gained through these theory-writing exercises and assignments to achieve better scores in examinations. The study subjects were divided into two groups: Group A (17 students) and Group B (10 students), selected from the II M.B.B.S 4th term. Students of Group A were sensitized on how to write a theory paper and went through 4 phases, namely a pre-sensitization test, sensitization (imparting skills of good theory paper writing through home assignments and deliberations/guidance), a post-sensitization test, and evaluation. Students of Group A (17 students) undertook theory tests twice, i.e., before and after sensitization, and students of Group B (10 students), who were not sensitized, took the theory test alongside the post-sensitized Group A students (10 randomly selected students). Both groups were given general pathology as the test syllabus, taught to both groups in didactic lectures during the preceding 6 months. The results of the pre- and post-sensitization tests from both groups were analyzed: intra-group comparisons (pre-sensitized Group A with post-sensitized Group A) and inter-group comparisons (non-sensitized Group B with sensitized Group A) were made. Significant differences were found between the results of the pre- and post-sensitization tests in Group A (intra-group analysis) and between the inter-group (Group A and B) post-sensitization tests, with remarkable improvement in student theory paper writing skills after sensitization of Group A. Medical students should be mandatorily guided and exposed to the nuances and tact of writing theory papers for their examinations, as this gives them a better understanding of presentation and ultimately improves their scores in theory exams.
Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.
2014-01-01
Background Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. Methods In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Results Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, due to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to the use of different ROC methods exceeded differences between test results for ROC AUC. Conclusions The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method to compare diagnostic tests. PMID:25353643
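The contrast the paper draws, binary sensitivity/specificity versus ROC AUC from confidence scores, is easy to reproduce in miniature; the simulated scores below are hypothetical but mimic the zero-inflated, bimodal score distribution the authors describe.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)  # 1 = polyp present (simulated reference standard)
# Bimodal confidence scores with many exact zeros, mimicking readers who
# score 0 whenever no polyp is seen:
scores = np.where(y == 1,
                  rng.choice([0, 60, 90], 200, p=[0.20, 0.30, 0.50]),
                  rng.choice([0, 60, 90], 200, p=[0.80, 0.15, 0.05]))
calls = scores >= 50         # the binary radiological report
sens = (calls & (y == 1)).sum() / (y == 1).sum()
spec = (~calls & (y == 0)).sum() / (y == 0).sum()
print(f"sens={sens:.2f}  spec={spec:.2f}  AUC={roc_auc_score(y, scores):.2f}")
```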
Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br
2015-01-15
Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive parameters of the ADM1. • The results indicate that the ADM1 was able to predict the experimental results. • Organic loading rates above 35 kg/m³·day affect the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model, after parameter estimation, was able to predict the experimental results of effluent acetate, propionate, composites and biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m³·day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
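A sketch of the dummy parameter idea using SALib: the toy model below (an Ishigami-type function) ignores its fourth input entirely, and that input's estimated total-order index is then used as the screening threshold. This illustrates the approach generically; it is not the authors' SWAT or PAWN implementation.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Ishigami-type toy model that completely ignores its 4th input (the dummy).
def model(x):
    return np.sin(x[0]) + 7.0 * np.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * np.sin(x[0])

problem = {"num_vars": 4,
           "names": ["x1", "x2", "x3", "dummy"],
           "bounds": [[-np.pi, np.pi]] * 4}
X = saltelli.sample(problem, 2048)
Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)

threshold = Si["ST"][-1]        # numerical-noise level estimated from the dummy
for name, st in zip(problem["names"], Si["ST"]):
    verdict = "influential" if st > threshold else "screen out"
    print(f"{name}: ST = {st:.3f} -> {verdict}")
```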
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive for the model performance of the ON-N and NH₃-N simulations, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive for the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
Modeling and Analysis of a Combined Stress-Vibration Fiber Bragg Grating Sensor
Yao, Kun; Lin, Qijing; Jiang, Zhuangde; Zhao, Na; Tian, Bian; Shi, Peng; Peng, Gang-Ding
2018-01-01
A combined stress-vibration sensor was developed to measure stress and vibration simultaneously based on fiber Bragg grating (FBG) technology. The sensor is composed of two FBGs and a stainless steel plate with a special design. The two FBGs sense vibration and stress, and the sensor can realize temperature compensation by itself. The stainless steel plate significantly increases the sensitivity of the vibration measurement. Theoretical analysis and the Finite Element Method (FEM) were used to analyze the sensor’s working mechanism. As demonstrated by the analysis, the sensor has working ranges of 0–6000 Hz for vibration sensing and 0–100 MPa for stress sensing. The corresponding vibration sensitivity is 0.46 pm/g and the resulting stress sensitivity is 5.94 pm/MPa, while the nonlinearity errors for vibration and stress measurement are 0.77% and 1.02%, respectively. Compared to general FBGs, the vibration sensitivity of this sensor is 26.2 times higher. Therefore, the developed sensor can be used to concurrently detect vibration and stress. As the sensor has a height of 1 mm and a weight of 1.15 g, it is well suited to miniaturization and integration. PMID:29494544
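The self temperature compensation of the two-FBG design amounts to inverting a small linear sensitivity matrix so that the thermal response cancels out of the stress estimate. The sketch below assumes a 2x2 mixing model: the 5.94 pm/MPa stress sensitivity is quoted in the abstract, while the ~10.5 pm/degC thermal coefficients, the near-zero stress response of the reference grating, and the measured shifts are typical illustrative values, not the paper's.

```python
import numpy as np

# Measured Bragg wavelength shifts are a linear mix of stress and temperature
# responses; solving the 2x2 system recovers both, i.e. self temperature
# compensation. Matrix entries other than 5.94 pm/MPa are illustrative only.
K = np.array([[5.94, 10.5],    # FBG1: [pm/MPa, pm/degC]
              [0.05, 10.5]])   # FBG2: nearly stress-free reference grating
shifts = np.array([65.0, 5.5])             # hypothetical measured shifts, pm
stress_mpa, d_temp = np.linalg.solve(K, shifts)
print(f"stress = {stress_mpa:.1f} MPa, dT = {d_temp:.2f} degC")
```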
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
Cost effectiveness analysis of screening for sight threatening diabetic eye disease
James, Marilyn; Turner, David A; Broadbent, Deborah M; Vora, Jiten; Harding, Simon P
2000-01-01
Objective To measure the cost effectiveness of systematic photographic screening for sight threatening diabetic eye disease compared with existing practice. Design Cost effectiveness analysis. Setting Liverpool. Subjects A target population of 5000 diabetic patients invited for screening. Main outcome measures Cost effectiveness (cost per true positive) of systematic and opportunistic programmes; incremental cost effectiveness of replacing opportunistic with systematic screening. Results Baseline prevalence of sight threatening eye disease was 14.1%. The cost effectiveness of the systematic programme was £209 (sensitivity 89%, specificity 86%, compliance 80%, annual cost £104 996) and of the opportunistic programme was £289 (combined sensitivity 63%, specificity 92%, compliance 78%, annual cost £99 981). The incremental cost effectiveness of completely replacing the opportunistic programme was £32. Absolute values of cost effectiveness were highly sensitive to varying prevalence, sensitivity and specificity, compliance, and programme size. Conclusion Replacing existing programmes with systematic screening for diabetic eye disease is justified. PMID:10856062
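The headline figures can be reproduced directly from the quoted inputs, reading cost per true positive as annual cost divided by (target population x compliance x prevalence x sensitivity); this reading of the abstract is an assumption, but it recovers both reported values.

```python
def cost_per_true_positive(target_pop, compliance, prevalence, sensitivity,
                           annual_cost):
    """Cost effectiveness as annual programme cost per true positive found."""
    true_positives = target_pop * compliance * prevalence * sensitivity
    return annual_cost / true_positives

# Systematic programme: ~209 GBP per true positive
print(cost_per_true_positive(5000, 0.80, 0.141, 0.89, 104996))
# Opportunistic programme: ~289 GBP per true positive
print(cost_per_true_positive(5000, 0.78, 0.141, 0.63, 99981))
```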
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Perturbation analysis for patch occupancy dynamics
Martin, Julien; Nichols, James D.; McIntyre, Carol L.; Ferraz, Goncalo; Hines, James E.
2009-01-01
Perturbation analysis is a powerful tool to study population and community dynamics. This article describes expressions for sensitivity metrics reflecting changes in equilibrium occupancy resulting from small changes in the vital rates of patch occupancy dynamics (i.e., probabilities of local patch colonization and extinction). We illustrate our approach with a case study of occupancy dynamics of Golden Eagle (Aquila chrysaetos) nesting territories. Examination of the hypothesis of system equilibrium suggests that the system satisfies equilibrium conditions. Estimates of vital rates obtained using patch occupancy models are used to estimate equilibrium patch occupancy of eagles. We then compute estimates of sensitivity metrics and discuss their implications for eagle population ecology and management. Finally, we discuss the intuition underlying our sensitivity metrics and then provide examples of ecological questions that can be addressed using perturbation analyses. For instance, the sensitivity metrics lead to predictions about the relative importance of local colonization and local extinction probabilities in influencing equilibrium occupancy for rare and common species.
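For the standard patch-occupancy dynamics psi(t+1) = psi(t)(1 - epsilon) + (1 - psi(t)) * gamma, equilibrium occupancy is psi* = gamma / (gamma + epsilon), and sensitivity metrics of the kind described can be read off as its partial derivatives with respect to the colonization and extinction probabilities. A minimal sketch under these textbook dynamics (the article's exact metrics may be richer):

```python
def occupancy_sensitivities(gamma, eps):
    """Equilibrium occupancy psi* = gamma / (gamma + eps) for the dynamics
    psi[t+1] = psi[t] * (1 - eps) + (1 - psi[t]) * gamma, plus its partial
    derivatives with respect to colonization (gamma) and extinction (eps)."""
    denom = (gamma + eps) ** 2
    return gamma / (gamma + eps), eps / denom, -gamma / denom

# A rare species (low colonization) versus a common one: the derivatives show
# where a unit change in each vital rate moves equilibrium occupancy most.
print(occupancy_sensitivities(gamma=0.05, eps=0.20))
print(occupancy_sensitivities(gamma=0.40, eps=0.05))
```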
NASA Astrophysics Data System (ADS)
Ayyaswamy, Arivarasan; Ganapathy, Sasikala; Alsalme, Ali; Alghamdi, Abdulaziz; Ramasamy, Jayavel
2015-12-01
Zinc and sulfur alloyed CdTe quantum dots (QDs) sensitized TiO2 photoelectrodes have been fabricated for quantum dot sensitized solar cells. Alloyed CdTe QDs were prepared in the aqueous phase using mercaptosuccinic acid (MSA) as a capping agent. The influence of co-doping on the structural properties of the CdTe QDs was studied by XRD analysis. The enhanced optical absorption of the alloyed CdTe QDs was studied using UV-vis absorption and fluorescence emission spectra. The capping of MSA molecules over the CdTe QDs was confirmed by FTIR and XPS analyses. Thermogravimetric analysis confirms that the prepared QDs were thermally stable up to 600 °C. The photovoltaic performance of the alloyed CdTe QDs sensitized TiO2 photoelectrodes was studied using J-V characteristics under illumination at 1 Sun intensity. The results show the highest photoconversion efficiency, η = 1.21%, for the 5% Zn & S alloyed CdTe QDs.
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Nafisi Moghadam, Reza; Amlelshahbaz, Amir Pasha; Namiranian, Nasim; Sobhan-Ardekani, Mohammad; Emami-Meybodi, Mahmood; Dehghan, Ali; Rahmanian, Masoud; Razavi-Ratki, Seid Kazem
2017-12-28
Objective: Ultrasonography (US) and parathyroid scintigraphy (PS) with 99mTc-MIBI are common methods for preoperative localization of parathyroid adenomas, but discrepancies exist with regard to their diagnostic accuracy. The aim of the study was to compare PS and US for localization of parathyroid adenoma with a systematic review and meta-analysis of the literature. Methods: PubMed, Scopus (EMbase), Web of Science and the reference lists of all included studies were searched up to 1st January 2016. The search strategy was according to PICO characteristics. Heterogeneity between the studies was assessed at P < 0.1. Pooled estimates of sensitivity, specificity and positive predictive value of SPECT and ultrasonography, with 99% confidence intervals (CIs), were obtained by pooling available data. Data analysis was performed using Meta-DiSc software (version 1.4). Results: Among 188 studies, and after deletion of duplicated studies (75), a total of 113 titles and abstracts were studied. From these, 12 studies were selected. The meta-analysis determined a pooled sensitivity for scintigraphy of 83% [99% confidence interval (CI) 96.358-97.412] and for ultrasonography of 80% [99% confidence interval (CI) 76-83]. Similar results for specificity were also obtained for both approaches. Conclusion: According to this meta-analysis, there were no significant differences between the two methods in terms of sensitivity and specificity, with overlapping 99% confidence intervals, and the features of the two methods are similar.
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
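A bare-bones PSA of the kind audited above: draw parameter values from assigned distributions, propagate them to incremental costs and QALYs, and summarize decision uncertainty at a willingness-to-pay threshold. All distributions here are hypothetical, and, deliberately echoing the review's criticism, this naive version ignores correlations between parameters; a joint (e.g., multivariate normal) distribution would be needed to include them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
# Hypothetical parameter distributions (not from any NICE appraisal):
d_cost = rng.gamma(shape=4.0, scale=500.0, size=n)   # incremental cost, GBP
d_qaly = rng.normal(loc=0.12, scale=0.05, size=n)    # incremental QALYs

icer = d_cost.mean() / d_qaly.mean()                 # ratio of means
threshold = 20_000                                   # GBP per QALY
net_benefit = threshold * d_qaly - d_cost
prob_ce = (net_benefit > 0).mean()                   # one point on the CEAC
print(f"ICER ~ {icer:,.0f} GBP/QALY; P(cost-effective at 20k) = {prob_ce:.2f}")
```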
2014-01-01
Background The introduction of cisplatin has improved the long-term survival rate in osteosarcoma patients. However, some patients are intrinsically resistant to cisplatin. This study reports that the activation of Notch1 is positively correlated with cisplatin sensitivity, as evidenced by both clinical and in vitro data. Results In this study, a total of 8 osteosarcoma specimens were enrolled and divided into two groups according to their cancer chemotherapeutic drug sensitivity examination results. The relationship between Notch1 expression and the cisplatin sensitivity of osteosarcoma patients was detected by immunohistochemistry and semi-quantitative analysis. Subsequently, two typical osteosarcoma cell lines, Saos-2 and MG63, were selected to study the changes in cisplatin sensitivity upon up-regulating (NICD1 plasmid transfection) or decreasing (gamma-secretase complex inhibitor DAPT) the activation state of the Notch1 signaling pathway. Our results showed a significant correlation between the expression of Notch1 and cisplatin sensitivity in patient specimens. In vitro, Saos-2, with higher expression of Notch1, had significantly better cisplatin sensitivity than MG63, whose Notch1 level was relatively lower. By targeted regulation in vitro, the cisplatin sensitivity of Saos-2 and MG63 significantly increased after activation of the Notch1 signaling pathway, and vice versa. Further mechanistic investigation revealed that activation/inhibition of Notch1 sensitized/desensitized cisplatin-induced apoptosis, which probably depended on changes in the activity of the Caspase family, including Caspase 3, Caspase 8 and Caspase 9, in these cells. Conclusions Our data clearly demonstrate that Notch1 is critical for cisplatin sensitivity in osteosarcoma. It can be used as a molecular marker and regulator for cisplatin sensitivity in osteosarcoma patients. PMID:24894297
Huang, Jiacong; Gao, Junfeng; Yan, Renhua
2016-08-15
Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource that help water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe the P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, sensitivity analyses were carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The sensitivity analysis results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of model coupling and its application in investigating potential strategies to support pollution control in polder systems.
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees have historically been managed for crop pollination; however, recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not provide the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; nevertheless, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
NASA Astrophysics Data System (ADS)
Nepal, S.
2016-12-01
The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium-range flows (rising and recession limbs). The efficiency results according to both Nash-Sutcliffe and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for the Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and less sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. Generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside the uncertainty range, especially during flood peaks, and more so in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This indicates the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into the hydrological system dynamics and provide much needed information to support water resources planning and management.
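Both efficiency measures used above are short computations; log-transforming the flows before applying Nash-Sutcliffe (the LNS variant) weights low flows more heavily. A minimal sketch with made-up flows, not the Koshi series:

```python
import numpy as np

def nse(obs, sim, log=False):
    """Nash-Sutcliffe efficiency; log=True gives the low-flow-weighted LNS."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    if log:
        obs, sim = np.log(obs), np.log(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative daily flows in m^3/s (not Dudh Koshi or Tamor data):
obs = np.array([12.0, 15.0, 40.0, 120.0, 60.0, 25.0, 14.0])
sim = np.array([11.0, 16.0, 35.0, 110.0, 66.0, 27.0, 15.0])
print(nse(obs, sim), nse(obs, sim, log=True))
```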
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We modify the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns rising up to several kilometers above sea level and lasted some hours. Sensitivity analyses and uncertainty estimation results help us to identify the measurements that volcanologists should perform during volcanic crises to reduce the model uncertainty.
Comparison of the efficiency control of mycotoxins by some optical immune biosensors
NASA Astrophysics Data System (ADS)
Slyshyk, N. F.; Starodub, N. F.
2013-11-01
The efficiency of patulin control was compared for optical biosensors based on surface plasmon resonance (SPR) and nano-porous silicon (nPS). In the latter case, the intensity of the immune reaction was registered by measuring the level of chemiluminescence (ChL) or the photocurrent of the nPS. The sensitivity of mycotoxin determination by the first type of immune biosensor was 0.05-10 mg/L. Approximately the same sensitivity, as well as the same overall analysis time, was demonstrated by the immune biosensor based on nPS. Nevertheless, the latter type of biosensor was technically simpler and its analysis cost lower. The nPS-based immune biosensor is therefore recommended for wide screening applications, and the SPR-based one for additional control or verification of preliminary results. In this article, special attention was given to the conditions of sample preparation for analysis, in particular mycotoxin extraction from potato and some juices. Moreover, the efficiency of the above-mentioned immune biosensors was compared with the traditional ELISA method of mycotoxin determination. From the investigation and discussion of the obtained data, it was concluded that both types of immune biosensor are able to fulfill the demands of modern practice with respect to the sensitivity, rapidity, simplicity and low cost of analysis.
A fiber-optic water flow sensor based on laser-heated silicon Fabry-Pérot cavity
NASA Astrophysics Data System (ADS)
Liu, Guigen; Sheng, Qiwen; Resende Lisboa Piassetta, Geraldo; Hou, Weilin; Han, Ming
2016-05-01
A hot-wire fiber-optic water flow sensor based on a laser-heated silicon Fabry-Pérot interferometer (FPI) is proposed and demonstrated in this paper. The operation of the sensor is based on the convective heat loss to water from a heated silicon FPI attached to the cleaved end face of a piece of single-mode fiber. The flow-induced change in temperature is demodulated from the spectral shifts of the reflection fringes. An analytical model based on FPI theory and heat transfer analysis has been developed for performance analysis. Numerical simulations based on finite element analysis have been conducted. The analytical and numerical results agree with each other in predicting the behavior of the sensor. Experiments have also been carried out to demonstrate the sensing principle and verify the theoretical analysis. The investigations suggest that the sensitivity at low flow rates is much larger than that at high flow rates, and that the sensitivity can be easily improved by increasing the heating laser power. Experimental results show that an average sensitivity of 52.4 nm/(m/s) for the flow speed range of 1.5 mm/s to 12 mm/s was obtained with a heating power of ~12 mW, suggesting a resolution of ~1 μm/s assuming a wavelength resolution of 0.05 pm.
System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO
NASA Technical Reports Server (NTRS)
Olds, John R.
1994-01-01
This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
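For orientation, the calculus-based core of SSA is the solution of the global sensitivity equations (GSE), which assemble total system derivatives from the local partial sensitivities of the coupled disciplines. Below is a minimal two-discipline sketch with illustrative partial derivatives, not the propulsion/performance/weights couplings of the vehicle study:

    import numpy as np

    # Coupled analyses: y1 = f1(x, y2), y2 = f2(x, y1).
    # Local partial sensitivities at the converged point (illustrative values).
    df1_dy2 = np.array([[0.3]])   # d f1 / d y2
    df2_dy1 = np.array([[0.5]])   # d f2 / d y1
    df1_dx  = np.array([[1.0]])   # d f1 / d x
    df2_dx  = np.array([[2.0]])   # d f2 / d x

    # Global sensitivity equations:
    # [ I         -df1/dy2 ] [dy1/dx]   [df1/dx]
    # [ -df2/dy1   I       ] [dy2/dx] = [df2/dx]
    I = np.eye(1)
    lhs = np.block([[I, -df1_dy2], [-df2_dy1, I]])
    rhs = np.vstack([df1_dx, df2_dx])
    total = np.linalg.solve(lhs, rhs)
    print("dy1/dx =", total[0, 0], " dy2/dx =", total[1, 0])

Because the GSE reuse a converged system analysis, each system-level gradient costs only local partials plus one small linear solve, which underlies the efficiency advantage over direct optimization noted above.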
Sensitivity Equation Derivation for Transient Heat Transfer Problems
NASA Technical Reports Server (NTRS)
Hou, Gene; Chien, Ta-Cheng; Sheen, Jeenson
2004-01-01
The focus of the paper is on the derivation of sensitivity equations for transient heat transfer problems modeled by different discretization processes. Two examples are used in this study to facilitate the discussion. The first example is a coupled, transient heat transfer problem that simulates the press molding process in the fabrication of composite laminates. Its state equations are discretized into standard h-version finite elements and solved by a multiple-step, predictor-corrector scheme. The sensitivity analysis results based upon the direct and adjoint variable approaches will be presented. The second example is a nonlinear transient heat transfer problem solved by a p-version time-discontinuous Galerkin method. The resulting matrix equation of the state equation is simply of the form Ax = b, representing a single-step, time-marching scheme. A direct differentiation approach will be used to compute the thermal sensitivities of a sample 2D problem.
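For the single-step form above, direct differentiation amounts to differentiating the state equation with respect to a design parameter p while reusing the already-factored system matrix (a standard result, stated here for orientation rather than taken from the paper):

\[ A x = b \quad \Rightarrow \quad A \frac{dx}{dp} = \frac{db}{dp} - \frac{dA}{dp}\, x , \]

so each sensitivity solve shares the left-hand-side matrix A with the state solution, and only the pseudo-load on the right-hand side changes.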
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. The results demonstrate that, with the optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
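A minimal sketch of the exhaustive search idea: assuming a sensitivity matrix S that maps fuel cell health parameters to sensor readings, the snippet scores each candidate subset by the smallest singular value of its rows, used here as a stand-in for how well the subset jointly resolves all health parameters (the paper's actual metric, which also weighs noise resistance, is not reproduced).

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.normal(size=(8, 3))  # 8 candidate sensors x 3 health parameters

    def score(rows):
        # Smallest singular value of the selected sub-matrix: a proxy for how
        # well the chosen sensors jointly resolve all health parameters.
        return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

    k = 3  # target sensor-set size
    best = max(combinations(range(S.shape[0]), k), key=score)
    print("best sensor subset:", best, "score:", round(score(best), 3))

The brute-force search is exact but combinatorial in the number of sensors, which is why cheaper heuristics such as the largest-gap method are attractive for larger sensor pools.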
White, Christopher R H; Doherty, Dorota A; Cannon, Jeffrey W; Kohan, Rolland; Newnham, John P; Pennell, Craig E
2016-07-01
There is an increasing body of literature supporting the introduction of universal umbilical cord blood gas analysis (UCBGA) in all maternity units. A significant impediment to its introduction is the perceived expense of implementation and the associated ongoing costs. Consequently, this study set out to conduct the first cost-effectiveness analysis of introducing universal UCBGA. The analysis was based on 42,100 consecutive deliveries at ≥23 weeks of gestation at a single tertiary obstetric unit. Within 4 years of UCBGA's introduction there was a 45% reduction in term special care nursery (SCN) admissions >2499 g. Incurred costs included the initial and ongoing costs associated with universal UCBGA; averted costs were based on local diagnosis-related grouping costs for the reduction in term SCN admissions. Incremental cost-effectiveness ratio (ICER) and sensitivity analysis results were reported. Under the base-case scenario, the adoption of universal UCBGA was less costly and more effective than selective UCBGA over 4 years, resulting in a saving of AU$641,532 while averting 376 SCN admissions. Sensitivity analysis showed that UCBGA was cost-effective in 51.8%, 83.3%, 99.6% and 100% of simulations in years 1, 2, 3 and 4, respectively. These conclusions were not sensitive to wide, clinically plausible variations in parameter values for neonatal intensive care unit and SCN admissions, the magnitude of averted SCN admissions, cumulative delivery numbers, and SCN admission costs. Universal UCBGA is associated with significant initial and ongoing costs; however, the potential averted costs (due to reduced SCN admissions) exceed the incurred costs in most scenarios.
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and to test the performance of the model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians, with the daily variance prescribed according to the recorded instrument error. All model parameters were assumed to have uninformative priors, with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. Four other parameters, nitrogen use efficiency (NUE), the base rate for maintenance respiration (BR_MR), the growth respiration fraction (RG_FRAC), and allocation to the plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the remaining parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters and show good predictive capabilities for the calibrated model. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, which indicate the parameters most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
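The calibration step can be illustrated with a bare-bones Metropolis sampler under the stated assumptions (uniform prior bounds, independent Gaussian daily discrepancies with prescribed variance). The one-parameter exponential toy model below is purely illustrative and is not the ecosystem carbon model:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "model" and synthetic daily observations with known noise.
    def model(p, t):
        return p * np.exp(-t / 50.0)

    t = np.arange(365.0)
    sigma = 0.1                        # prescribed daily observation error
    obs = model(2.0, t) + rng.normal(0.0, sigma, t.size)

    def log_post(p, lo=0.0, hi=10.0):
        if not (lo < p < hi):          # uniform (uninformative) prior bounds
            return -np.inf
        r = obs - model(p, t)
        return -0.5 * np.sum((r / sigma) ** 2)

    # Metropolis random walk.
    chain, p = [], 5.0
    lp = log_post(p)
    for _ in range(5000):
        q = p + rng.normal(0.0, 0.05)
        lq = log_post(q)
        if np.log(rng.uniform()) < lq - lp:
            p, lp = q, lq
        chain.append(p)

    print("posterior mean ~", round(np.mean(chain[1000:]), 3))

A parameter that barely changes the likelihood, as reported for LEAFALL, would leave such a chain wandering over its prior range, which is exactly the prior-to-posterior comparison described above.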
Ford, W; King, K; Williams, M; Williams, J; Fausey, N
2015-07-01
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on dissolved reactive phosphorus (DRP) loadings from agricultural fields. However, tools that simulate both surface and subsurface DRP pathways are limited and have not been robustly evaluated in tile-drained landscapes. The objectives of this study were to test the ability of the Agricultural Policy/Environmental eXtender (APEX), a widely used field-scale model, to simulate surface and tile P loadings over management, hydrologic, biologic, tile, and soil gradients and to better understand the behavior of P delivery at the edge-of-field in tile-drained midwestern landscapes. To do this, a global, variance-based sensitivity analysis was performed, and model outputs were compared with measured P loads obtained from 14 surface and subsurface edge-of-field sites across central and northwestern Ohio. Results of the sensitivity analysis showed that response variables for DRP were highly sensitive to coupled interactions between presumed important parameters, suggesting nonlinearity of DRP delivery at the edge-of-field. Comparison of model results to edge-of-field data showcased the ability of APEX to simulate surface and subsurface runoff and the associated DRP loading at monthly to annual timescales; however, some high DRP concentrations and fluxes were not reflected in the model, suggesting the presence of preferential flow. Results from this study provide new insights into baseline tile DRP loadings that exceed thresholds for algal proliferation. Further, negative feedbacks between surface and subsurface DRP delivery suggest caution is needed when implementing DRP-based best management practices designed for a specific flow pathway. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
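As a sketch of the variance-based machinery, the snippet below estimates first-order Sobol' indices with the standard pick-and-freeze estimator on a toy three-parameter function containing an interaction term; the closed-form response stands in for an APEX output such as DRP load:

    import numpy as np

    rng = np.random.default_rng(7)
    n, d = 20_000, 3

    def f(x):
        # Illustrative nonlinear response with an interaction term,
        # standing in for an APEX output such as DRP load.
        return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace column i of A with column i of B
        S1 = np.mean(fB * (f(ABi) - fA)) / var   # first-order index
        print(f"S{i + 1} ~ {S1:.2f}")

The gap between such first-order indices and total-order indices (not computed here) is what flags the coupled parameter interactions, and hence the nonlinearity of DRP delivery, reported above.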
Improved Sensitivity for Molecular Detection of Bacterial and Candida Infections in Blood
Bacconi, Andrea; Richmond, Gregory S.; Baroldi, Michelle A.; Laffler, Thomas G.; Blyn, Lawrence B.; Carolan, Heather E.; Frinder, Mark R.; Toleno, Donna M.; Metzgar, David; Gutierrez, Jose R.; Massire, Christian; Rounds, Megan; Kennel, Natalie J.; Rothman, Richard E.; Peterson, Stephen; Carroll, Karen C.; Wakefield, Teresa; Ecker, David J.
2014-01-01
The rapid identification of bacteria and fungi directly from the blood of patients with suspected bloodstream infections aids in diagnosis and guides treatment decisions. The development of an automated, rapid, and sensitive molecular technology capable of detecting the diverse agents of such infections at low titers has been challenging, due in part to the high background of genomic DNA in blood. PCR followed by electrospray ionization mass spectrometry (PCR/ESI-MS) allows for the rapid and accurate identification of microorganisms but with a sensitivity of about 50% compared to that of culture when using 1-ml whole-blood specimens. Here, we describe a new integrated specimen preparation technology that substantially improves the sensitivity of PCR/ESI-MS analysis. An efficient lysis method and automated DNA purification system were designed for processing 5 ml of whole blood. In addition, PCR amplification formulations were optimized to tolerate high levels of human DNA. An analysis of 331 specimens collected from patients with suspected bloodstream infections resulted in 35 PCR/ESI-MS-positive specimens (10.6%) compared to 18 positive by culture (5.4%). PCR/ESI-MS was 83% sensitive and 94% specific compared to culture. Replicate PCR/ESI-MS testing from a second aliquot of the PCR/ESI-MS-positive/culture-negative specimens corroborated the initial findings in most cases, resulting in increased sensitivity (91%) and specificity (99%) when confirmed detections were considered true positives. The integrated solution described here has the potential to provide rapid detection and identification of organisms responsible for bloodstream infections. PMID:24951806
Caglar, Çagatay; Gul, Adem; Batur, Muhammed; Yasar, Tekin
2017-01-01
To compare the sensitivity and specificity of Moorfields regression analysis (MRA) and the glaucoma probability score (GPS) between healthy and glaucomatous eyes with the Heidelberg Retina Tomograph 3 (HRT-3). The study included 120 eyes of 75 glaucoma patients and 138 eyes of 73 normal subjects, for a total of 258 eyes of 148 individuals. All measurements were performed with the HRT-3. Diagnostic test criteria (sensitivity, specificity, etc.) were used to evaluate how efficiently the GPS and MRA algorithms in the HRT-3 discriminated between the glaucoma and control groups. The GPS showed 88% sensitivity and 66% specificity, whereas MRA had 71.5% sensitivity and 82.5% specificity. There was 71% agreement between the final results of MRA and GPS in the glaucoma group; excluding borderline patients from both analyses resulted in 91.6% agreement. In the control group, the level of agreement between MRA and GPS was 64% including borderline patients and 84.1% after excluding borderline patients. The accuracy rate was 92% for MRA and 91% for GPS in the glaucoma group excluding borderline patients; the difference was not statistically significant. In both cases, agreement between MRA and GPS was higher in the glaucoma group. We found that both sensitivity and specificity increased with disc size for MRA, while for GPS the sensitivity increased and the specificity decreased with larger disc sizes. The HRT is able to quantify and clearly reveal structural changes in the optic nerve head (ONH) and retinal nerve fiber layer (RNFL) in glaucoma.
Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.
Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong
2018-04-01
Numerous studies have assessed the diagnostic value of the serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, the results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of the s-p53 antibody in CRC. Databases including PubMed and EMBASE were systematically searched for studies regarding s-p53 antibody diagnosis in CRC published on or prior to 31 July 2016. The quality of all the included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. Pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were analyzed, and overall accuracy was measured using the diagnostic odds ratio (DOR) and area under the curve (AUC). Publication bias and heterogeneity were also assessed. A total of 11 trials enrolling a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and the pooled specificity was 0.93 (95% CI, 0.92-0.94). The results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), an NLR of 0.78 (95% CI, 0.71-0.85) and a DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. The meta-analysis data indicated that the s-p53 antibody possesses potential diagnostic value for CRC; however, its discriminative power is somewhat limited by the low sensitivity.
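For reference, the summary measures above are linked by the standard definitions

\[ \text{PLR} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad \text{NLR} = \frac{1 - \text{sensitivity}}{\text{specificity}}, \qquad \text{DOR} = \frac{\text{PLR}}{\text{NLR}}, \]

although pooled PLR, NLR, and DOR in a meta-analysis are estimated from the individual studies, so they need not equal the ratios computed from the separately pooled sensitivity and specificity.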
Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi
2018-04-01
This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of the simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implant. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, the micro-threads had depths of 0.25 to 0.3 mm and pitches of 0.27 to 0.33 mm, and the V-shaped threads had depths of 0.405 to 0.495 mm and pitches of 0.66 to 0.8 mm. The optimized depth and pitch were 0.307 and 0.286 mm for the micro-threads and 0.405 and 0.808 mm for the V-shaped threads, respectively. Based on the sensitivity analysis results, the parameters with the greatest effect on stress distribution in this design were the depth and pitch of the micro-threads. The optimal implant design therefore has micro-threads with a depth of 0.307 mm and a pitch of 0.286 mm in the upper area, and V-shaped threads with a depth of 0.405 mm and a pitch of 0.808 mm in the rest of the body. These results indicate that the micro-thread parameters have a greater effect on stress and strain values.
Chen, Qianqian; Xie, Qian; Zhao, Min; Chen, Bin; Gao, Shi; Zhang, Haishan; Xing, Hua; Ma, Qingjie
2015-01-01
To compare the diagnostic value of visual and semi-quantitative analysis of technetium-99m-polyethylene glycol-4-arginine-glycine-aspartic acid ((99m)Tc-3PRGD2) scintimammography (SMG) for better differentiation of benign from malignant breast masses, and to investigate the incremental role of the semi-quantitative index of SMG. A total of 72 patients with breast lesions were included in the study. Technetium-99m-3PRGD2 SMG was performed with single photon emission computed tomography (SPET) 60 min after intravenous injection of 749 ± 86 MBq of the radiotracer. Images were evaluated by visual interpretation and by the semi-quantitative index of the tumor to non-tumor (T/N) ratio, and the findings were compared with pathology results. Receiver operating characteristic (ROC) curve analyses were performed to determine the optimal visual grade, to calculate cut-off values of the semi-quantitative index, and to compare visual and semi-quantitative diagnostic values. Among the 72 patients, 89 lesions were confirmed by histopathology after fine needle aspiration biopsy or surgery: 48 malignant and 41 benign. The mean T/N ratio of (99m)Tc-3PRGD2 SMG in malignant lesions was significantly higher than that in benign lesions (P<0.05). When visual grade 2 was used as the cut-off value for the detection of primary breast cancer, the sensitivity, specificity and accuracy were 81.3%, 70.7%, and 76.4%, respectively. When a T/N ratio of 2.01 was used as the cut-off value, the sensitivity, specificity and accuracy were 79.2%, 75.6%, and 77.5%, respectively. According to the ROC analysis, the area under the curve for the semi-quantitative analysis was higher than that for the visual analysis, but the difference was not statistically significant (P=0.372). Compared with visual or semi-quantitative analysis alone, visual analysis combined with semi-quantitative analysis gave higher sensitivity, specificity and accuracy in diagnosing primary breast cancer: 87.5%, 82.9%, and 85.4%, respectively, with an area under the curve of 0.891. The results of the present study suggest that the semi-quantitative and visual analyses performed statistically similarly, but that the semi-quantitative analysis provided incremental value additive to visual analysis of (99m)Tc-3PRGD2 SMG for the detection of breast cancer. In particular, when the tumor was located in the medial part of the breast, the semi-quantitative analysis gave better diagnostic results.
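Cut-off values such as the T/N ratio of 2.01 are typically chosen by sweeping candidate thresholds along the ROC curve and reading off sensitivity and specificity at each point. Below is a minimal sketch with synthetic T/N ratios (not the study data); Youden's J is shown as one common selection criterion, though the study's exact criterion is not stated:

    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic T/N ratios: malignant lesions tend to have higher uptake.
    tn_malignant = rng.normal(2.6, 0.6, 48)
    tn_benign = rng.normal(1.7, 0.5, 41)

    for cut in (1.8, 2.0, 2.2):
        sens = np.mean(tn_malignant >= cut)   # true-positive rate
        spec = np.mean(tn_benign < cut)       # true-negative rate
        youden = sens + spec - 1              # Youden's J, one common optimality index
        print(f"cut={cut:.1f}  sens={sens:.2f}  spec={spec:.2f}  J={youden:.2f}")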
Probabilistic Sensitivity Analysis of Fretting Fatigue (Preprint)
2009-04-01
Golden, Patrick J.; Millwater, Harry R.; ...
Air Force Research Laboratory, Wright-Patterson AFB, OH 45433
Report No. AFRL-RX-WP-TP-2009-4091
Thornton, C G; Cranfield, M R; MacLellan, K M; Brink, T L; Strandberg, J D; Carlin, E A; Torrelles, J B; Maslow, J N; Hasson, J L; Heyl, D M; Sarro, S J; Chatterjee, D; Passen, S
1999-03-01
Mycobacterium avium is the causative agent of the avian mycobacteriosis commonly known as avian tuberculosis (ATB). This infection causes disseminated disease, is difficult to diagnose, and is of serious concern because it causes significant mortality in birds. A new method was developed for processing specimens for an antemortem screening test for ATB. This novel method uses the zwitterionic detergent C18-carboxypropylbetaine (CB-18). Blood, bone marrow, bursa, and fecal specimens from 28 ducks and swabs of 20 lesions were processed with CB-18 for analysis by smear, culture, and polymerase chain reaction (PCR). Postmortem examination confirmed nine of these birds as either positive or highly suspect for disseminated disease. The sensitivities of smear, culture, and PCR, relative to postmortem analysis and independent of specimen type, were 44.4%, 88.9%, and 100%, respectively, and the specificities were 84.2%, 57.9%, and 15.8%, respectively. Reductions in specificity were due primarily to results among fecal specimens. However, these results were clustered among a subset of birds, suggesting that these tests actually identified birds in early stages of the disease. Restriction fragment length polymorphism mapping identified one strain of M. avium (serotype 1) that was isolated from lesions, bursa, bone marrow, blood, and feces of all but three of the culture-positive birds. In birds with confirmed disease, blood had the lowest sensitivity and the highest specificity by all diagnostic methods. Swabs of lesions provided the highest sensitivity by smear and culture (33.3% and 77.8%, respectively), whereas fecal specimens had the highest sensitivity by PCR (77.8%). The results of this study indicate that processing fecal specimens with CB-18, followed by PCR analysis, may provide a valuable first step for monitoring the presence of ATB in birds.
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hayeon, E-mail: kimh2@upmc.edu; Gill, Beant; Beriwal, Sushil
Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model with a 1-month cycle over a lifetime horizon. Transition probabilities, quality-of-life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio (ICER), with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In the base-case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed to be the same for both strategies (25 months). SBRT cost $8202 more than RFA while gaining 0.05 QALYs, resulting in an ICER of $164,660 per QALY gained. In 1-way sensitivity analyses, the results were most sensitive to variation of the median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if it was used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective. Ideally, these results should be confirmed with prospective comparative data.
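The base-case figures allow a quick consistency check of the ICER:

\[ \text{ICER} = \frac{\Delta C}{\Delta E} = \frac{\$8202}{0.05\ \text{QALY}} \approx \$164{,}040\ \text{per QALY gained}, \]

which matches the reported $164,660 up to rounding of the incremental QALYs and, either way, exceeds the $100,000-per-QALY willingness-to-pay threshold used in the base case.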