Prediction of skin sensitization potency using machine learning approaches.
Zang, Qingda; Paris, Michael; Lehmann, David M; Bell, Shannon; Kleinstreuer, Nicole; Allen, David; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Strickland, Judy
2017-07-01
The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together, while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding an accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
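A minimal sketch of the two-tiered strategy this abstract describes, assuming scikit-learn; the features, labels, and data below are invented placeholders, not the study's actual assay inputs:

```python
# Two-tiered potency classification: tier 1 separates sensitizers from
# non-sensitizers; tier 2 splits sensitizers into strong vs. weak.
# Toy sketch only -- features and labels are random stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))            # e.g., 3 assay readouts + 6 physchem properties
is_sens = rng.integers(0, 2, size=120)   # tier-1 labels: 1 = sensitizer
strength = rng.integers(0, 2, size=120)  # tier-2 labels among sensitizers: 1 = strong

tier1 = SVC(kernel="rbf").fit(X, is_sens)
mask = is_sens == 1
tier2 = SVC(kernel="rbf").fit(X[mask], strength[mask])

def predict_potency(x):
    """Return 'non-sensitizer', 'weak', or 'strong' for one substance."""
    x = np.atleast_2d(x)
    if tier1.predict(x)[0] == 0:
        return "non-sensitizer"
    return "strong" if tier2.predict(x)[0] == 1 else "weak"

print(predict_potency(X[0]))
```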
Occupancy Modeling for Improved Accuracy and Understanding of Pathogen Prevalence and Dynamics
Colvin, Michael E.; Peterson, James T.; Kent, Michael L.; Schreck, Carl B.
2015-01-01
Most pathogen detection tests are imperfect, with a sensitivity < 100%, thereby resulting in the potential for a false negative, where a pathogen is present but not detected. False negatives in a sample inflate the number of non-detections, negatively biasing estimates of pathogen prevalence. Histological examination of tissues as a diagnostic test can be advantageous because multiple pathogens can be examined, and it provides important information on associated pathological changes to the host. However, it is usually less sensitive than molecular or microbiological tests for specific pathogens. Our study objectives were to 1) develop a hierarchical occupancy model to examine pathogen prevalence in spring Chinook salmon Oncorhynchus tshawytscha and their distribution among host tissues, 2) use the model to estimate pathogen-specific test sensitivities and infection rates, and 3) illustrate the effect of using replicate within-host sampling on the sample sizes required to detect a pathogen. We examined histological sections of replicate tissue samples from spring Chinook salmon O. tshawytscha collected after spawning for common pathogens seen in this population: Apophallus/echinostome metacercariae, Parvicapsula minibicornis, Nanophyetus salmincola metacercariae, and Renibacterium salmoninarum. A hierarchical occupancy model was developed to estimate pathogen- and tissue-specific test sensitivities and to obtain unbiased estimates of host- and organ-level infection rates. Model-estimated sensitivities and host- and organ-level infection rates varied among pathogens, and model-estimated infection rates were higher than prevalence unadjusted for test sensitivity, confirming that unadjusted prevalence was negatively biased. The modeling approach provides an analytical framework for using hierarchically structured pathogen detection data from lower-sensitivity diagnostic tests, such as histology, to obtain unbiased pathogen prevalence estimates with associated uncertainties. Accounting for test sensitivity using within-host replicate samples also required fewer individual fish to be sampled. This approach is useful for evaluating pathogen or microbe community dynamics when test sensitivity is <100%. PMID:25738709
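To make the replicate-sampling point concrete, here is a back-of-envelope sketch (with invented prevalence and sensitivity values, not the study's estimates) of how replicate within-host samples raise detection probability and shrink the number of fish needed:

```python
# Why replicate within-host samples reduce required sample sizes when test
# sensitivity < 100%. All numbers are illustrative assumptions.
import math

def detect_prob(p, k):
    """P(>=1 positive among k replicate samples of an infected host),
    assuming independent replicates with per-sample sensitivity p."""
    return 1.0 - (1.0 - p) ** k

def hosts_needed(psi, p, k, conf=0.95):
    """Smallest n with P(detecting the pathogen in >=1 of n hosts) >= conf,
    given prevalence psi and per-sample sensitivity p with k replicates."""
    p_host = psi * detect_prob(p, k)
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - p_host))

for k in (1, 2, 4):
    print(k, hosts_needed(psi=0.3, p=0.5, k=k))   # 19, 12, 10 hosts

# naive (apparent) prevalence is biased low by the factor detect_prob(p, k):
print(0.3 * detect_prob(0.5, 1))  # 0.15 observed vs. 0.30 true
```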
The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...
FABRIC FILTER MODEL SENSITIVITY ANALYSIS
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was bas...
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence, since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random-effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities, or among the test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the two proposed approaches by reanalyzing data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer, and with a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify the best-fitting model for making statistical inference. In summary, the proposed trivariate random-effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
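A simulation sketch of the trivariate structure the paper describes: study-level (prevalence, sensitivity, specificity) drawn from a correlated normal on the logit scale, with binomial counts on top. All parameter values are invented for illustration:

```python
# Trivariate random-effects structure for diagnostic meta-analysis.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
mu = np.array([-1.0, 1.5, 2.0])        # logit prevalence, sensitivity, specificity
sd = np.array([0.5, 0.6, 0.4])         # between-study SDs (assumed)
corr = np.array([[ 1.0,  0.4, -0.3],
                 [ 0.4,  1.0, -0.5],
                 [-0.3, -0.5,  1.0]])
cov = np.outer(sd, sd) * corr

theta = rng.multivariate_normal(mu, cov, size=20)  # 20 studies
prev, sens, spec = expit(theta).T                  # back-transform to probabilities

# study-level counts then follow binomials with these probabilities:
n = 200
diseased = rng.binomial(n, prev)
tp = rng.binomial(diseased, sens)
tn = rng.binomial(n - diseased, spec)
print(prev.mean().round(2), sens.mean().round(2), spec.mean().round(2))
```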
Multivariate Models for Prediction of Human Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2016-01-01
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
Zhang, Chen; Li, Ming
2012-02-01
Repeated administration of haloperidol (HAL) and olanzapine (OLZ) causes a progressively enhanced disruption of the conditioned avoidance response (CAR) and a progressively enhanced inhibition of phencyclidine (PCP)-induced hyperlocomotion in rats (termed antipsychotic sensitization). Both actions are thought to reflect intrinsic antipsychotic activity. The present study examined the extent to which antipsychotic-induced sensitization in one model (e.g. CAR) can be transferred or maintained in another (e.g. PCP hyperlocomotion) as a means of investigating the contextual and behavioral controls of antipsychotic sensitization. Well-trained male Sprague-Dawley rats were first repeatedly tested in the CAR or the PCP (3.2 mg/kg, subcutaneously) hyperlocomotion model under HAL or OLZ for 5 consecutive days. Then they were switched to the other model and tested for the expression of sensitization. Finally, all rats were switched back to the original model and retested for the expression of sensitization. Repeated HAL or OLZ treatment progressively disrupted avoidance responding and decreased PCP-induced hyperlocomotion, indicating a robust sensitization. When tested in a different model, rats previously treated with HAL or OLZ did not show a stronger inhibition of CAR or of PCP-induced hyperlocomotion than those treated with these drugs for the first time; however, they did show such an effect when tested in the original model in which they received repeated antipsychotic treatment. These findings suggest that the expression of antipsychotic sensitization is strongly influenced by the testing environment and/or selected behavioral response under certain experimental conditions. Distinct contextual cues and behavioral responses may develop an association with unconditional drug effects through a Pavlovian conditioning process. They may also serve as occasion setters to modulate the expression of sensitized responses. As antipsychotic sensitization mimics the clinical effects of antipsychotic treatment, understanding the neurobiological mechanisms of antipsychotic sensitization and its contextual control would greatly enhance our understanding of the psychological and neurochemical nature of antipsychotic treatment in the clinic.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest the calibrated model has predictive ability typical of hydrologic models.
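A small sketch of two of the statistics named above, computed from a model Jacobian; the Jacobian, parameter values, and weights below are random stand-ins for real model-derivative output:

```python
# Composite scaled sensitivities (CSS) and parameter correlation coefficients
# (PCC) in the usual regression-based calibration form.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 50, 5
J = rng.normal(size=(n_obs, n_par))       # dy_i/db_j from the model (placeholder)
b = np.abs(rng.normal(1.0, 0.2, n_par))   # current parameter values
w = np.ones(n_obs)                        # observation weights (1/variance)

dss = J * b[None, :] * np.sqrt(w)[:, None]   # dimensionless scaled sensitivities
css = np.sqrt((dss ** 2).mean(axis=0))       # composite scaled sensitivity per parameter
print(css.round(2))

# parameter correlation coefficients from the weighted normal matrix:
cov = np.linalg.inv(dss.T @ dss)
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)
print(pcc.round(2))                          # off-diagonals near +/-1 flag nonuniqueness
```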
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. In particular, there is increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting attention to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences in sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests, while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model with an example in which two screening methods for the diagnosis of type 2 diabetes are compared.
A closure test for time-specific capture-recapture data
Stanley, T.R.; Burnham, K.P.
1999-01-01
The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. The test statistic is chi-square distributed and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration, least sensitive to temporary emigration, and of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. When it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate that the convergence test is method-independent, we applied it to three widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on model-independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
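For reference, a compact sketch of one of the SA methods named above, the Morris elementary-effects screen, on a toy function (this illustrates the underlying sensitivity method, not the MVA convergence test itself):

```python
# Minimal Morris elementary-effects screening on an analytical test function.
import numpy as np

def model(x):                       # toy function with unequal parameter influence
    return 5 * x[0] + 2 * x[1] ** 2 + 0.1 * x[2]

def morris(model, n_par, n_traj=20, delta=0.25, seed=0):
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_par)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_par)   # trajectory start in the unit cube
        y = model(x)
        for j in rng.permutation(n_par):       # one-at-a-time steps
            x[j] += delta
            y_new = model(x)
            ee[j].append((y_new - y) / delta)  # elementary effect of parameter j
            y = y_new
    ee = np.array(ee)
    return np.abs(ee).mean(axis=1), ee.std(axis=1)   # mu*, sigma

mu_star, sigma = morris(model, 3)
print(mu_star.round(2))             # parameter 0 should dominate
```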
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
Hirota, Morihiko; Ashikaga, Takao; Kouzuki, Hirokazu
2018-04-01
It is important to predict the potential of cosmetic ingredients to cause skin sensitization, and in accordance with the European Union cosmetic directive for the replacement of animal tests, several in vitro tests based on the adverse outcome pathway have been developed for hazard identification, such as the direct peptide reactivity assay, KeratinoSens™ and the human cell line activation test. Here, we describe the development of an artificial neural network (ANN) prediction model for skin sensitization risk assessment based on the integrated testing strategy concept, using direct peptide reactivity assay, KeratinoSens™, human cell line activation test and an in silico or structure alert parameter. We first investigated the relationship between published murine local lymph node assay EC3 values, which represent skin sensitization potency, and in vitro test results using a panel of about 134 chemicals for which all the required data were available. Predictions based on ANN analysis using combinations of parameters from all three in vitro tests showed a good correlation with local lymph node assay EC3 values. However, when the ANN model was applied to a testing set of 28 chemicals that had not been included in the training set, predicted EC3s were overestimated for some chemicals. Incorporation of an additional in silico or structure alert descriptor (obtained with TIMES-M or Toxtree software) in the ANN model improved the results. Our findings suggest that the ANN model based on the integrated testing strategy concept could be useful for evaluating the skin sensitization potential. Copyright © 2017 John Wiley & Sons, Ltd.
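A minimal sketch of the kind of ANN regression described here, assuming scikit-learn; the four descriptors and the synthetic "log EC3" target below are placeholders, not the paper's curated assay data:

```python
# Small feed-forward ANN mapping in vitro/in silico descriptors to a potency
# value, trained on part of the chemicals and scored on the rest.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(134, 4))    # e.g., DPRA, KeratinoSens, h-CLAT, structure alert
y = X @ np.array([0.8, 0.5, 0.3, 0.2]) + rng.normal(0, 0.1, 134)  # synthetic "log EC3"

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann.fit(X[:106], y[:106])            # training set
print(ann.score(X[106:], y[106:]))   # R^2 on held-out chemicals
```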
Improving the analysis of slug tests
McElwee, C.D.
2002-01-01
This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
An evaporative and engine-cycle model for fuel octane sensitivity prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, D.P.; Taylor, A.B.
The Motor Octane Number (MON) ranks fuels by their chemical resistance to knock. Evaporative cooling coupled with fuel chemistry determines Research Octane Number (RON) antiknock ratings. It is shown in this study that fuel octane sensitivity (numerically RON minus MON) is linked to an important difference between the two test methods; the RON test allows each fuel's evaporative cooling characteristics to affect gas temperature, while the MON test generally eliminates this effect by pre-evaporation. In order to establish RON test charge temperatures, a computer model of fuel evaporation was adapted to octane engine conditions, and simulations were compared with real Octane Test Engine measurements including droplet and gas temperatures. A novel gas temperature probe yielded data that corresponded well with model predictions. Tests spanned single-component fuels and blends of isomers, n-paraffins, aromatics and alcohols. Commercially available automotive and aviation gasolines were also tested. A good correlation was observed between the computer predictions and measured temperature data across the range of pure fuels and blends. A numerical method to estimate the effect of precombustion temperature differences on octane sensitivity was developed and applied to analyze these data, and was found to predict the widely disparate sensitivities of the tested fuels with accuracy. Data are presented showing mixture temperature histories of various tested fuels, and consequent sensitivity predictions. It is concluded that a fuel's thermal-evaporative behavior gives rise to fuel octane sensitivity as measured by differences between the RON and MON tests. This is demonstrated by the success, over a wide range of fuels, of the sensitivity predictor method described. Evaporative cooling must therefore be regarded as an important parameter affecting the general road performance of automobiles.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor into the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved into the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier's analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
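A short sketch of the two sampling schemes compared in the simulated tests, assuming SciPy's qmc module; the three parameters and their bounds are placeholders, not the camera-LiDAR calibration variables:

```python
# Monte Carlo vs. Latin hypercube sampling of calibration-parameter uncertainty.
import numpy as np
from scipy.stats import qmc

d, n = 3, 100                          # e.g., 3 calibration parameters, 100 samples
lo = np.array([0.9, -0.1, 100.0])      # lower bounds (illustrative)
hi = np.array([1.1,  0.1, 120.0])      # upper bounds

mc = np.random.default_rng(0).uniform(lo, hi, size=(n, d))          # plain Monte Carlo
lhs = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(n), lo, hi)  # stratified LHS

# LHS stratification gives even marginal coverage for the same n:
print(np.histogram(mc[:, 0], bins=10, range=(0.9, 1.1))[0])
print(np.histogram(lhs[:, 0], bins=10, range=(0.9, 1.1))[0])  # all counts == 10
```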
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. When it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate that the convergence test is method-independent, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of these methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
Should cell-free DNA testing be used to target antenatal rhesus immune globulin administration?
Ma, Kimberly K; Rodriguez, Maria I; Cheng, Yvonne W; Norton, Mary E; Caughey, Aaron B
2016-01-01
To compare the rates of alloimmunization with the use of cell-free DNA (cfDNA) screening to target antenatal rhesus immune globulin (RhIG) prenatally, versus routine administration of RhIG, in rhesus D (RhD)-negative pregnant women in a theoretical cohort using a decision-analytic model. A decision-analytic model compared cfDNA testing to routine antenatal RhIG administration. The primary outcome was maternal sensitization to the RhD antigen. The sensitivity and specificity of cfDNA testing were assumed to be 99.8% and 95.3%, respectively. Univariate and bivariate sensitivity analyses, Monte Carlo simulation, and threshold analyses were performed. In a cohort of 10,000 RhD-negative women, 22.6 sensitizations would occur with the use of cfDNA, while 20 sensitizations would occur with routine RhIG. Only when the sensitivity of the cfDNA test reached 100% was the rate of sensitization equal for both cfDNA and RhIG. Otherwise, routine RhIG minimized the rate of sensitization, especially given that RhIG is readily available in the United States. Adoption of cfDNA testing would result in a 13.0% increase in sensitization among RhD-negative women in a theoretical cohort taking into account the ethnic diversity of the United States' population.
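A back-of-envelope version of the decision tree: expected sensitizations per 10,000 RhD-negative pregnancies under each strategy. Apart from the cfDNA sensitivity quoted in the abstract, all rates below are assumed placeholders, not the published model's parameter values:

```python
# Expected sensitizations: routine RhIG vs. cfDNA-targeted RhIG.
N = 10_000
p_fetus_pos = 0.60        # P(RhD-positive fetus) -- assumed
sens_cfdna = 0.998        # cfDNA sensitivity (from the abstract)
risk_with_rhig = 0.0033   # sensitization risk per RhD+ pregnancy with RhIG -- assumed
risk_without = 0.16       # sensitization risk per RhD+ pregnancy without RhIG -- assumed

routine = N * p_fetus_pos * risk_with_rhig
missed = N * p_fetus_pos * (1 - sens_cfdna)   # false negatives receive no RhIG
cfdna = (N * p_fetus_pos - missed) * risk_with_rhig + missed * risk_without
print(round(routine, 1), round(cfdna, 1))     # cfDNA arm is slightly worse
```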
Roberts, David W; Patlewicz, Grace
2018-01-01
There is an expectation that to meet regulatory requirements, and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and h-CLAT as follows: (1) perform DPRA - if positive, classify as sensitizing, and (2) if negative, perform h-CLAT - a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
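A literal transcription of the two strategies being compared, as simple boolean functions over positive/negative assay calls:

```python
# Sequential DPRA -> h-CLAT strategy vs. the "two out of three" majority call.
def predict_sensitizer(dpra_positive: bool, h_clat_positive: bool) -> bool:
    """Tiered call: DPRA positive => sensitizer; otherwise defer to h-CLAT."""
    if dpra_positive:
        return True
    return h_clat_positive

def two_out_of_three(dpra: bool, keratinosens: bool, h_clat: bool) -> bool:
    """Majority vote over the three assays, for comparison."""
    return (dpra + keratinosens + h_clat) >= 2

print(predict_sensitizer(False, True))       # True: h-CLAT rescues a DPRA negative
print(two_out_of_three(False, True, False))  # False: majority says non-sensitizer
```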
Methods for Evaluating Mammography Imaging Techniques
2000-06-01
independent of disease prevalence. When test outcomes are dichotomous, sensitivity and specificity measure test accuracy. Sensitivity is the... Mammographers were not provided with the disease prevalence in the test set. The model we use accounts for within-mammographer... Mammographers provided one
Monochromatic Measurements of the JPSS-1 VIIRS Polarization Sensitivity
NASA Technical Reports Server (NTRS)
McIntire, Jeff; Moyer, David; Brown, Steven W.; Lykke, Keith R.; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2016-01-01
Polarization sensitivity is a critical property that must be characterized for spaceborne remote sensing instruments designed to measure reflected solar radiation. Broadband testing of the first Joint Polar-orbiting Satellite System (JPSS-1) Visible Infrared Imaging Radiometer Suite (VIIRS) showed unexpectedly large polarization sensitivities for the bluest bands on VIIRS (centered between 400 and 600 nm). Subsequent ray trace modeling indicated that large diattenuation on the edges of the bandpass for these spectral bands was the driver behind these large sensitivities. Additional testing using the National Institute of Standards and Technology's Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources was added to the test program to verify and enhance the model. The testing was limited in scope to two spectral bands at two scan angles; nonetheless, this additional testing provided valuable insight into the polarization sensitivity. Analysis has shown that the derived diattenuation agreed with the broadband measurements to within an absolute difference of about 0.4 and that the ray trace model reproduced the general features of the measured data. Additionally, by deriving the spectral responsivity, the linear diattenuation is shown to be explicitly dependent on the changes in bandwidth with polarization state.
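For intuition, a tiny sketch of how linear polarization sensitivity is typically extracted from responses measured at a series of polarizer angles (the 2θ Fourier component of the response); the response values below are synthetic:

```python
# Degree of linear polarization sensitivity via the standard 2-theta Fourier fit.
import numpy as np

theta = np.deg2rad(np.arange(0, 180, 15))      # polarizer sheet angles
resp = 1.0 + 0.03 * np.cos(2 * theta - 0.5)    # synthetic band response vs. angle

a0 = resp.mean()
a2 = 2 * np.mean(resp * np.cos(2 * theta))     # cos(2*theta) Fourier coefficient
b2 = 2 * np.mean(resp * np.sin(2 * theta))     # sin(2*theta) Fourier coefficient
pol_sens = np.hypot(a2, b2) / a0               # amplitude of the 2-theta modulation
print(round(pol_sens, 4))                      # ~0.03, i.e., 3% sensitivity
```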
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
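A generic sketch of two of the efficient model-discrimination statistics named above (AICc and BIC), computed from weighted least-squares fits; the example fits are invented, not the Maggia Valley models:

```python
# Information criteria for ranking alternative calibrated models from their
# weighted least-squares objective values (Gaussian-error textbook form).
import numpy as np

def aicc_bic(sswr, n_obs, n_par):
    """AICc and BIC from a sum of squared weighted residuals."""
    k = n_par + 1                                   # +1 for the error variance
    aic = n_obs * np.log(sswr / n_obs) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)  # small-sample correction
    bic = n_obs * np.log(sswr / n_obs) + k * np.log(n_obs)
    return aicc, bic

# three alternative models: fit improves as parameters are added
for name, sswr, n_par in [("M1", 120.0, 2), ("M2", 90.0, 4), ("M3", 88.0, 7)]:
    aicc, bic = aicc_bic(sswr, n_obs=60, n_par=n_par)
    print(name, round(aicc, 1), round(bic, 1))      # lowest value wins
```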
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that measurements of insulin taken more than 62 min after administration of the glucose bolus into the experimental subject's body possess no information about the parameter estimates. The glucose measurements carry information about the parameter estimates for up to three hours. These observations have been verified by parameter estimation with the minimal model. The standard errors of the estimates and a crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
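For readers unfamiliar with it, the glucose minimal model of the IVGTT in runnable form; the parameter values and the post-bolus insulin profile below are typical literature magnitudes assumed for illustration, not estimates from this paper:

```python
# Bergman-type glucose minimal model driven by an assumed insulin input.
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, p3 = 0.03, 0.02, 1.0e-5   # 1/min, 1/min, 1/min per (uU/mL) -- assumed
Gb, Ib = 90.0, 10.0               # basal glucose (mg/dL) and insulin (uU/mL)

def insulin(t):                    # assumed exponentially decaying post-bolus insulin
    return Ib + 80.0 * np.exp(-0.05 * t)

def rhs(t, y):
    G, X = y                       # glucose and "remote" insulin action
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(rhs, (0, 180), [280.0, 0.0], t_eval=np.arange(0, 181, 10))
print(sol.y[0].round(1))           # simulated glucose over 3 h
print("S_I =", p3 / p2)            # insulin sensitivity index of the model
```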
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
McKim, James M; Keller, Donald J; Gorski, Joel R
2012-12-01
Chemical sensitization is a serious condition caused by small reactive molecules and is characterized by a delayed type hypersensitivity known as allergic contact dermatitis (ACD). Contact with these molecules via dermal exposure represents a significant concern for chemical manufacturers. Recent legislation in the EU has created the need to develop non-animal alternative methods for many routine safety studies including sensitization. Although most of the alternative research has focused on pure chemicals that possess reasonable solubility properties, it is important for any successful in vitro method to have the ability to test compounds with low aqueous solubility. This is especially true for the medical device industry, where device extracts must be prepared in both polar and non-polar vehicles in order to evaluate chemical sensitization. The aim of this research was to demonstrate the functionality and applicability of the human reconstituted skin models (MatTek Epiderm(®) and SkinEthic RHE) as a test system for the evaluation of chemical sensitization and its potential use for medical device testing. In addition, the development of the human 3D skin model should allow the in vitro sensitization assay to be used for finished product testing in the personal care, cosmetics, and pharmaceutical industries. This approach combines solubility, chemical reactivity, cytotoxicity, and activation of the Nrf2/ARE expression pathway to identify and categorize chemical sensitizers. Known chemical sensitizers representing extreme/strong-, moderate-, weak-, and non-sensitizing potency categories were first evaluated in the skin models at six exposure concentrations ranging from 0.1 to 2500 µM for 24 h. The expression of eight Nrf2/ARE-, one AhR/XRE- and two Nrf1/MRE-controlled genes was measured by qRT-PCR. The fold-induction at each exposure concentration was combined with reactivity and cytotoxicity data to determine the sensitization potential. The results demonstrated that both the MatTek and SkinEthic models performed in a manner consistent with data previously reported with the human keratinocyte (HaCaT) cell line. The system was tested further by evaluating chemicals known to be associated with the manufacture of medical devices. In all cases, the human skin models performed as well as or better than the HaCaT cell model previously evaluated. In addition, this study identifies a clear unifying trigger that controls both the Nrf2/ARE pathway and essential biochemical events required for the development of ACD. Finally, this study has demonstrated that by utilizing human reconstructed skin models, it is possible to evaluate non-polar extracts from medical devices and low solubility finished products.
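The qRT-PCR fold-induction values referred to here are conventionally computed with the Livak 2^-ΔΔCt method; a minimal sketch with invented Ct values:

```python
# Fold-induction of a target gene relative to a reference gene, treated vs.
# control, via the standard 2^-(ddCt) reduction of qRT-PCR Ct values.
def fold_induction(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    """Livak 2^-ddCt fold change."""
    dct_treated = ct_gene_treated - ct_ref_treated   # normalize to reference gene
    dct_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# target drops 2 cycles relative to reference after exposure => 4-fold induction
print(fold_induction(24.0, 18.0, 26.0, 18.0))  # 4.0
```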
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Jesse E.; Baptista, António M.
A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
De Koster, J; Hostens, M; Hermans, K; Van den Broeck, W; Opsomer, G
2016-10-01
The aim of the present research was to compare different measures of insulin sensitivity in dairy cows at the end of the dry period. To do so, 10 clinically healthy dairy cows with a varying body condition score were selected. By performing hyperinsulinemic euglycemic clamp (HEC) tests, we previously demonstrated a negative association between the insulin sensitivity and insulin responsiveness of glucose metabolism and the body condition score of these animals. In the same animals, other measures of insulin sensitivity were determined and the correlation with the HEC test, which is considered the gold standard, was calculated. Measures derived from the intravenous glucose tolerance test (IVGTT) are based on the disappearance of glucose after an intravenous glucose bolus. Glucose concentrations during the IVGTT were used to calculate the area under the curve of glucose and the clearance rate of glucose. In addition, glucose and insulin data from the IVGTT were fitted in the minimal model to derive the insulin sensitivity parameter, Si. Based on blood samples taken before the start of the IVGTT, basal concentrations of glucose, insulin, NEFA, and β-hydroxybutyrate were determined and used to calculate surrogate indices for insulin sensitivity, such as the homeostasis model of insulin resistance, the quantitative insulin sensitivity check index, the revised quantitative insulin sensitivity check index and the revised quantitative insulin sensitivity check index including β-hydroxybutyrate. Correlation analysis revealed no association between the results obtained by the HEC test and any of the surrogate indices for insulin sensitivity. For the measures derived from the IVGTT, the area under the curve for the first 60 min of the test and the Si derived from the minimal model demonstrated good correlation with the gold standard. Copyright © 2016 Elsevier Inc. All rights reserved.
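The surrogate indices compared above in their usual textbook forms (units noted inline); the example values are illustrative, not measurements from these cows:

```python
# Surrogate insulin-sensitivity indices: HOMA-IR, QUICKI, RQUICKI, RQUICKI-BHB.
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def rquicki(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l):
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l))

def rquicki_bhb(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l, bhb_mmol_l):
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l) + math.log10(bhb_mmol_l))

print(round(homa_ir(3.5, 12.0), 2), round(quicki(63.0, 12.0), 3))
```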
QSAR models of human data can enrich or replace LLNA testing for human skin sensitization
Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2016-01-01
Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We succeeded in developing predictive QSAR models of all available human data with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of their skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. They can also be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595
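A minimal sketch of the kind of consensus rule described (a call is made only when the QSAR prediction and the LLNA result agree, trading coverage for accuracy; the function and data below are hypothetical illustrations, not the paper's implementation):

```python
from typing import Optional

def consensus_call(qsar_pred: int, llna_result: int) -> Optional[int]:
    """Return 1 (sensitizer) or 0 (non-sensitizer) only when the QSAR
    prediction and the LLNA outcome agree; otherwise make no call."""
    return qsar_pred if qsar_pred == llna_result else None

calls = [consensus_call(q, l) for q, l in [(1, 1), (0, 1), (0, 0)]]
coverage = sum(c is not None for c in calls) / len(calls)
print(calls, f"coverage={coverage:.0%}")  # [1, None, 0] coverage=67%
```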
Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei
2010-07-01
In this study, we used the Fourier amplitude sensitivity test (FAST) to evaluate how the variance of each parameter influences the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers. We also estimated the sensitivity index over the whole blood-sampling period, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. Apart from CL1/F, the sensitivity indices of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed rapidly over time. As the variance of k(a) increased, its sensitivity index increased markedly, with a significant decrease in the sensitivity indices of the other parameters and an obvious shift in peak time. NONMEM simulations comparing different fitting results showed that sampling time points designed according to FAST outperformed the alternatives. These results suggest that FAST can assess the sensitivity of model parameters effectively and can assist in the design of clinical sampling times and the construction of PopPK models.
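A sketch of how a FAST analysis is commonly run today, using the open-source SALib package rather than the authors' original implementation; the surrogate function, parameter bounds, and sample size below are hypothetical stand-ins for the tacrolimus PopPK model:

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Hypothetical PK-like surrogate with the five parameters named in the abstract
problem = {
    "num_vars": 5,
    "names": ["CL1_F", "V1_F", "V2_F", "CL2_F", "ka"],
    "bounds": [[5, 50], [20, 200], [50, 500], [5, 50], [0.1, 2.0]],
}

def surrogate(x):
    cl1, v1, v2, cl2, ka = x
    t = 2.0  # hours post-dose
    ke = cl1 / v1
    # Toy concentration-like response; smooth in all parameters
    return np.exp(-ke * t) * (1 - np.exp(-ka * t)) * (1 + v2 / (v2 + cl2 * t)) / v1

X = fast_sampler.sample(problem, 1000)   # FAST search-curve samples
Y = np.apply_along_axis(surrogate, 1, X)
Si = fast.analyze(problem, Y)            # first-order (S1) and total (ST) indices
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
```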
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Lipford, William E.; Leighty, Bradley D.; Goodman, Kyle Z.; Goad, William K.; Goad, Linda R.
2011-01-01
This report presents results of a test of the pressure sensitive paint (PSP) technique on the Common Research Model (CRM). The test was conducted at the National Transonic Facility (NTF) at NASA Langley Research Center. PSP data were collected on several surfaces with the tunnel operating in both cryogenic mode and standard air mode. The report also outlines lessons learned from the test, as well as possible approaches to challenges faced in the test that can be applied to later entries.
James D. Wickham; Robert V. O' Neill; Kurt H. Riitters; Timothy G. Wade; K. Bruce Jones
1997-01-01
Calculation of landscape metrics from land-cover data is becoming increasingly common. Some studies have shown that these measurements are sensitive to differences in land-cover composition, but none are known to have also tested their sensitivity to land-cover misclassification. An error simulation model was written to test the sensitivity of selected landscape...
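A minimal sketch of such an error simulation (randomly flipping cell labels on a land-cover grid at a given misclassification rate and recomputing a simple landscape metric; the metric, grid, and rates here are illustrative, not those of the original study):

```python
import numpy as np

rng = np.random.default_rng(42)
n_classes = 4
cover = rng.integers(0, n_classes, size=(100, 100))  # synthetic land-cover map

def like_adjacency(grid):
    """Fraction of horizontally/vertically adjacent cell pairs with equal
    class: a simple aggregation-type landscape metric."""
    same = (grid[:, :-1] == grid[:, 1:]).sum() + (grid[:-1, :] == grid[1:, :]).sum()
    total = grid[:, :-1].size + grid[:-1, :].size
    return same / total

def misclassify(grid, rate, rng):
    """Flip each cell to a random other class with probability `rate`."""
    flip = rng.random(grid.shape) < rate
    noise = (grid + rng.integers(1, n_classes, size=grid.shape)) % n_classes
    return np.where(flip, noise, grid)

for rate in [0.0, 0.05, 0.10, 0.20]:
    vals = [like_adjacency(misclassify(cover, rate, rng)) for _ in range(20)]
    print(f"error rate {rate:.2f}: metric = {np.mean(vals):.3f}")
```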
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
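A sketch of the kind of bootstrap-based convergence check described, here applied to ranking: resample the rows used to estimate the sensitivity indices and ask how stable the parameter ranking is. The index estimator below is a simple placeholder (squared input-output correlation), not one of the three methods tested in the study:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n, d = 500, 5
X = rng.uniform(0, 1, size=(n, d))
y = 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(0, 0.5, n)  # toy model

def indices(X, y):
    # Placeholder sensitivity measure: squared correlation input/output
    return np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(X.shape[1])])

def ranks(v):
    return np.argsort(np.argsort(-v))  # rank 0 = most sensitive

ref = ranks(indices(X, y))
agreements = []
for _ in range(200):                   # bootstrap resamples
    b = rng.integers(0, n, n)
    rho, _ = spearmanr(ref, ranks(indices(X[b], y[b])))
    agreements.append(rho)
print(f"mean rank agreement across resamples: {np.mean(agreements):.3f}")
```

If the mean rank agreement is well below 1 at the chosen sample size, the ranking has not converged and more samples are needed.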
Lifrani, Awatif; Dos Santos, Jacinthe; Dubarry, Michel; Rautureau, Michelle; Blachier, Francois; Tome, Daniel
2009-01-28
Food allergy can cause food-related anaphylaxis. Food allergen labeling is the principal means of protecting sensitized individuals. This motivated European Directive 2003/89 on the labeling of ingredients or additives that could trigger adverse reactions, which has been in effect since 2005. During this study, we developed animal models with allergy to ovalbumin, caseinate, and isinglass in order to detect fining agent residues that could induce anaphylactic reactions in sensitized mice. The second aim of the study was to design sandwich ELISA tests specific to each fining agent in order to detect their residue antigenicity, both during wine processing and in commercially available bottled wines. Sensitized mice and sandwich ELISA methods were established to test a vast panel of wines. The results showed that some commercially available wines, although positive in our highly sensitive sandwich ELISA tests, were not allergenic in sensitized mice. Commercially available bottled wines made using standardized processes (fining, maturation, and filtration) therefore do not represent any risk of anaphylactic reactions in sensitized mice.
NASA Technical Reports Server (NTRS)
Walter, Bernadette P.; Heimann, Martin
1999-01-01
Methane emissions from natural wetlands constitute the largest methane source at present and depend strongly on climate. In order to investigate the response of methane emissions from natural wetlands to climate variations, a 1-dimensional process-based climate-sensitive model to derive methane emissions from natural wetlands is developed. In the model, the processes leading to methane emission are simulated within a 1-dimensional soil column, and the three transport mechanisms (diffusion, plant-mediated transport, and ebullition) are modeled explicitly. The model forcing consists of daily values of soil temperature, water table, and Net Primary Productivity; at permafrost sites the thaw depth is included. The methane model is tested using observational data obtained at five wetland sites located in North America, Europe and Central America, representing a large variety of environmental conditions. It can be shown that in most cases seasonal variations in methane emissions can be explained by the combined effect of changes in soil temperature and the position of the water table. Our results also show that a process-based approach is needed, because there is no simple relationship between these controlling factors and methane emissions that applies to a variety of wetland sites. The sensitivity of the model to the choice of key model parameters is tested, and further sensitivity tests are performed to demonstrate how methane emissions from wetlands respond to climate variations.
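A minimal sketch of the diffusion component of such a 1-D soil-column scheme: an explicit finite-difference update with a Q10-style temperature-dependent production term. All parameter values are illustrative placeholders, not those of the Walter and Heimann model:

```python
import numpy as np

nz, dz, dt = 50, 0.02, 60.0   # 50 layers of 2 cm, 60 s time step
D = 1e-9                      # effective CH4 diffusivity, m^2/s (illustrative)
q10, t_ref = 2.0, 10.0        # Q10 temperature response (illustrative)

def step(c, soil_temp_c, base_prod):
    """One explicit FTCS diffusion step with Q10-scaled methane production."""
    prod = base_prod * q10 ** ((soil_temp_c - t_ref) / 10.0)
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    c_new = c + dt * (D * lap + prod)
    c_new[0] = 0.0             # fixed (atmospheric) concentration at the surface
    c_new[-1] = c_new[-2]      # no-flux bottom boundary
    return c_new

c = np.zeros(nz)
for _ in range(1000):
    c = step(c, soil_temp_c=15.0, base_prod=1e-8)
print(f"near-surface concentration gradient drives emission: c[1]={c[1]:.2e}")
```

Plant-mediated transport and ebullition would enter the same update as additional sink terms; the explicit scheme is stable here because D*dt/dz² is far below 0.5.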
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.
2017-12-01
Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes, such as spall and fragmentation in metals, to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures propagate through a material of interest. These parameters control the softening curve that the model relies on to determine fracture within each element of the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method, used here to explore how each parameter influences model fracture and to determine the key model parameters with the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.
Finite Element Model Calibration Approach for Area I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Benchmarking an unstructured grid sediment model in an energetic estuary
Lopez, Jesse E.; Baptista, António M.
2016-12-14
A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
NASA Astrophysics Data System (ADS)
Noacco, V.; Wagener, T.; Pianosi, F.; Philp, T.
2017-12-01
Insurance companies provide insurance against a wide range of threats, such as natural catastrophes, nuclear incidents and terrorism. To quantify risk and support investment decisions, mathematical models are used, for example to set the premiums charged to clients for protection from financial loss should deleterious events occur. While these models are essential tools for adequately assessing the risk attached to an insurer's portfolio, their development is costly, and their value for decision-making may be limited by an incomplete understanding of uncertainty and sensitivity. Aside from the business need to understand risk and uncertainty, the insurance sector also faces regulation which requires firms to test their models in such a way that uncertainties are appropriately captured and that plans are in place to assess the risks and their mitigation. Building and testing models is a costly and time-intensive activity for insurance companies. This study uses an established global sensitivity analysis toolbox (SAFE) to more efficiently capture the uncertainties and sensitivities embedded in models used by a leading re/insurance firm, with structured approaches to validate these models and test the impact of assumptions on the model predictions. It is hoped that this in turn will lead to better-informed and more robust business decisions.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the use of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. We also explored how the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I². Fifty-five systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I² of sensitivity or specificity became larger, especially when I²>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors believe it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model, and the greater the between-study variance, the larger the deviation of simple pooling is likely to be. It is necessary to raise the level of knowledge of statistical methods and software for meta-analyses of DTA data.
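The contrast between naive and model-based pooling can be illustrated with a small sketch. Here a univariate random-effects pooling of logit sensitivities (DerSimonian-Laird) stands in for the full bivariate model, which additionally models the correlation with specificity; the study data are made up:

```python
import numpy as np

# (true positives, total diseased) per study; made-up data
tp = np.array([45, 30, 80, 12, 60])
n = np.array([50, 40, 100, 20, 75])

simple = tp.sum() / n.sum()  # "simple pooling": collapse all 2x2 tables

# Random-effects pooling on the logit scale (DerSimonian-Laird)
p = (tp + 0.5) / (n + 1)                 # continuity-corrected proportions
y = np.log(p / (1 - p))                  # logit sensitivities
v = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)  # within-study variances
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)
pooled_logit = np.sum(w_re * y) / w_re.sum()
model_based = 1 / (1 + np.exp(-pooled_logit))

print(f"simple pooling: {simple:.3f}  random-effects logit: {model_based:.3f}")
```

The two answers diverge as between-study heterogeneity (tau2) grows, which mirrors the paper's finding that deviations increase with I².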
Hood, Donald C
2007-05-01
Glaucoma causes damage to the retinal ganglion cells and their axons, and this damage can be detected with both structural and functional tests. The purpose of this study was to better understand the relationship between a structural measure of retinal nerve fiber layer (RNFL) and the most common functional test, behavioral sensitivity with static automated perimetry (SAP). First, a linear model, previously shown to describe the relationship between local visual evoked potentials and SAP sensitivity, was modified to predict the change in RNFL as measured by optical coherence tomography. Second, previous work by others was shown to be consistent with this model.
Cross-borehole slug test analysis in a fractured limestone aquifer
NASA Astrophysics Data System (ADS)
Audouin, Olivier; Bodin, Jacques
2008-01-01
This work proposes new semi-analytical solutions for the interpretation of cross-borehole slug tests in fractured media. Our model is an extension of previous work [Barker, J.A., 1988. A generalized radial flow model for hydraulic tests in fractured rock. Water Resources Research 24 (10), 1796-1804; Butler Jr., J.J., Zhan, X., 2004. Hydraulic tests in highly permeable aquifers. Water Resources Research 40, W12402. doi:10.1029/2003WR002998]. It includes inertial effects at both test and observation wells and a fractional flow dimension in the aquifer. The model has five fitting parameters: flow dimension n, hydraulic conductivity K, specific storage coefficient Ss, and effective lengths of test well Le and of observation well Leo. The results of a sensitivity analysis show that the most sensitive parameter is the flow dimension n. The model sensitivity to the other parameters may be ranked as follows: K > Le ˜ Leo > Ss. The sensitivity to aquifer storage remains one or two orders of magnitude lower than that to the other parameters. The model has been coupled to an automatic inversion algorithm to facilitate the interpretation of real field data. This inversion algorithm is based on a Gauss-Newton optimization procedure conditioned by re-scaled sensitivities. It has been used to successfully interpret cross-borehole slug test data from the Hydrogeological Experimental Site (HES) of Poitiers, France, consisting of fractured and karstic limestones. HES data provide flow dimension values ranging between 1.6 and 2.5, and hydraulic conductivity values ranging between 4.4 × 10⁻⁵ and 7.7 × 10⁻⁴ m s⁻¹. These values are consistent with previous interpretations of single-well slug tests. The results of the sensitivity analysis are confirmed by calculations of relative errors on parameter estimates, which show that the accuracy on n and K is below 20% and that on Ss is about one order of magnitude. The K-values interpreted from cross-borehole slug tests are one order of magnitude higher than those previously interpreted from interference pumping tests. These findings suggest that cross-borehole slug tests focus on preferential flowpath networks made by fractures and karstic channels, i.e. the head perturbation induced by a slug test propagates only through those flowpaths with the lowest hydraulic resistance. As a result, cross-borehole slug tests are expected to identify the hydrodynamic properties of karstic-channel and fracture flowpaths, and may be considered complementary to pumping tests, which more likely provide bulk properties of the whole fracture/karstic-channel/matrix system.
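A sketch of the scaled-sensitivity step that underlies such rankings: central finite differences of a forward model with respect to each parameter, multiplied by the parameter value to make the measures comparable. The forward model below is a generic placeholder, not the paper's semi-analytical slug-test solution:

```python
import numpy as np

def forward(params, t):
    """Placeholder head response h(t); stands in for the semi-analytical
    cross-borehole slug-test solution."""
    n, K, Ss, Le = params
    return np.exp(-K * t / Le) * np.cos(n * t) * (1 + Ss)

def scaled_sensitivities(params, t, rel_step=1e-4):
    """dh/dp * p by central finite differences (dimensionless scaling)."""
    base = np.asarray(params, dtype=float)
    out = []
    for j, p in enumerate(base):
        hi, lo = base.copy(), base.copy()
        hi[j] += rel_step * p
        lo[j] -= rel_step * p
        dh_dp = (forward(hi, t) - forward(lo, t)) / (2 * rel_step * p)
        out.append(dh_dp * p)
    return np.array(out)  # shape: (n_params, n_times)

t = np.linspace(0.1, 10, 200)
css = np.sqrt((scaled_sensitivities([2.0, 5e-4, 1e-5, 10.0], t) ** 2).mean(axis=1))
for name, v in zip(["n", "K", "Ss", "Le"], css):
    print(f"composite scaled sensitivity {name}: {v:.3g}")
```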
Integrated Decision Strategies for Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Kleinstreuer, Nicole; Paris, Michael; Lehmann, David M.; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Lowit, Anna; Allen, David; Casey, Warren
2016-01-01
One of the top priorities of ICCVAM is the identification and evaluation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events of the process have been well characterized in an adverse outcome pathway (AOP) proposed by OECD. Accordingly, ICCVAM is working to develop integrated decision strategies based on the AOP using in vitro, in chemico, and in silico information. Data were compiled for 120 substances tested in the murine local lymph node assay (LLNA), direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens assay. Data for six physicochemical properties that may affect skin penetration were also collected, and skin sensitization read-across predictions were performed using OECD QSAR Toolbox. All data were combined into a variety of potential integrated decision strategies to predict LLNA outcomes using a training set of 94 substances and an external test set of 26 substances. Fifty-four models were built using multiple combinations of machine learning approaches and predictor variables. The seven models with the highest accuracy (89–96% for the test set and 96–99% for the training set) for predicting LLNA outcomes used a support vector machine (SVM) approach with different combinations of predictor variables. The performance statistics of the SVM models were higher than any of the non-animal tests alone and higher than simple test battery approaches using these methods. These data suggest that computational approaches are promising tools to effectively integrate data sources to identify potential skin sensitizers without animal testing. PMID:26851134
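A minimal sketch of the kind of SVM pipeline described, using scikit-learn with leave-one-out cross-validation; the feature matrix below is synthetic, standing in for the DPRA, h-CLAT, KeratinoSens, and physicochemical predictors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(7)
n = 120  # substances
# Hypothetical columns: DPRA depletion, h-CLAT CD86/CD54, KeratinoSens,
# and physicochemical properties (e.g., logP, MW)
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # LLNA class

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2%}")
```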
Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with TRAM Test Data
NASA Technical Reports Server (NTRS)
Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan
1999-01-01
A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F
2009-06-01
Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate the preclinical colorectal cancer (CRC) duration and the sensitivity of unrehydrated FOBT from the data of three randomized controlled trials (Minnesota, Nottingham, and Funen). In addition to two usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis in which sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested three hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, the sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method considers only the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and different combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow reactive transport process). For test and demonstration purposes, the developed methodology was applied to a real-world groundwater reactive transport modeling case with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measures for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and for decision-makers formulating policies and strategies.
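The layering idea can be illustrated with a small Monte Carlo sketch based on the law of total variance: the first-order contribution of the scenario layer is the variance across scenarios of the expectation over everything else. The model below is a toy response, not the groundwater code:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(scenario, theta):
    # Toy response standing in for a reactive-transport prediction
    return scenario * 2.0 + np.sin(theta[0]) + 0.5 * theta[1] ** 2

scenarios = np.array([0.5, 1.0, 1.5])     # discrete scenario layer
n_inner = 20000

cond_means, all_outputs = [], []
for s in scenarios:
    theta = rng.normal(size=(n_inner, 2))  # parametric layer
    out = np.array([model(s, th) for th in theta])
    cond_means.append(out.mean())
    all_outputs.append(out)

total_var = np.concatenate(all_outputs).var()
s_scenario = np.var(cond_means) / total_var  # first-order scenario index
print(f"first-order sensitivity of scenario layer: {s_scenario:.2f}")
```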
Meta-analysis of diagnostic accuracy studies in mental health
Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J
2015-01-01
Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
The Ohio Contrast Cards: Visual Performance in a Pediatric Low-vision Site
Hopkins, Gregory R.; Dougherty, Bradley E.; Brown, Angela M.
2017-01-01
SIGNIFICANCE This report describes the first clinical use of the Ohio Contrast Cards, a new test that measures the maximum spatial contrast sensitivity of low-vision patients who cannot recognize and identify optotypes and for whom the spatial frequency of maximum contrast sensitivity is unknown. PURPOSE To compare measurements of the Ohio Contrast Cards to measurements of three other vision tests and a vision-related quality-of-life questionnaire obtained on partially sighted students at Ohio State School for the Blind. METHODS The Ohio Contrast Cards show printed square-wave gratings at very low spatial frequency (0.15 cycle/degree). The patient looks to the left/right side of the card containing the grating. Twenty-five students (13 to 20 years old) provided four measures of visual performance: two grating card tests (the Ohio Contrast Cards and the Teller Acuity Cards) and two letter charts (the Pelli-Robson contrast chart and the Bailey-Lovie acuity chart). Spatial contrast sensitivity functions were modeled using constraints from the grating data. The Impact of Vision Impairment on Children questionnaire measured vision-related quality of life. RESULTS Ohio Contrast Card contrast sensitivity was always less than 0.19 log10 units below the maximum possible contrast sensitivity predicted by the model; average Pelli-Robson letter contrast sensitivity was near the model prediction, but 0.516 log10 units below the maximum. Letter acuity was 0.336 logMAR below the grating acuity results. The model estimated the best testing distance in meters for optimum Pelli-Robson contrast sensitivity from the Bailey-Lovie acuity as distance = 1.5 − logMAR for low-vision patients. Of the four vision tests, only Ohio Contrast Card contrast sensitivity was independently and statistically significantly correlated with students' quality of life. CONCLUSIONS The Ohio Contrast Cards combine a grating stimulus, a looking indicator behavior, and contrast sensitivity measurement. They show promise for the clinical objective of advising the patient and his/her caregivers about the success the patient is likely to enjoy in tasks of everyday life. PMID:28972542
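A worked example of the reported rule of thumb, distance = 1.5 − logMAR (meters), as a small helper function (the function name is hypothetical):

```python
def best_pelli_robson_distance_m(logmar: float) -> float:
    """Testing distance (m) at which Pelli-Robson contrast sensitivity is
    expected to be optimal, per the reported rule distance = 1.5 - logMAR."""
    return 1.5 - logmar

# A student with Bailey-Lovie acuity of 1.0 logMAR would be tested at 0.5 m
print(best_pelli_robson_distance_m(1.0))  # 0.5
```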
Unified Model Deformation and Flow Transition Measurements
NASA Technical Reports Server (NTRS)
Burner, Alpheus W.; Liu, Tianshu; Garg, Sanjay; Bell, James H.; Morgan, Daniel G.
1999-01-01
The number of optical techniques that may potentially be used during a given wind tunnel test is continually growing. These include parameter sensitive paints that are sensitive to temperature or pressure, several different types of off-body and on-body flow visualization techniques, optical angle-of-attack (AoA), optical measurement of model deformation, optical techniques for determining density or velocity, and spectroscopic techniques for determining various flow field parameters. Often in the past the various optical techniques were developed independently of each other, with little or no consideration for other techniques that might also be used during a given test. Recently two optical techniques have been increasingly requested for production measurements in NASA wind tunnels. These are the video photogrammetric (or videogrammetric) technique for measuring model deformation known as the video model deformation (VMD) technique, and the parameter sensitive paints for making global pressure and temperature measurements. Considerations for, and initial attempts at, simultaneous measurements with the pressure sensitive paint (PSP) and the videogrammetric techniques have been implemented. Temperature sensitive paint (TSP) has been found to be useful for boundary-layer transition detection since turbulent boundary layers convect heat at higher rates than laminar boundary layers of comparable thickness. Transition is marked by a characteristic surface temperature change wherever there is a difference between model and flow temperatures. Recently, additional capabilities have been implemented in the target-tracking videogrammetric measurement system. These capabilities have permitted practical simultaneous measurements using parameter sensitive paint and video model deformation measurements that led to the first successful unified test with TSP for transition detection in a large production wind tunnel.
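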
VIIRS-J1 Polarization Narrative
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; McCorkel, Joel; McIntire, Jeff; Moyer, David; McAndrew, Brendan; Brown, Steven W.; Lykke, Keith; Butler, James; Meister, Gerhard; Thome, Kurtis J.
2015-01-01
The polarization sensitivity of the VIS/NIR bands of the Joint Polar Satellite System 1 (JPSS-1) Visible/Infrared Imaging Radiometer Suite (VIIRS) instrument was measured using a broadband source. While the polarization sensitivity of bands M5-M7, I1, and I2 was less than 2.5%, the maximum polarization sensitivity of bands M1, M2, M3, and M4 was measured to be 6.4%, 4.4%, 3.1%, and 4.3%, respectively, with a polarization characterization uncertainty of less than 0.3%. A detailed polarization model indicated that the large polarization sensitivity observed in the M1 to M4 bands was mainly due to the large polarization sensitivity introduced at the leading and trailing edges of the newly manufactured VISNIR bandpass focal plane filters installed in front of the VISNIR detectors. This was confirmed by polarization measurements of bands M1 and M4 using monochromatic light. Discussed are the activities leading up to and including the instrument's two polarization tests, the polarization model and its results, the role of the focal plane filters, the polarization testing of the Aft-Optics-Assembly, the testing of the polarizers at Goddard and NIST, and the use of NIST's T-SIRCUS for polarization testing, with the associated analyses and results.
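A sketch of how polarization sensitivity is commonly extracted in such tests: the band response is recorded while a sheet polarizer rotates, and the amplitude of the second-harmonic modulation of the response versus polarizer angle, relative to the mean, gives the sensitivity. This is a generic approach, not the specific JPSS-1 analysis, and the data are simulated:

```python
import numpy as np

# Hypothetical response of one band as a polarizer rotates 0..360 degrees
theta = np.deg2rad(np.arange(0, 360, 15))
rng = np.random.default_rng(5)
resp = 1.0 + 0.064 * np.cos(2 * (theta - 0.3)) + rng.normal(0, 0.002, theta.size)

# Least-squares fit of resp = a0 + a2*cos(2*theta) + b2*sin(2*theta)
A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
a0, a2, b2 = np.linalg.lstsq(A, resp, rcond=None)[0]
sensitivity = np.hypot(a2, b2) / a0   # amplitude of the 2-theta modulation / mean
print(f"polarization sensitivity: {sensitivity:.1%}")  # ~6.4%, an M1-like case
```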
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Buck, Gregory M.; Leighty, Bradley D.; Lipford, William E.; Oglesby, Donald M.
2008-01-01
Pressure Sensitive Paint (PSP) and Temperature Sensitive Paint (TSP) were used to visualize and quantify the surface interactions of reaction control system (RCS) jets on the aft body of capsule reentry vehicle shapes. The first model tested was an Apollo-like configuration and was used to focus primarily on the effects of the forward facing roll and yaw jets. The second model tested was an early Orion Crew Module configuration blowing only out of its forward-most yaw jet, which was expected to have the most intense aerodynamic heating augmentation on the model surface. This paper will present the results from the experiments, which show that with proper system design, both PSP and TSP are effective tools for studying these types of interaction in hypersonic testing environments.
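A sketch of the standard PSP data-reduction step: luminescence intensity ratios are converted to pressure through a Stern-Volmer-type calibration, I_ref/I = A(T) + B(T)·(P/P_ref). The coefficients below are illustrative; real tests calibrate them per paint formulation and temperature:

```python
import numpy as np

def pressure_from_psp(i_ref, i, p_ref, a=0.15, b=0.85):
    """Invert the Stern-Volmer relation I_ref/I = A + B*(P/P_ref).
    A and B are paint- and temperature-dependent calibration coefficients
    (illustrative values; A + B is ~1 at reference conditions)."""
    return p_ref * ((i_ref / i) - a) / b

# Example: a 10% intensity drop relative to the wind-off image implies
# a higher local pressure
p = pressure_from_psp(i_ref=1.0, i=0.9, p_ref=101.3)  # kPa
print(f"recovered pressure: {p:.1f} kPa")
```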
Testing and modeling of PBX-9501 shock initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, Kim; Foley, Timothy; Novak, Alan
2010-01-01
This paper describes an ongoing effort to develop a detonation sensitivity test for PBX-9501 that is suitable for studying pristine and damaged HE. The approach involves testing and comparing the sensitivities of HE pressed to various densities and those of pre-damaged samples with similar porosities. The ultimate objectives are to understand the response of pre-damaged HE to shock impacts and to develop practical computational models for use in system analysis codes for HE safety studies. Computer simulation with the CTH shock physics code is used to aid the experimental design and analyze the test results. In the calculations, initiation and growth or failure of detonation are modeled with the empirical HVRB model. The historical LANL SSGT and LSGT were reviewed and it was determined that a new, modified gap test be developed to satisfy the current requirements. In the new test, the donor/spacer/acceptor assembly is placed in a holder that is designed to work with fixtures for pre-damaging the acceptor sample. CTH simulations were made of the gap test with PBX-9501 samples pressed to three different densities. The calculated sensitivities were validated by test observations. The agreement between the computed and experimental critical gap thicknesses, ranging from 9 to 21 mm under various test conditions, is well within 1 mm. These results show that the numerical modeling is a valuable complement to the experimental efforts in studying and understanding shock initiation of PBX-9501.
Synthetic Modifications In the Frequency Domain for Finite Element Model Update and Damage Detection
2017-09-01
Sensitivity-based finite element model updating and structural damage detection has been limited by the number of modes available in a vibration test and ... increase the number of modes and corresponding sensitivity data by artificially constraining the structure under test, producing a large number of ... structural modifications to the measured data, including both springs-to-ground and mass modifications. This is accomplished with frequency domain
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
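A sketch of the DELSA idea under stated assumptions: at many points across parameter space, compute local derivative-based first-order indices (each parameter's squared derivative scaled by its prior variance, normalized by the sum across parameters) and then inspect their distribution rather than a single global number. The model and priors below are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(11)

def model(p):
    return p[0] ** 2 + np.sin(3 * p[1]) + 0.1 * p[2]  # toy hydrologic response

bounds = np.array([[0, 1], [0, 1], [0, 1]], dtype=float)
s2 = ((bounds[:, 1] - bounds[:, 0]) ** 2) / 12.0  # variances of uniform priors

def delsa_indices(p, h=1e-5):
    grad = np.array([
        (model(p + h * e) - model(p - h * e)) / (2 * h)
        for e in np.eye(len(p))])
    contrib = grad ** 2 * s2
    return contrib / contrib.sum()  # local first-order indices, sum to 1

samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, 3))
S = np.array([delsa_indices(p) for p in samples])
# A distribution of importance across parameter space, not one number:
print("median local index per parameter:", np.round(np.median(S, axis=0), 3))
print("fraction of space where p2 dominates:", np.mean(S.argmax(axis=1) == 1))
```

This mirrors the paper's key observation: a parameter can dominate on average yet be unimportant in sizeable regions of the parameter space.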
Luechtefeld, Thomas; Maertens, Alexandra; McKim, James M; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa
2015-11-01
Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems of information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and combined a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed the data with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonic connection between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the balanced accuracy improvement may seem small, this obscures the actual improvement in misclassifications, as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers predicted as non-sensitizers) on all data sets. Copyright © 2015 John Wiley & Sons, Ltd.
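A minimal sketch of the recursive variable-selection step with scikit-learn (random-forest-based recursive feature elimination with cross-validation); the dose-informed hidden Markov layer of the paper is not reproduced here, and all features are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 12))   # stand-ins for in silico/in chemico/in vitro features
y = (X[:, 0] + X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 0.7, n) > 0).astype(int)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=1,
    cv=StratifiedKFold(5),
    scoring="balanced_accuracy",
)
selector.fit(X, y)
print("features kept:", np.flatnonzero(selector.support_))
```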
Li, Yi; Tseng, Yufeng J.; Pan, Dahua; Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Hopfinger, Anton J.
2008-01-01
Currently, the only validated methods to identify skin sensitization effects are in vivo models, such as the Local Lymph Node Assay (LLNA) and guinea pig studies. There is a tremendous need, in particular due to novel legislation, to develop animal alternatives, e.g., Quantitative Structure-Activity Relationship (QSAR) models. Here, QSAR models for skin sensitization using LLNA data have been constructed. The descriptors used to generate these models are derived from the 4D-molecular similarity paradigm and are referred to as universal 4D-fingerprints. A training set of 132 structurally diverse compounds and a test set of 15 structurally diverse compounds were used in this study. The statistical methodologies used to build the models are logistic regression (LR) and partial least squares coupled logistic regression (PLS-LR), which prove to be effective tools for studying skin sensitization measures expressed in the two categorical terms of sensitizer and non-sensitizer. QSAR models with low values of the Hosmer-Lemeshow goodness-of-fit statistic, χ²HL, are significant and predictive. For the training set, the cross-validated prediction accuracy of the logistic regression models ranges from 77.3% to 78.0%, while that of PLS-logistic regression models ranges from 87.1% to 89.4%. For the test set, the prediction accuracy of logistic regression models ranges from 80.0%-86.7%, while that of PLS-logistic regression models ranges from 73.3%-80.0%. The QSAR models are made up of 4D-fingerprints related to aromatic atoms, hydrogen bond acceptors and negatively partially charged atoms. PMID:17226934
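A sketch of the Hosmer-Lemeshow goodness-of-fit computation for a fitted classifier, grouping observations by deciles of predicted risk; the logistic model and data are synthetic stand-ins for the 4D-fingerprint models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 5))                      # stand-in fingerprint features
y = (X[:, 0] - X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

def hosmer_lemeshow(y, p, g=10):
    """Chi-square HL statistic over g groups of increasing predicted risk."""
    order = np.argsort(p)
    chi2 = 0.0
    for grp in np.array_split(order, g):
        obs, exp, m = y[grp].sum(), p[grp].sum(), len(grp)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / m) + 1e-12)
    return chi2  # compared against chi-square with g-2 degrees of freedom

print(f"HL chi-square: {hosmer_lemeshow(y, p):.2f}")
```

Low values of the statistic indicate good agreement between predicted probabilities and observed outcome rates, which is the sense in which the abstract calls such models significant and predictive.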
Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
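A minimal sketch of how such a sensitivity spectrum is computed: eigenvalues of J^T J (the Gauss-Newton Hessian) of a least-squares cost in log-parameters, here for a toy sum-of-exponentials model that exhibits the characteristic many-decade eigenvalue spread:

```python
import numpy as np

t = np.linspace(0, 5, 100)
log_k = np.log(np.array([1.0, 1.3, 1.7, 2.2]))  # log-rate parameters

def residual_jacobian(log_k):
    """Jacobian of y(t) = sum_i exp(-k_i t) with respect to log k_i."""
    k = np.exp(log_k)
    # d/d(log k_i) exp(-k_i t) = -k_i * t * exp(-k_i t)
    return np.column_stack([-ki * t * np.exp(-ki * t) for ki in k])

J = residual_jacobian(log_k)
eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]
print("eigenvalues:", [f"{v:.2e}" for v in eigvals])
print(f"spread: {eigvals[0] / eigvals[-1]:.1e} (many decades = sloppy)")
```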
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
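The metabolome-wide association testing with false-discovery-rate control described above can be sketched as follows. The data here are random placeholders (a hypothetical metabolite matrix and sensitization scores matching the reported dimensions), and Pearson correlation stands in for the study's model-based association tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_metabolites = 48, 301
metabolites = rng.normal(size=(n_samples, n_metabolites))  # hypothetical levels
sensitization = rng.normal(size=n_samples)                 # e.g., a total distance score

# Per-metabolite association tests
pvals = np.array([stats.pearsonr(metabolites[:, j], sensitization)[1]
                  for j in range(n_metabolites)])

# Benjamini-Hochberg procedure at FDR < 0.05, as in the reported threshold
order = np.argsort(pvals)
thresh = 0.05 * np.arange(1, n_metabolites + 1) / n_metabolites
passed = pvals[order] <= thresh
k = np.nonzero(passed)[0].max() + 1 if passed.any() else 0
print("metabolome-wide significant metabolites:", order[:k])
```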
A Novel Testing Model for Opportunistic Screening of Pre-Diabetes and Diabetes among U.S. Adults
Zhang, Yurong; Hu, Gang; Zhang, Lu; Mayo, Rachel; Chen, Liwei
2015-01-01
Objective The study aim was to evaluate the performance of a novel simultaneous testing model, based on the Finnish Diabetes Risk Score (FINDRISC) and HbA1c, in detecting undiagnosed diabetes and pre-diabetes in Americans. Research Design and Methods This cross-sectional analysis included 3,886 men and women (≥ 20 years) without known diabetes from the U.S. National Health and Nutrition Examination Survey (NHANES) 2005-2010. The FINDRISC was developed based on eight variables (age, BMI, waist circumference, use of antihypertensive drugs, history of high blood glucose, family history of diabetes, daily physical activity, and fruit and vegetable intake). The sensitivity, specificity, and receiver operating characteristic (ROC) curve of the testing model were calculated for undiagnosed diabetes and pre-diabetes, determined by oral glucose tolerance test (OGTT). Results The prevalence of undiagnosed diabetes was 7.0%, and the prevalence of pre-diabetes was 43.1% (27.7% for isolated impaired fasting glucose (IFG), 5.1% for impaired glucose tolerance (IGT), and 10.3% for having both IFG and IGT). The sensitivity and specificity of using HbA1c alone were 24.2% and 99.6% for diabetes (cutoff of ≥6.5%), and 35.2% and 86.4% for pre-diabetes (cutoff of ≥5.7%). The sensitivity and specificity of using the FINDRISC alone (cutoff of ≥9) were 79.1% and 48.6% for diabetes and 60.2% and 61.4% for pre-diabetes. Using the simultaneous testing model with a combination of FINDRISC and HbA1c improved the sensitivity to 84.2% for diabetes and 74.2% for pre-diabetes. The specificity of the simultaneous testing model was 48.4% for diabetes and 53.0% for pre-diabetes. Conclusions This simultaneous testing model is a practical and valid tool for diabetes screening in the general U.S. population. PMID:25790106
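A minimal sketch of the simultaneous-testing arithmetic: under an either-test-positive decision rule and an independence assumption, the combined sensitivity and specificity follow directly from the single-test values quoted above, and for diabetes they reproduce the reported 84.2% and 48.4%.

```python
def parallel_rule(se1, sp1, se2, sp2):
    """Either-test-positive rule under an independence assumption."""
    se = 1 - (1 - se1) * (1 - se2)   # a case is missed only if both tests miss it
    sp = sp1 * sp2                   # a non-case is negative only on both tests
    return se, sp

# FINDRISC (cutoff >= 9) and HbA1c (>= 6.5%) values for diabetes, from the abstract
se, sp = parallel_rule(0.791, 0.486, 0.242, 0.996)
print(f"combined sensitivity {se:.1%}, specificity {sp:.1%}")  # 84.2%, 48.4%
```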
Non-animal sensitization testing: state-of-the-art.
Vandebriel, Rob J; van Loveren, Henk
2010-05-01
Predictive tests to identify the sensitizing properties of chemicals are carried out using animals. In the European Union, timelines for phasing out many standard animal tests were established for cosmetics. Following this policy, the new European Chemicals Legislation (REACH) favors alternative methods, if validated and appropriate. In this review the authors aim to provide a state-of-the-art overview of alternative methods (in silico, in chemico, and in vitro) to identify contact and respiratory sensitizing capacity and, in some cases, give a measure of potency. The past few years have seen major advances in QSAR (quantitative structure-activity relationship) models, where especially mechanism-based models have great potential; peptide reactivity assays, where multiple parameters can be measured simultaneously, providing a more complete reactivity profile; and cell-based assays. Several cell-based assays are in development, not only using different cell types, but also several specifically developed assays such as three-dimensionally (3D) reconstituted skin models, an antioxidant response reporter assay, determination of signaling pathways, and gene profiling. Some of these assays show relatively high sensitivity and specificity for a large number of sensitizers and should enter validation (or are indeed entering this process). Integrating multiple assays in a decision tree or integrated testing system is a next step, but has yet to be developed. Adequate risk assessment, however, is likely to require significantly more time and effort.
Recognizing patterns of visual field loss using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher
2014-03-01
Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM), in which EM estimates the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3, the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
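A hedged sketch of the GEM-plus-PCA pipeline above using scikit-learn. The FDT data here are random placeholders with the reported dimensions, and the rule for choosing the number of PCA axes (90% explained variance) is an illustrative assumption, not the criterion used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
fdt = rng.normal(25, 5, size=(1976, 52))  # hypothetical 52-point FDT sensitivities

# EM-fitted Gaussian mixture; in practice the model order would be selected
# with a criterion such as BIC rather than fixed at 3
gem = GaussianMixture(n_components=3, covariance_type='full', random_state=0).fit(fdt)
labels = gem.predict(fdt)

# Decompose each cluster into its principal axes (patterns of loss)
for c in range(3):
    pca = PCA().fit(fdt[labels == c])
    n_axes = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.9) + 1
    print(f"cluster {c}: {n_axes} axes explain 90% of the variance")
```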
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant-head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in the storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased. (Author's abstract)
Previous studies indicate that freshwater mollusks are more sensitive than commonly tested organisms to some chemicals, such as copper and ammonia. Nevertheless, mollusks are generally under-represented in toxicity databases. Studies are needed to generate data with which to comp...
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal models such as the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximization Test (GPMT). In recent years, EU regulations have provided a strong incentiv...
A wave model test bed study for wave energy resource characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Neary, Vincent S.; Wang, Taiping
This paper presents a test bed study conducted to evaluate best practices in wave modeling to characterize energy resources. The model test bed off the central Oregon Coast was selected because of the high wave energy and available measured data at the site. Two third-generation spectral wave models, SWAN and WWIII, were evaluated. A four-level nested-grid approach, from global to test bed scale, was employed. Model skills were assessed using a set of model performance metrics based on comparing six simulated wave resource parameters to observations from a wave buoy inside the test bed. Both WWIII and SWAN performed well at the test bed site and exhibited similar modeling skills. The ST4 package with WWIII, which represents better physics for wave growth and dissipation, outperformed ST2 physics and improved wave power density and significant wave height predictions. However, ST4 physics tended to overpredict the wave energy period. The newly developed ST6 physics did not improve the overall model skill for predicting the six wave resource parameters. Sensitivity analysis using different wave frequency and direction resolutions indicated the model results were not sensitive to spectral resolution at the test bed site, likely due to the absence of complex bathymetric and geometric features.
Constitutive equation of friction based on the subloading-surface concept
Ueno, Masami; Kuwayama, Takuya; Suzuki, Noriyuki; Yonemura, Shigeru; Yoshikawa, Nobuo
2016-01-01
The subloading-friction model is capable of describing static friction, the smooth transition from static to kinetic friction, and the recovery to static friction after sliding stops or the sliding velocity decreases. This recovery causes negative rate sensitivity (i.e., a decrease in friction resistance with increasing sliding velocity). A generalized subloading-friction model is formulated in this article by incorporating the concept of overstress for viscoplastic sliding velocity into the subloading-friction model to describe not only negative rate sensitivity but also positive rate sensitivity (i.e., an increase in friction resistance with increasing sliding velocity) at sliding velocities ranging from quasi-static to impact sliding. The validity of the model is verified by numerical experiments and comparisons with test data obtained from friction tests using a lubricated steel specimen. PMID:27493570
Sun, Hao; Dul, Mitchell W; Swanson, William H
2006-07-01
The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli.
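The conversion of perimetric sensitivities to the reciprocal of Weber contrast described above can be sketched as follows, assuming Humphrey Field Analyzer conventions (0 dB corresponds to the 10,000 asb maximum stimulus, with a 31.5 asb background); the exact instrument constants used in the study may differ.

```python
def db_to_weber_sensitivity(db, l_max=10000.0, l_bg=31.5):
    """Convert perimetric dB (attenuation of the maximum stimulus) to the
    reciprocal of Weber contrast, 1 / (delta_L / L_bg)."""
    delta_l = l_max * 10 ** (-db / 10)  # luminance increment at threshold
    weber_contrast = delta_l / l_bg
    return 1.0 / weber_contrast

for db in (20, 25, 30, 35):
    print(db, "dB ->", round(db_to_weber_sensitivity(db), 2))
```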
Campos, Nicole G.; Castle, Philip E.; Wright, Thomas C.; Kim, Jane J.
2016-01-01
As cervical cancer screening programs are implemented in low-resource settings, protocols are needed to maximize health benefits under operational constraints. Our objective was to develop a framework for examining health and economic tradeoffs between screening test sensitivity, population coverage, and follow-up of screen-positive women, to help decision makers identify where program investments yield the greatest value. As an illustrative example, we used an individual-based Monte Carlo simulation model of the natural history of human papillomavirus (HPV) and cervical cancer calibrated to epidemiologic data from Uganda. We assumed once in a lifetime screening at age 35 with two-visit HPV DNA testing or one-visit visual inspection with acetic acid (VIA). We assessed the health and economic tradeoffs that arise between 1) test sensitivity and screening coverage; 2) test sensitivity and loss to follow-up (LTFU) of screen-positive women; and 3) test sensitivity, screening coverage, and LTFU simultaneously. The decline in health benefits associated with sacrificing HPV DNA test sensitivity by 20% (e.g., shifting from provider- to self-collection of specimens) could be offset by gains in coverage if coverage increased by at least 20%. When LTFU was 10%, two-visit HPV DNA testing with 80-90% sensitivity was more effective and more cost-effective than one-visit VIA with 40% sensitivity, and yielded greater health benefits than VIA even as VIA sensitivity increased to 60% and HPV test sensitivity declined to 70%. As LTFU increased, two-visit HPV DNA testing became more costly and less effective than one-visit VIA. Setting-specific data on achievable test sensitivity, coverage, follow-up rates, and programmatic costs are needed to guide programmatic decision making for cervical cancer screening. PMID:25943074
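A back-of-the-envelope sketch of the coverage/sensitivity/follow-up tradeoff discussed above: the fraction of prevalent cases that a program ultimately manages is roughly the product of coverage, test sensitivity, and retention through follow-up visits. This is a crude product of proportions with hypothetical inputs, not the study's microsimulation model.

```python
def programme_detection_rate(test_sensitivity, coverage, ltfu, visits=2):
    """Fraction of prevalent cases screened, test-positive, and retained;
    loss to follow-up (LTFU) is applied once per visit after the first."""
    retained = (1 - ltfu) ** (visits - 1)
    return coverage * test_sensitivity * retained

hpv_two_visit = programme_detection_rate(0.85, 0.70, ltfu=0.10, visits=2)
via_one_visit = programme_detection_rate(0.40, 0.70, ltfu=0.10, visits=1)
print(f"HPV, 2 visits: {hpv_two_visit:.1%}; VIA, 1 visit: {via_one_visit:.1%}")
```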
Multivariate models for skin sensitization hazard and potency
One of the top priorities being addressed by ICCVAM is the identification and validation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events have been well characterized in an adverse outcome pathw...
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
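The two variance-stabilizing transformations proposed above can be written down directly, in one common parameterization; here x is the number of events (e.g., true positives) out of n subjects in a study.

```python
import numpy as np

def arcsine_sqrt(x, n):
    """Arcsine square root transform of a proportion x/n."""
    return np.arcsin(np.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform (one common parameterization;
    some authors average the two terms instead of summing them)."""
    return np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))

# e.g., 45 true positives among 50 diseased subjects in one study
x, n = 45, 50
print("arcsine sqrt:", arcsine_sqrt(x, n))
print("Freeman-Tukey:", freeman_tukey(x, n))
```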
NASA Astrophysics Data System (ADS)
Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria
2015-04-01
Application of artificial neural networks (ANNs) in many areas of engineering, in particular to geotechnical problems such as site characterization, has demonstrated some degree of success. The present paper evaluates the feasibility of several types of ANN models for predicting the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this end, a research database of CPTu data from 70 test points around the Göta River near Lilla Edet in southwest Sweden, a highly landslide-prone area, was collected and used as input for the ANNs. The quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton, and Levenberg-Marquardt training algorithms were tested and trained on the CPTu data to compare field investigation results with ANN estimates of clay sensitivity. Clay sensitivity was chosen as the target parameter because of its relation to landslides in Sweden: a particularly sensitive clay known as quick clay is considered the main cause of the landslides experienced there. The training and testing program started with a 3-2-1 network architecture. After trying several architectures and varying the hidden layers to obtain higher output resolution, a 3-4-4-3-1 architecture was adopted. Testing showed that increasing the number of hidden layers up to four improved the results, and the 3-4-4-3-1 networks gave reliable and reasonable predictions of clay sensitivity. The conjugate gradient descent algorithm, with R2 = 0.897, performed best among the tested algorithms. Keywords: clay sensitivity, landslide, artificial neural network
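A hedged sketch of a 3-4-4-3-1 network in scikit-learn (three hypothetical CPTu input features, hidden layers of 4, 4, and 3 units, one output). Note that scikit-learn trains with gradient-based solvers rather than the Levenberg-Marquardt or quick-propagation algorithms compared in the study, and the data below are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical CPTu features for 70 test points, e.g. cone resistance,
# sleeve friction, and pore pressure
X = rng.normal(size=(70, 3))
y = rng.uniform(10, 80, size=70)  # clay sensitivity (quick clay when high)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 3-4-4-3-1 architecture: 3 inputs, hidden layers (4, 4, 3), 1 output
ann = MLPRegressor(hidden_layer_sizes=(4, 4, 3), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out points:", ann.score(X_te, y_te))
```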
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn; Burford, Janessa
2012-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Burford, Janessa
2013-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, R.; Klemm, H.A.; Battye, W.
1979-04-01
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufacturers. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning, specific resistance coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change, but the cleaning requirement changed drastically. On the other hand, considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110,000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity, from the modeler's perspective, can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
El Allaki, Farouk; Harrington, Noel; Howden, Krista
2016-11-01
The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99), showing slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates; the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
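The combination of component sensitivities into a system sensitivity can be sketched with a small Monte Carlo simulation; the beta distributions below are hypothetical stand-ins for the scenario-tree outputs, and independence between components is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000  # Monte Carlo iterations

# Hypothetical component sensitivities drawn from beta distributions to
# represent uncertainty (roughly matching the reported orders of magnitude)
cse_slaughter = rng.beta(95, 4, n)
cse_export = rng.beta(2, 98, n)
cse_investigation = rng.beta(1, 160, n)

# The system detects infection if at least one component detects it
ssse = 1 - (1 - cse_slaughter) * (1 - cse_export) * (1 - cse_investigation)
print("median SSSe:", np.median(ssse))
print("95% PI:", np.percentile(ssse, [2.5, 97.5]))
```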
Technique for Measuring Speed and Visual Motion Sensitivity in Lizards
ERIC Educational Resources Information Center
Woo, Kevin L.; Burke, Darren
2008-01-01
Testing sensory characteristics on herpetological species has been difficult due to a range of properties related to physiology, responsiveness, performance ability, and the type of reinforcer used. Using the Jacky lizard as a model, we outline a successfully established procedure in which to test the visual sensitivity to motion characteristics.…
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
Data sensitivity in a hybrid STEP/Coulomb model for aftershock forecasting
NASA Astrophysics Data System (ADS)
Steacy, S.; Jimenez Lloret, A.; Gerstenberger, M.
2014-12-01
Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip, and we also examine how the choice of receiver plane geometry affects the results. We find that the results are strongly sensitive to the slip models and moderately sensitive to the choice of receiver orientation. We further find that comparison of the stress fields (resulting from the slip models) with the locations of events in the learning period provides advance information on whether or not a particular hybrid model will perform better than STEP.
VIIRS/J1 polarization narrative
NASA Astrophysics Data System (ADS)
Waluschka, Eugene; McCorkel, Joel; McIntire, Jeff; Moyer, David; McAndrew, Brendan; Brown, Steven W.; Lykke, Keith R.; Young, James B.; Fest, Eric; Butler, James; Wang, Tung R.; Monroy, Eslim O.; Turpie, Kevin; Meister, Gerhard; Thome, Kurtis J.
2015-09-01
The polarization sensitivity of the visible/near-infrared (VISNIR) bands in the Joint Polar Satellite System 1 (JPSS-1, or J1) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was measured using a broadband source. While the polarization sensitivity for bands M5-M7, I1, and I2 was less than 2.5%, the maximum polarization sensitivity for bands M1, M2, M3, and M4 was measured to be 6.4%, 4.4%, 3.1%, and 4.3%, respectively, with a polarization characterization uncertainty of less than 0.38%. A detailed polarization model indicated that the large polarization sensitivity observed in the M1 to M4 bands is mainly due to the large polarization sensitivity introduced at the leading and trailing edges of the newly manufactured VISNIR bandpass focal plane filters installed in front of the VISNIR detectors. This was confirmed by polarization measurements of bands M1 and M4 using monochromatic light. Discussed are the activities leading up to and including the two polarization tests, the polarization model and its results, the role of the focal plane filters, the polarization testing of the Aft-Optics-Assembly, the testing of the polarizers at NASA's Goddard Space Flight Center and at the National Institute of Standards and Technology (NIST), and the use of NIST's Traveling Spectral Irradiance and Radiance responsivity Calibrations using Uniform Sources (T-SIRCUS) for polarization testing and the associated analyses and results.
Integrated decision strategies for skin sensitization hazard.
Strickland, Judy; Zang, Qingda; Kleinstreuer, Nicole; Paris, Michael; Lehmann, David M; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Lowit, Anna; Allen, David; Casey, Warren
2016-09-01
One of the top priorities of the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM) is the identification and evaluation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events of the process have been well characterized in an adverse outcome pathway (AOP) proposed by the Organisation for Economic Co-operation and Development (OECD). Accordingly, ICCVAM is working to develop integrated decision strategies based on the AOP using in vitro, in chemico and in silico information. Data were compiled for 120 substances tested in the murine local lymph node assay (LLNA), direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens assay. Data for six physicochemical properties, which may affect skin penetration, were also collected, and skin sensitization read-across predictions were performed using OECD QSAR Toolbox. All data were combined into a variety of potential integrated decision strategies to predict LLNA outcomes using a training set of 94 substances and an external test set of 26 substances. Fifty-four models were built using multiple combinations of machine learning approaches and predictor variables. The seven models with the highest accuracy (89-96% for the test set and 96-99% for the training set) for predicting LLNA outcomes used a support vector machine (SVM) approach with different combinations of predictor variables. The performance statistics of the SVM models were higher than any of the non-animal tests alone and higher than simple test battery approaches using these methods. These data suggest that computational approaches are promising tools to effectively integrate data sources to identify potential skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
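A minimal sketch of the SVM approach named above, using scikit-learn with random placeholder data for the 120 substances; in the study, the predictors were DPRA, h-CLAT and KeratinoSens readouts plus six physicochemical properties, and many predictor combinations were compared.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical predictor matrix: nine assay/physicochemical features
X = rng.normal(size=(120, 9))
y = rng.integers(0, 2, size=120)  # LLNA outcome: 1 = sensitizer

train, test = np.arange(94), np.arange(94, 120)  # 94 training, 26 external test
svm = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
svm.fit(X[train], y[train])
print("external test accuracy:", svm.score(X[test], y[test]))
```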
The atopic dog as a model of peanut and tree nut food allergy.
Teuber, Suzanne S; Del Val, Gregorio; Morigasaki, Susumu; Jung, Hye Rim; Eisele, Pamela H; Frick, Oscar L; Buchanan, Bob B
2002-12-01
Animal models are needed that mimic human IgE-mediated peanut and tree nut allergy. Atopic dogs have been previously used in a model of food allergy to cow's milk, beef, wheat, and soy, with the demonstration of specific IgE production and positive oral challenges similar to those seen in human subjects. We sought to sensitize dogs to peanut, walnut, and Brazil nut and to assess whether sensitization is accompanied by clinical reactions and whether there is cross-reactivity among the different preparations. Eleven dogs were sensitized subcutaneously by using an established protocol with 1 microg each of peanut, English walnut, or Brazil nut protein extracts in alum first at birth and then after modified live virus vaccinations at 3, 7, and 11 weeks of age. The dogs were sensitized to other allergens, including soy and either wheat or barley. Intradermal skin tests, IgE immunoblotting to nut proteins, and oral challenges were performed with ground nut preparations. At 6 months of age, the dogs' intradermal skin test responses were positive to the nut extracts. IgE immunoblotting to peanut, walnut, and Brazil nut showed strong recognition of proteins in the aqueous preparations. Each of the 4 peanut- and the 3 Brazil nut-sensitized dogs and 3 of the 4 walnut-sensitized dogs reacted on oral challenge with the corresponding primary immunogen at age 2 years. None of the peanut-sensitized dogs reacted clinically with walnut or Brazil nut challenges. One of the walnut-sensitized dogs had delayed (overnight) vomiting to Brazil nut. On the basis of measurements of the mean amount of allergen eliciting a skin test response in dogs, the hierarchy of reactivity by skin testing is similar to the clinical experience in human subjects (peanut > tree nuts > wheat > soy > barley). Cross-reactivity, which was not apparent between soy and peanut or tree nuts or between peanut and tree nuts, was slight between walnut and Brazil nut. The results give further support to the dog as a model of human food allergy.
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
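For readers who want to reproduce a variance-based analysis like Sobol's method, the third-party SALib package offers one implementation (the v1.4-style API is shown); the exposure model, variable names, and bounds below are hypothetical, and this is not the SHEDS testbed itself.

```python
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    'num_vars': 3,
    'names': ['exposure_duration', 'intake_rate', 'concentration'],
    'bounds': [[0.5, 8.0], [0.1, 2.0], [1.0, 100.0]],
}

# Saltelli sampling generates N * (2D + 2) model runs for D inputs
X = saltelli.sample(problem, 1024)

# Hypothetical nonlinear exposure model with an interaction term
Y = X[:, 0] * X[:, 1] * X[:, 2] + 0.5 * X[:, 0] ** 2

Si = sobol.analyze(problem, Y)
print("first-order indices:", Si['S1'])  # main contributions
print("total-order indices:", Si['ST'])  # main + interaction contributions
```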
Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John
2012-01-01
A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. The aim of this study was to determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity and tested using a nonparametric bootstrap analysis. Pain sensitivity significantly mediated the relationship between stress and headache intensity. The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed.
van Wagenberg, Coen P A; Backus, Gé B C; Wisselink, Henk J; van der Vorst, Jack G A J; Urlings, Bert A P
2013-09-01
In this paper we analyze the impact of the sensitivity and specificity of a Mycobacterium avium (Ma) test on pig producer incentives to control Ma in finishing pigs. A possible Ma control system which includes a serodiagnostic test and a penalty on finishing pigs in herds detected with Ma infection was modelled. Using a dynamic optimization model and a grid search of deliveries of herds from pig producers to slaughterhouse, optimal control measures for pig producers and optimal penalty values for deliveries with increased Ma risk were identified for different sensitivity and specificity values. Results showed that higher sensitivity and lower specificity induced use of more intense control measures and resulted in higher pig producer costs and lower Ma seroprevalence. The minimal penalty value needed to comply with a threshold for Ma seroprevalence in finishing pigs at slaughter was lower at higher sensitivity and lower specificity. With imperfect specificity a larger sample size decreased pig producer incentives to control Ma seroprevalence, because the higher number of false positives resulted in an increased probability of rejecting a batch of finishing pigs irrespective of whether the pig producer applied control measures. We conclude that test sensitivity and specificity must be considered in incentive system design to induce pig producers to control Ma in finishing pigs with minimum negative effects. Copyright © 2013 Elsevier B.V. All rights reserved.
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
Blinded Validation of Breath Biomarkers of Lung Cancer, a Potential Ancillary to Chest CT Screening
Phillips, Michael; Bauer, Thomas L.; Cataneo, Renee N.; Lebauer, Cassie; Mundada, Mayur; Pass, Harvey I.; Ramakrishna, Naren; Rom, William N.; Vallières, Eric
2015-01-01
Background Breath volatile organic compounds (VOCs) have been reported as biomarkers of lung cancer, but it is not known if biomarkers identified in one group can identify disease in a separate independent cohort. Also, it is not known if combining breath biomarkers with chest CT has the potential to improve the sensitivity and specificity of lung cancer screening. Methods Model-building phase (unblinded): Breath VOCs were analyzed with gas chromatography mass spectrometry in 82 asymptomatic smokers having screening chest CT, 84 symptomatic high-risk subjects with a tissue diagnosis, 100 without a tissue diagnosis, and 35 healthy subjects. Multiple Monte Carlo simulations identified breath VOC mass ions with greater than random diagnostic accuracy for lung cancer, and these were combined in a multivariate predictive algorithm. Model-testing phase (blinded validation): We analyzed breath VOCs in an independent cohort of similar subjects (n = 70, 51, 75 and 19 respectively). The algorithm predicted discriminant function (DF) values in blinded replicate breath VOC samples analyzed independently at two laboratories (A and B). Outcome modeling: We modeled the expected effects of combining breath biomarkers with chest CT on the sensitivity and specificity of lung cancer screening. Results Unblinded model-building phase. The algorithm identified lung cancer with sensitivity 74.0%, specificity 70.7% and C-statistic 0.78. Blinded model-testing phase: The algorithm identified lung cancer at Laboratory A with sensitivity 68.0%, specificity 68.4%, C-statistic 0.71; and at Laboratory B with sensitivity 70.1%, specificity 68.0%, C-statistic 0.70, with linear correlation between replicates (r = 0.88). In a projected outcome model, breath biomarkers increased the sensitivity, specificity, and positive and negative predictive values of chest CT for lung cancer when the tests were combined in series or parallel. Conclusions Breath VOC mass ion biomarkers identified lung cancer in a separate independent cohort, in a blinded replicated study. Combining breath biomarkers with chest CT could potentially improve the sensitivity and specificity of lung cancer screening. Trial Registration ClinicalTrials.gov NCT00639067 PMID:26698306
Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris
2015-01-01
The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
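The two SDT parameters can be estimated from response counts as sketched below (an equal-variance Gaussian model with a simple log-linear correction for extreme proportions); the counts are hypothetical, and the article's multilevel estimation would replace this per-subject calculation.

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and response bias c,
    with a log-linear correction to avoid infinite z-scores at 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical counts from a "painful"/"not painful" judgement task
print(sdt_parameters(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```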
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
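A sketch of the CUSUM stage of such a patient-based QC scheme: hypothetical standardized residuals (measured minus regression-predicted results) stand in for the logistic-regression output tallied in the study, and the reference value k and decision limit h are conventional choices, not the paper's parameters.

```python
import numpy as np

def cusum(error_scores, k=0.5, h=5.0):
    """One-sided CUSUM: signal when the cumulative sum of (score - k),
    floored at zero, exceeds the decision limit h."""
    s, alarms = 0.0, []
    for i, z in enumerate(error_scores):
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(i)
            s = 0.0  # reset after signalling
    return alarms

rng = np.random.default_rng(5)
# Hypothetical standardized residuals; a calibration shift is simulated
# from sample 60 onward
scores = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
print("error signalled at samples:", cusum(scores))
```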
Farreny, Aida; Del Rey-Mejías, Ángel; Escartin, Gemma; Usall, Judith; Tous, Núria; Haro, Josep Maria; Ochoa, Susana
2016-07-01
Schizophrenia involves marked motivational and learning deficits that may reflect abnormalities in reward processing. The purpose of this study was to examine positive and negative feedback sensitivity in schizophrenia using computational modeling derived from the Wisconsin Card Sorting Test (WCST). We also aimed to explore feedback sensitivity in a sample with bipolar disorder. Eighty-three individuals with schizophrenia and 27 with bipolar disorder were included. Demographic, clinical and cognitive outcomes, together with the WCST, were considered in both samples. Computational modeling was performed using R syntax to calculate 3 parameters based on trial-by-trial execution on the WCST: reward sensitivity (R), punishment sensitivity (P), and choice consistency (D). The associations between outcome variables and the parameters were investigated. Both positive and negative sensitivity showed deficits, but the P parameter was clearly diminished in schizophrenia. Cognitive variables, age, and symptoms were associated with the R, P, and D parameters in schizophrenia. The sample with bipolar disorder showed cognitive deficits and feedback abnormalities to a lesser extent than individuals with schizophrenia. Negative feedback sensitivity demonstrated the greater deficit in both samples. Idiosyncratic cognitive requirements in the WCST might introduce confusion when assuming model-free reinforcement learning. Negative symptoms of schizophrenia were related to lower feedback sensitivity and less goal-directed patterns of choice. Copyright © 2016 Elsevier Inc. All rights reserved.
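A hedged sketch of trial-by-trial modeling with separate reward (R) and punishment (P) sensitivities and a consistency parameter (D): a toy delta-rule learner with softmax choice illustrates the idea, but it is not the specific WCST model fitted in the study, and the trial sequence below is hypothetical.

```python
import numpy as np

def wcst_likelihood(choices, feedback, R, P, D, n_options=4):
    """Log-likelihood of trial-by-trial choices under a simple delta-rule
    account with asymmetric feedback sensitivities and consistency D."""
    q = np.zeros(n_options)
    ll = 0.0
    for choice, rewarded in zip(choices, feedback):
        p = np.exp(D * q) / np.exp(D * q).sum()  # softmax; D scales consistency
        ll += np.log(p[choice])
        target = 1.0 if rewarded else 0.0
        rate = R if rewarded else P  # reward vs. punishment sensitivity
        q[choice] += rate * (target - q[choice])
    return ll

# Hypothetical short sequence: chosen option and whether feedback was positive
choices = [0, 0, 1, 2, 2, 2]
feedback = [True, False, False, True, True, True]
print(wcst_likelihood(choices, feedback, R=0.4, P=0.2, D=3.0))
```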
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamics code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
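Latin hypercube sampling over the five varied inputs can be sketched with SciPy's qmc module (SciPy >= 1.7); the bounds below are hypothetical placeholders, not the values used in the CTH simulations.

```python
from scipy.stats import qmc

# Five varied inputs: driver pressure [Pa], driver density [kg/m^3],
# test-section pressure [Pa], test-section density [kg/m^3], He mole fraction
lower = [1.0e5, 0.1, 1.0e4, 1.0, 0.4]
upper = [5.0e5, 1.0, 1.0e5, 8.0, 0.6]

sampler = qmc.LatinHypercube(d=5, seed=0)
unit_samples = sampler.random(n=50)            # 50 points in [0, 1)^5
runs = qmc.scale(unit_samples, lower, upper)   # rescale to physical bounds

for i, run in enumerate(runs[:3]):
    print(f"case {i}: {run}")
```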
Trajectories of Reinforcement Sensitivity during Adolescence and Risk for Substance Use
ERIC Educational Resources Information Center
Colder, Craig R.; Hawk, Larry W., Jr.; Lengua, Liliana J.; Wiezcorek, William; Eiden, Rina Das; Read, Jennifer P.
2013-01-01
Developmental neuroscience models suggest that changes in responsiveness to incentives contribute to increases in adolescent risk behavior, including substance use. Trajectories of sensitivity to reward (SR) and sensitivity to punishment (SP) were examined and tested as predictors of escalation of early substance use in a community sample of…
An evaluation of selected in silico models for the assessment ...
Skin sensitization remains an important endpoint for consumers, manufacturers and regulators. Although the development of alternative approaches to assess skin sensitization potential has been extremely active over many years, the implication of regulations such as REACH and the Cosmetics Directive in the EU has provided a much stronger impetus to actualize this research into practical tools for decision making. Thus there has been considerable focus on the development, evaluation, and integration of alternative approaches for skin sensitization hazard and risk assessment. This includes in silico approaches such as (Q)SARs and expert systems. This study aimed to evaluate the predictive performance of a selection of in silico models and then to explore whether combining those models led to an improvement in accuracy. A dataset of 473 substances that had been tested in the local lymph node assay (LLNA) was compiled. This comprised 295 sensitizers and 178 non-sensitizers. Four freely available models were identified: two statistical models, VEGA and the MultiCASE A33 model for skin sensitization (MCASE A33) from the Danish National Food Institute, and two mechanistic models, Toxtree's skin sensitization reaction domains (Toxtree SS Rxn domains) and the OASIS v1.3 protein binding alerts for skin sensitization from the OECD Toolbox (OASIS). VEGA and MCASE A33 aim to predict sensitization as a binary score whereas the mechanistic models identified reaction domains or structura
Thurman, Steven M.; Davey, Pinakin Gunvant; McCray, Kaydee Lynn; Paronian, Violeta; Seitz, Aaron R.
2016-01-01
Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity. PMID:28006065
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
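As a rough illustration of why a paired analysis is needed, the sketch below logit-transforms hypothetical per-study sensitivity/specificity pairs and summarizes them jointly; a real analysis would fit a bivariate random-effects model that also accounts for within-study binomial error.

```python
# Crude illustration of the bivariate idea: per-study sensitivity and
# specificity are logit-transformed and summarized jointly, exposing their
# (typically negative) correlation. A real analysis would fit a bivariate
# random-effects model (e.g., by GLMM/REML) that also models within-study
# binomial error; the per-study values below are hypothetical.
import numpy as np

logit = lambda p: np.log(p / (1 - p))
inv_logit = lambda x: 1 / (1 + np.exp(-x))

se = np.array([0.90, 0.84, 0.95, 0.78, 0.88])  # hypothetical per-study sensitivity
sp = np.array([0.70, 0.82, 0.62, 0.90, 0.75])  # hypothetical per-study specificity

pairs = np.column_stack([logit(se), logit(sp)])
mean = pairs.mean(axis=0)                       # summary point in logit space
corr = np.corrcoef(pairs.T)[0, 1]               # between-study Se/Sp correlation

print("summary Se/Sp:", inv_logit(mean).round(3), "logit-space corr:", round(corr, 2))
```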
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations, by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced with a sensitivity analysis method that puts more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on several mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
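A minimal sketch of the idea follows, assuming sensitivity enters DDS by biasing the per-variable inclusion probability; the study's exact weighting scheme is not specified here.

```python
# Sketch of DDS with sensitivity-informed variable selection. Assumption: the
# per-variable inclusion probability is biased by normalized sensitivity
# indices; the exact weighting used in the study is not reproduced.
import numpy as np

def dds_sensitivity(f, lb, ub, sens, maxiter=500, r=0.2, rng=np.random.default_rng(1)):
    lb, ub, sens = map(np.asarray, (lb, ub, sens))
    w = sens / sens.sum()                      # normalized sensitivity weights
    x_best = lb + rng.random(lb.size) * (ub - lb)
    f_best = f(x_best)
    for i in range(1, maxiter + 1):
        p = 1.0 - np.log(i) / np.log(maxiter)  # standard DDS inclusion probability
        select = rng.random(lb.size) < p * w * lb.size  # sensitivity-biased inclusion
        if not select.any():
            select[rng.choice(lb.size, p=w)] = True     # weighted fallback pick
        x = x_best.copy()
        x[select] += r * (ub - lb)[select] * rng.standard_normal(select.sum())
        x = np.clip(x, lb, ub)                 # clip to bounds (simplified)
        fx = f(x)
        if fx < f_best:
            x_best, f_best = x, fx
    return x_best, f_best

sphere = lambda x: float(np.sum(x ** 2))
print(dds_sensitivity(sphere, [-5] * 4, [5] * 4, sens=[0.4, 0.3, 0.2, 0.1]))
```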
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Manning, Laurens; Laman, Moses; Rosanas-Urgell, Anna; Turlach, Berwin; Aipit, Susan; Bona, Cathy; Warrell, Jonathan; Siba, Peter; Mueller, Ivo; Davis, Timothy M E
2012-01-01
Although rapid diagnostic tests (RDTs) have practical advantages over light microscopy (LM) and good sensitivity in severe falciparum malaria in Africa, their utility where severe non-falciparum malaria occurs is unknown. LM, RDTs and polymerase chain reaction (PCR)-based methods have limitations, and thus conventional comparative malaria diagnostic studies employ imperfect gold standards. We assessed whether, using Bayesian latent class models (LCMs), which do not require a reference method, RDTs could safely direct initial anti-infective therapy in severely ill children from an area of hyperendemic transmission of both Plasmodium falciparum and P. vivax. We studied 797 Papua New Guinean children hospitalized with well-characterized severe illness for whom LM, RDT and nested PCR (nPCR) results were available. For any severe malaria, the estimated prevalence was 47.5%, with RDTs exhibiting similar sensitivity and negative predictive value (NPV) to nPCR (≥96.0%). LM was the least sensitive test (87.4%) and had the lowest NPV (89.7%), but had the highest specificity (99.1%) and positive predictive value (98.9%). For severe falciparum malaria (prevalence 42.9%), the findings were similar. For non-falciparum severe malaria (prevalence 6.9%), no test had the WHO-recommended sensitivity and specificity of >95% and >90%, respectively. RDTs were the least sensitive (69.6%) and had the lowest NPV (96.7%). RDTs appear a valuable point-of-care test that is at least equivalent to LM in diagnosing severe falciparum malaria in this epidemiologic situation. None of the tests had the required sensitivity/specificity for severe non-falciparum malaria, but the number of false-negative RDTs in this group was small.
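The predictive values quoted above follow directly from sensitivity, specificity, and prevalence via Bayes' rule, as the short check below shows using numbers close to those reported for light microscopy.

```python
# How predictive values follow from sensitivity, specificity and prevalence
# (Bayes' rule); numbers approximate those reported for light microscopy in
# any severe malaria (prevalence 47.5%, Se 87.4%, Sp 99.1%).
def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

ppv, npv = predictive_values(se=0.874, sp=0.991, prev=0.475)
print(round(ppv, 3), round(npv, 3))  # ~0.989 and ~0.897, matching the abstract
```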
An epidermal equivalent assay for identification and ranking potency of contact sensitizers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, Susan, E-mail: S.Gibbs@VUMC.nl; Corsini, Emanuela; Spiekstra, Sander W.
2013-10-15
The purpose of this study was to explore the possibility of combining the epidermal equivalent (EE) potency assay with the assay which assesses release of interleukin-18 (IL-18) to provide a single test for identification and classification of skin sensitizing chemicals, including chemicals of low water solubility or stability. A protocol was developed using different 3D-epidermal models including in house VUMC model, epiCS® (previously EST1000™), MatTek EpiDerm™ and SkinEthic™ RHE and also the impact of different vehicles (acetone:olive oil 4:1, 1% DMSO, ethanol, water) was investigated. Following topical exposure for 24 h to 17 contact allergens and 13 non-sensitizers a robustmore » increase in IL-18 release was observed only after exposure to contact allergens. A putative prediction model is proposed from data obtained from two laboratories yielding 95% accuracy. Correlating the in vitro EE sensitizer potency data, which assesses the chemical concentration which results in 50% cytotoxicity (EE-EC{sub 50}) with human and animal data showed a superior correlation with human DSA{sub 05} (μg/cm{sup 2}) data (Spearman r = 0.8500; P value (two-tailed) = 0.0061) compared to LLNA data (Spearman r = 0.5968; P value (two-tailed) = 0.0542). DSA{sub 05} = induction dose per skin area that produces a positive response in 5% of the tested population Also a good correlation was observed for release of IL-18 (SI-2) into culture supernatants with human DSA{sub 05} data (Spearman r = 0.8333; P value (two-tailed) = 0.0154). This easily transferable human in vitro assay appears to be very promising, but additional testing of a larger chemical set with the different EE models is required to fully evaluate the utility of this assay and to establish a definitive prediction model. - Highlights: • A potential epidermal equivalent assay to label and classify sensitizers • Il-18 release distinguishes sensitizers from non sensitizers • IL-18 release can rank sensitizer potency • EC50 (chemical concentration causing 50% decrease in cell viability) ranks potency • In vitro: human DSA{sub 05} correlation is better than in vitro: LLNA correlation.« less
Choi, William; Tong, Xiuli; Singh, Leher
2017-01-01
This study investigated how Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity among Cantonese children who learned English as a second language (ESL). Five-hundred-and-sixteen second-to-third grade Cantonese ESL children were tested on their Cantonese lexical tone sensitivity, English lexical stress sensitivity, general auditory sensitivity, and working memory. Structural equation modeling revealed that Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity both directly, and indirectly through the mediation of general auditory sensitivity, in which the direct pathway had a larger relative contribution to English lexical stress sensitivity than the indirect pathway. These results suggest that the tone-stress association might be accounted for by joint phonological and acoustic processes that underlie lexical tone and lexical stress perception. PMID:28408898
Acute toxicity prediction to threatened and endangered ...
Evaluating contaminant sensitivity of threatened and endangered (listed) species and protectiveness of chemical regulations often depends on toxicity data for commonly tested surrogate species. The U.S. EPA’s Internet application Web-ICE is a suite of Interspecies Correlation Estimation (ICE) models that can extrapolate species sensitivity to listed taxa using least-squares regressions of the sensitivity of a surrogate species and a predicted taxon (species, genus, or family). Web-ICE was expanded with new models that can predict toxicity to over 250 listed species. A case study was used to assess protectiveness of genus and family model estimates derived from either geometric mean or minimum taxa toxicity values for listed species. Models developed from the most sensitive value for each chemical were generally protective of the most sensitive species within predicted taxa, including listed species, and were more protective than geometric means models. ICE model estimates were compared to HC5 values derived from Species Sensitivity Distributions for the case study chemicals to assess protectiveness of the two approaches. ICE models provide robust toxicity predictions and can generate protective toxicity estimates for assessing contaminant risk to listed species. Reporting on the development and optimization of ICE models for listed species toxicity estimation
Willming, Morgan M; Lilavois, Crystal R; Barron, Mace G; Raimondo, Sandy
2016-10-04
Evaluating contaminant sensitivity of threatened and endangered (listed) species and protectiveness of chemical regulations often depends on toxicity data for commonly tested surrogate species. The U.S. EPA's Internet application Web-ICE is a suite of Interspecies Correlation Estimation (ICE) models that can extrapolate species sensitivity to listed taxa using least-squares regressions of the sensitivity of a surrogate species and a predicted taxon (species, genus, or family). Web-ICE was expanded with new models that can predict toxicity to over 250 listed species. A case study was used to assess protectiveness of genus and family model estimates derived from either geometric mean or minimum taxa toxicity values for listed species. Models developed from the most sensitive value for each chemical were generally protective of the most sensitive species within predicted taxa, including listed species, and were more protective than geometric means models. ICE model estimates were compared to HC5 values derived from Species Sensitivity Distributions for the case study chemicals to assess protectiveness of the two approaches. ICE models provide robust toxicity predictions and can generate protective toxicity estimates for assessing contaminant risk to listed species.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about the model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or input as the most influential parameters or input and 2) how the methods are cohered in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show the coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller parameter or initial condition ranges, the more consistency and coherence between the sensitivity analysis methods results.
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
Alcohol consumption promotes mammary tumor growth and insulin sensitivity
Hong, Jina; Holcomb, Valerie B.; Tekle, Samrawit A.; Fan, Betty; Núñez, Nomelí P.
2010-01-01
Epidemiological data show that in women, alcohol has a beneficial effect by increasing insulin sensitivity but also a deleterious effect by increasing breast cancer risk. These effects have not been shown concurrently in an animal model of breast cancer. Our objective is to identify a mouse model of breast cancer whereby alcohol increases insulin sensitivity and promotes mammary tumorigenesis. Our results from the glucose tolerance test and the homeostasis model assessment show that alcohol consumption improved insulin sensitivity. However, alcohol-consuming mice developed larger mammary tumors and developed them earlier than water-consuming mice. In vitro results showed that alcohol exposure increased the invasiveness of breast cancer cells in a dose-dependent manner. Thus, this animal model, an in vitro model of breast cancer, may be used to elucidate the mechanism(s) by which alcohol affects breast cancer. PMID:20202743
Robust Bounded Influence Tests in Linear Models
1988-11-01
sensitivity analysis and bounded influence estimation. In: Evaluation of Econometric Models, J. Kmenta and J.B. Ramsey (eds.) Academic Press, New York...1R’OBUST bOUNDED INFLUENCE TESTS IN LINEA’ MODELS and( I’homas P. [lettmansperger* Tim [PennsylvanLa State UJniversity A M i0d fix pu111 rsos.p JJ 1 0...November 1988 ROBUST BOUNDED INFLUENCE TESTS IN LINEAR MODELS Marianthi Markatou The University of Iowa and Thomas P. Hettmansperger* The Pennsylvania
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Model based on vegetation ecophysiological process contains many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted parameter sensitivity analysis of BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, with the contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model' s capability of simulating the NPP of L. olgensis forest. Then, Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, the first-order and the second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of simulation result of a single parameter as well as the interaction between the parameters in BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were new stem carbon to new leaf carbon allocation and leaf carbon to nitrogen ratio, the effect of their interaction was significantly greater than the other parameter' teraction effect.
Theurer, M E; White, B J; Larson, R L; Schroeder, T C
2015-03-01
Bovine respiratory disease is an economically important syndrome in the beef industry, and diagnostic accuracy is important for optimal disease management. The objective of this study was to determine whether improving diagnostic sensitivity or specificity was of greater economic value at varied levels of respiratory disease prevalence by using Monte Carlo simulation. Existing literature was used to populate model distributions of published sensitivity, specificity, and performance (ADG, carcass weight, yield grade, quality grade, and mortality risk) differences among calves based on clinical respiratory disease status. Data from multiple cattle feeding operations were used to generate true ranges of respiratory disease prevalence and associated mortality. Input variables were combined into a single model that calculated estimated net returns for animals by diagnostic category (true positive, false positive, false negative, and true negative) based on the prevalence, sensitivity, and specificity for each iteration. Net returns for each diagnostic category were multiplied by the proportion of animals in each diagnostic category to determine group profitability. Apparent prevalence was categorized into low (<15%) and high (≥15%) groups. For both apparent prevalence categories, increasing specificity created more rapid, positive change in net returns than increasing sensitivity. Improvement of diagnostic specificity, perhaps through a confirmatory test interpreted in series or pen-level diagnostics, can increase diagnostic value more than improving sensitivity. Mortality risk was the primary driver for net returns. The results from this study are important for determining future research priorities to analyze diagnostic techniques for bovine respiratory disease and provide a novel way for modeling diagnostic tests.
A Mobile Health Application to Predict Postpartum Depression Based on Machine Learning.
Jiménez-Serrano, Santiago; Tortajada, Salvador; García-Gómez, Juan Miguel
2015-07-01
Postpartum depression (PPD) is a disorder that often goes undiagnosed. The development of a screening program requires considerable and careful effort, where evidence-based decisions have to be taken in order to obtain an effective test with a high level of sensitivity and an acceptable specificity that is quick to perform, easy to interpret, culturally sensitive, and cost-effective. The purpose of this article is twofold: first, to develop classification models for detecting the risk of PPD during the first week after childbirth, thus enabling early intervention; and second, to develop a mobile health (m-health) application (app) for the Android(®) (Google, Mountain View, CA) platform based on the model with best performance for both mothers who have just given birth and clinicians who want to monitor their patient's test. A set of predictive models for estimating the risk of PPD was trained using machine learning techniques and data about postpartum women collected from seven Spanish hospitals. An internal evaluation was carried out using a hold-out strategy. An easy flowchart and architecture for designing the graphical user interface of the m-health app was followed. Naive Bayes showed the best balance between sensitivity and specificity as a predictive model for PPD during the first week after delivery. It was integrated into the clinical decision support system for Android mobile apps. This approach can enable the early prediction and detection of PPD because it fulfills the conditions of an effective screening test with a high level of sensitivity and specificity that is quick to perform, easy to interpret, culturally sensitive, and cost-effective.
Paulmichl, Katharina; Hatunic, Mensud; Højlund, Kurt; Jotic, Aleksandra; Krebs, Michael; Mitrakou, Asimina; Porcellati, Francesca; Tura, Andrea; Bergsten, Peter; Forslund, Anders; Manell, Hannes; Widhalm, Kurt; Weghuber, Daniel; Anderwald, Christian-Heinz
2016-09-01
The triglyceride-to-HDL cholesterol (TG/HDL-C) ratio was introduced as a tool to estimate insulin resistance, because circulating lipid measurements are available in routine settings. Insulin, C-peptide, and free fatty acids are components of other insulin-sensitivity indices but their measurement is expensive. Easier and more affordable tools are of interest for both pediatric and adult patients. Study participants from the Relationship Between Insulin Sensitivity and Cardiovascular Disease [43.9 (8.3) years, n = 1260] as well as the Beta-Cell Function in Juvenile Diabetes and Obesity study cohorts [15 (1.9) years, n = 29] underwent oral-glucose-tolerance tests and euglycemic clamp tests for estimation of whole-body insulin sensitivity and calculation of insulin sensitivity indices. To refine the TG/HDL ratio, mathematical modeling was applied including body mass index (BMI), fasting TG, and HDL cholesterol and compared to the clamp-derived M-value as an estimate of insulin sensitivity. Each modeling result was scored by identifying insulin resistance and correlation coefficient. The Single Point Insulin Sensitivity Estimator (SPISE) was compared to traditional insulin sensitivity indices using area under the ROC curve (aROC) analysis and χ(2) test. The novel formula for SPISE was computed as follows: SPISE = 600 × HDL-C(0.185)/(TG(0.2) × BMI(1.338)), with fasting HDL-C (mg/dL), fasting TG concentrations (mg/dL), and BMI (kg/m(2)). A cutoff value of 6.61 corresponds to an M-value smaller than 4.7 mg · kg(-1) · min(-1) (aROC, M:0.797). SPISE showed a significantly better aROC than the TG/HDL-C ratio. SPISE aROC was comparable to the Matsuda ISI (insulin sensitivity index) and equal to the QUICKI (quantitative insulin sensitivity check index) and HOMA-IR (homeostasis model assessment-insulin resistance) when calculated with M-values. The SPISE seems well suited to surrogate whole-body insulin sensitivity from inexpensive fasting single-point blood draw and BMI in white adolescents and adults. © 2016 American Association for Clinical Chemistry.
Suvorov, Anton; Jensen, Nicholas O; Sharkey, Camilla R; Fujimoto, M Stanley; Bodily, Paul; Wightman, Haley M Cahill; Ogden, T Heath; Clement, Mark J; Bybee, Seth M
2017-03-01
Gene duplication plays a central role in adaptation to novel environments by providing new genetic material for functional divergence and evolution of biological complexity. Several evolutionary models have been proposed for gene duplication to explain how new gene copies are preserved by natural selection, but these models have rarely been tested using empirical data. Opsin proteins, when combined with a chromophore, form a photopigment that is responsible for the absorption of light, the first step in the phototransduction cascade. Adaptive gene duplications have occurred many times within the animal opsins' gene family, leading to novel wavelength sensitivities. Consequently, opsins are an attractive choice for the study of gene duplication evolutionary models. Odonata (dragonflies and damselflies) have the largest opsin repertoire of any insect currently known. Additionally, there is tremendous variation in opsin copy number between species, particularly in the long-wavelength-sensitive (LWS) class. Using comprehensive phylotranscriptomic and statistical approaches, we tested various evolutionary models of gene duplication. Our results suggest that both the blue-sensitive (BS) and LWS opsin classes were subjected to strong positive selection that greatly weakens after multiple duplication events, a pattern that is consistent with the permanent heterozygote model. Due to the immense interspecific variation and duplicability potential of opsin genes among odonates, they represent a unique model system to test hypotheses regarding opsin gene duplication and diversification at the molecular level. © 2016 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Kauwe, M. G.; Zhou, S. -X.; Medlyn, B. E.
Future climate change has the potential to increase drought in many regions of the globe, making it essential that land surface models (LSMs) used in coupled climate models realistically capture the drought responses of vegetation. Recent data syntheses show that drought sensitivity varies considerably among plants from different climate zones, but state-of-the-art LSMs currently assume the same drought sensitivity for all vegetation. We tested whether variable drought sensitivities are needed to explain the observed large-scale patterns of drought impact on the carbon, water and energy fluxes. We implemented data-driven drought sensitivities in the Community Atmosphere Biosphere Land Exchange (CABLE) LSMmore » and evaluated alternative sensitivities across a latitudinal gradient in Europe during the 2003 heatwave. The model predicted an overly abrupt onset of drought unless average soil water potential was calculated with dynamic weighting across soil layers. We found that high drought sensitivity at the most mesic sites, and low drought sensitivity at the most xeric sites, was necessary to accurately model responses during drought. Furthermore, our results indicate that LSMs will over-estimate drought impacts in drier climates unless different sensitivity of vegetation to drought is taken into account.« less
De Kauwe, M. G.; Zhou, S. -X.; Medlyn, B. E.; ...
2015-12-21
Future climate change has the potential to increase drought in many regions of the globe, making it essential that land surface models (LSMs) used in coupled climate models realistically capture the drought responses of vegetation. Recent data syntheses show that drought sensitivity varies considerably among plants from different climate zones, but state-of-the-art LSMs currently assume the same drought sensitivity for all vegetation. We tested whether variable drought sensitivities are needed to explain the observed large-scale patterns of drought impact on the carbon, water and energy fluxes. We implemented data-driven drought sensitivities in the Community Atmosphere Biosphere Land Exchange (CABLE) LSMmore » and evaluated alternative sensitivities across a latitudinal gradient in Europe during the 2003 heatwave. The model predicted an overly abrupt onset of drought unless average soil water potential was calculated with dynamic weighting across soil layers. We found that high drought sensitivity at the most mesic sites, and low drought sensitivity at the most xeric sites, was necessary to accurately model responses during drought. Furthermore, our results indicate that LSMs will over-estimate drought impacts in drier climates unless different sensitivity of vegetation to drought is taken into account.« less
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.
Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig
2015-07-01
We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.
Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John
2012-01-01
BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
ERIC Educational Resources Information Center
Burton, Emily; Stice, Eric; Seeley, John R.
2004-01-01
The stress-buffering model posits that social support mitigates the relation between negative life events and onset of depression, but prospective studies have provided little support for this assertion. The authors sought to provide a more sensitive test of this model by addressing certain methodological and statistical limitations of past…
Cobelli, Claudio; Dalla Man, Chiara; Toffolo, Gianna; Basu, Rita; Vella, Adrian; Rizza, Robert
2014-01-01
The simultaneous assessment of insulin action, secretion, and hepatic extraction is key to understanding postprandial glucose metabolism in nondiabetic and diabetic humans. We review the oral minimal method (i.e., models that allow the estimation of insulin sensitivity, β-cell responsivity, and hepatic insulin extraction from a mixed-meal or an oral glucose tolerance test). Both of these oral tests are more physiologic and simpler to administer than those based on an intravenous test (e.g., a glucose clamp or an intravenous glucose tolerance test). The focus of this review is on indices provided by physiological-based models and their validation against the glucose clamp technique. We discuss first the oral minimal model method rationale, data, and protocols. Then we present the three minimal models and the indices they provide. The disposition index paradigm, a widely used β-cell function metric, is revisited in the context of individual versus population modeling. Adding a glucose tracer to the oral dose significantly enhances the assessment of insulin action by segregating insulin sensitivity into its glucose disposal and hepatic components. The oral minimal model method, by quantitatively portraying the complex relationships between the major players of glucose metabolism, is able to provide novel insights regarding the regulation of postprandial metabolism. PMID:24651807
Evaluation of Four Diagnostic Tests for Insulin Dysregulation in Adult Light-Breed Horses.
Dunbar, L K; Mielnicki, K A; Dembek, K A; Toribio, R E; Burns, T A
2016-05-01
Several tests have been evaluated in horses for quantifying insulin dysregulation to support a diagnosis of equine metabolic syndrome. Comparing the performance of these tests in the same horses will provide clarification of their accuracy in the diagnosis of equine insulin dysregulation. The aim of this study was to evaluate the agreement between basal serum insulin concentrations (BIC), the oral sugar test (OST), the combined glucose-insulin test (CGIT), and the frequently sampled insulin-modified intravenous glucose tolerance test (FSIGTT). Twelve healthy, light-breed horses. Randomized, prospective study. Each of the above tests was performed on 12 horses. Minimal model analysis of the FSIGTT was considered the reference standard and classified 7 horses as insulin resistant (IR) and 5 as insulin sensitive (IS). In contrast, BIC and OST assessment using conventional cut-off values classified all horses as IS. Kappa coefficients, measuring agreement among BIC, OST, CGIT, and FSIGTT were poor to fair. Sensitivity of the CGIT (positive phase duration of the glucose curve >45 minutes) was 85.7% and specificity was 40%, whereas CGIT ([insulin]45 >100 μIU/mL) sensitivity and specificity were 28.5% and 100%, respectively. Area under the glucose curve (AUCg0-120 ) was significantly correlated among the OST, CGIT, and FSIGTT, but Bland-Altman method and Lin's concordance coefficient showed a lack of agreement. Current criteria for diagnosis of insulin resistance using BIC and the OST are highly specific but lack sensitivity. The CGIT displayed better sensitivity and specificity, but modifications may be necessary to improve agreement with minimal model analysis. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
Chaparral Model 60 Infrasound Sensor Evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slad, George William; Merchant, Bion J.
2016-03-01
Sandia National Laboratories has tested and evaluated an infrasound sensor, the Model 60 manufactured by Chaparral Physics, a Division of Geophysical Institute of the University of Alaska, Fairbanks. The purpose of the infrasound sensor evaluation was to determine a measured sensitivity, transfer function, power, self-noise, dynamic range, and seismic sensitivity. The Model 60 infrasound sensor is a new sensor developed by Chaparral Physics intended to be a small, rugged sensor used in more flexible application conditions.
Increased risk of horse sensitization in southwestern Iranian horse riders.
Moghtaderi, Mozhgan; Farjadian, Shirin; Hosseini, Zeynab; Raayat, Alireza
2015-01-01
The aim of this study has been to investigate the frequency of sensitization to horse allergens and clinical symptoms in horse riders. A total of 42 horse riders and 50 healthy individuals were examined by means of skin prick tests for a panel of horse and common animal allergens, and pulmonary function tests were done by spirometry. The rate of sensitization to horse allergens was 31% as proven by the skin prick test in horse riders whereas horse sensitization was not seen in the control group. Occupational allergy symptoms were reported by 19 horse riders. Two horse riders with no history of clinical symptoms showed positive skin reactions to horse allergens. To decrease the high risk of occupational sensitization among horse riders, workplace conditions should be improved to reduce the load of airborne horse allergens. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
This poster will present a modeling and mapping assessment of landscape sensitivity to non-point source pollution as applied to a hierarchy of catchment drainages in the Coastal Plain of the state of North Carolina. Analysis of the subsurface residence time of water in shallow a...
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-07-06
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
Modelling survival: exposure pattern, species sensitivity and uncertainty
NASA Astrophysics Data System (ADS)
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.
2016-07-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
While sensitivity of model species to common toxicants has been addressed, a systematic analysis of inter-species variability for different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences ...
Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.
Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y
2018-04-01
We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (VF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted- R 2 values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted- R 2 values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably-correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.
Statistical sensitivity analysis of a simple nuclear waste repository model
NASA Astrophysics Data System (ADS)
Ronen, Y.; Lucius, J. L.; Blow, E. M.
1980-06-01
A preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of near field temperature distribution in a single level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were thermal loading, distance between the spent fuel canisters and their radius. Other important parameters were those related to salt properties at a point of interest in the repository.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
Peters, Adam; Lofts, Stephen; Merrington, Graham; Brown, Bruce; Stubblefield, William; Harlow, Keven
2011-11-01
Ecotoxicity tests were performed with fish, invertebrates, and algae to investigate the effect of water quality parameters on Mn toxicity. Models were developed to describe the effects of Mn as a function of water quality. Calcium (Ca) has a protective effect on Mn toxicity for both fish and invertebrates, and magnesium (Mg) also provides a protective effect for invertebrates. Protons have a protective effect on Mn toxicity to algae. The models derived are consistent with models of the toxicity of other metals to aquatic organisms in that divalent cations can act as competitors to Mn toxicity in fish and invertebrates, and protons act as competitors to Mn toxicity in algae. The selected models are able to predict Mn toxicity to the test organisms to within a factor of 2 in most cases. Under low-pH conditions invertebrates are the most sensitive taxa, and under high-pH conditions algae are most sensitive. The point at which algae become more sensitive than invertebrates depends on the Ca concentration and occurs at higher pH when Ca concentrations are low, because of the sensitivity of invertebrates under these conditions. Dissolved organic carbon concentrations have very little effect on the toxicity of Mn to aquatic organisms. Copyright © 2011 SETAC.
Parameter regionalization of a monthly water balance model for the conterminous United States
NASA Astrophysics Data System (ADS)
Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.
2015-09-01
A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.
1993-01-01
The work done on this project was focused on two LAMPF experiments. The MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ …
NASA Technical Reports Server (NTRS)
Zhang, D.; Anthes, R. A.
1982-01-01
A one-dimensional planetary boundary layer (PBL) model is presented and verified using SESAME data from April 10, 1979. The model contains two modules to account for two different regimes of turbulent mixing. Separate parameterizations are made for stable and unstable conditions, with a predictive slab model for surface temperature. Atmospheric variables in the surface layer are calculated with a prognostic model, with moisture included in the coupled surface/PBL modeling. Sensitivity tests are performed for factors such as moisture availability, albedo, surface roughness, and thermal capacity, and a 24 hr simulation is summarized for day and night conditions. The comparison with the SESAME data is made at three-hour intervals, using a time-dependent geostrophic wind. Close agreement was found for daytime conditions but not for the nighttime thermal structure, although turbulence was faithfully predicted. Both geostrophic flow and surface characteristics were shown to have significant effects on the model predictions.
NASA Astrophysics Data System (ADS)
Saksala, Timo
2016-10-01
This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of tensile strength of Laurentian granite.
Validation of insulin sensitivity and secretion indices derived from the liquid meal tolerance test.
Maki, Kevin C; Kelley, Kathleen M; Lawless, Andrea L; Hubacher, Rachel L; Schild, Arianne L; Dicklin, Mary R; Rains, Tia M
2011-06-01
A liquid meal tolerance test (LMTT) has been proposed as a useful alternative to more labor-intensive methods of assessing insulin sensitivity and secretion. This substudy, conducted at the conclusion of a randomized, double-blind crossover trial, compared insulin sensitivity indices from an LMTT (Matsuda insulin sensitivity index [MISI] and LMTT disposition index [LMTT-DI]) with indices derived from minimal model analysis of results from the insulin-modified intravenous glucose tolerance test (IVGTT) (insulin sensitivity index [S(I)] and disposition index [DI]). Participants included men (n = 16) and women (n = 8) without diabetes but with increased abdominal adiposity (waist circumference ≥102 cm and ≥89 cm, respectively) and mean age of 48.9 years. The correlation between S(I) and MISI was 0.776 (P < 0.0001). The respective associations of S(I) and MISI with waist circumference (r = -0.445 and -0.554, both P < 0.05) and body mass index were similar (r = -0.500 and -0.539, P < 0.05). The correlation between DI and LMTT-DI was 0.604 (P = 0.002). These results indicate that indices of insulin sensitivity and secretion derived from the LMTT correlate well with those from the insulin-modified IVGTT with minimal model analysis, suggesting that they may be useful for application in clinical and population studies of glucose homeostasis.
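For orientation, the Matsuda index is computed from fasting and mean post-load glucose and insulin; a sketch follows (glucose in mg/dL, insulin in μU/mL). The product-form disposition index shown is the usual generic pairing of sensitivity with a secretion measure and is an assumption here, since the paper's exact LMTT-DI formula is not reproduced in the abstract:

```python
import numpy as np

def matsuda_isi(glucose, insulin):
    """Matsuda-DeFronzo ISI = 10000 / sqrt(G0 * I0 * Gmean * Imean).
    glucose in mg/dL, insulin in uU/mL; index 0 is the fasting sample."""
    g, i = np.asarray(glucose, float), np.asarray(insulin, float)
    return 10000.0 / np.sqrt(g[0] * i[0] * g.mean() * i.mean())

# Hypothetical liquid-meal profile at 0, 30, 60, 90, 120 min
glucose = [95, 150, 140, 120, 105]
insulin = [8, 60, 75, 50, 30]
isi = matsuda_isi(glucose, insulin)
secretion = 55.0                 # stand-in insulin-secretion measure
print(isi, isi * secretion)      # sensitivity index and a product-form disposition index
```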
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
Salinas, María; Flores, Emilio; López-Garrigós, Maite; Díaz, Elena; Esteban, Patricia; Leiva-Salinas, Carlos
2017-01-01
To apply a continual improvement model to develop an algorithm for ordering laboratory tests to diagnose acute pancreatitis in a hospital emergency department. Quasi-experimental study using the continual improvement model (plan, do, check, adjust cycles) in 2 consecutive phases in emergency patients: amylase and lipase results were used to diagnose acute pancreatitis in the first phase; in the second, only lipase level was first determined; amylase testing was then ordered only if the lipase level fell within a certain range. We collected demographic data, the number of amylase and lipase tests ordered and their results, final diagnosis, and the results of a questionnaire to evaluate satisfaction with emergency care. The first phase included 517 patients, of whom 20 had acute pancreatitis. For amylase testing, sensitivity was 0.70; specificity, 0.85; positive likelihood ratio (LR+), 17; and negative likelihood ratio (LR-), 0.31. For lipase testing these values were sensitivity, 0.85; specificity, 0.96; LR+, 21; and LR-, 0.16. When both tests were done, sensitivity was 0.85; specificity, 0.99; LR+, 85; and LR-, 0.15. The second phase included data for 4815 patients, 118 of whom had acute pancreatitis. The measures of diagnostic yield for the new algorithm were sensitivity, 0.92; specificity, 0.98; LR+, 46; and LR-, 0.08. This study demonstrates a process for developing a protocol to guide laboratory testing in acute pancreatitis in the hospital emergency department. The proposed sequence of testing for pancreatic enzyme levels can be effective for diagnosing acute pancreatitis in patients with abdominal pain.
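Note that the ratios above (e.g., 21 and 0.16 for lipase) are consistent with positive and negative likelihood ratios rather than predictive values, which is how they are labelled here. A minimal sketch of all of these accuracy measures computed from a 2x2 table, with illustrative counts roughly matching lipase alone (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, likelihood ratios and predictive values from a 2x2 table."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return {
        "sensitivity": se,
        "specificity": sp,
        "LR+": se / (1 - sp),        # how much a positive result raises the odds of disease
        "LR-": (1 - se) / sp,        # how much a negative result lowers them
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Illustrative counts giving sensitivity 0.85 and specificity ~0.96
print(diagnostic_metrics(tp=17, fp=20, fn=3, tn=477))
```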
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
Wang, Haili; Tso, Victor; Wong, Clarence; Sadowski, Dan; Fedorak, Richard N
2014-03-20
Adenomatous polyps are precursors of colorectal cancer; their detection and removal is the goal of colon cancer screening programs. However, fecal-based methods identify patients with adenomatous polyps with low levels of sensitivity. The aim of this study was to develop a highly accurate, prototypic, proof-of-concept, spot urine-based diagnostic test using metabolomic technology to distinguish persons with adenomatous polyps from those without polyps. Prospective urine and stool samples were collected from 876 participants undergoing colonoscopy examination in a colon cancer screening program, from April 2008 to October 2009 at the University of Alberta. The colonoscopy reference standard identified 633 participants with no colonic polyps and 243 with colonic adenomatous polyps. One-dimensional nuclear magnetic resonance spectra of urine metabolites were analyzed to define a diagnostic metabolomic profile for colonic adenomas. A urine metabolomic diagnostic test for colonic adenomatous polyps was established using 67% of the samples (un-blinded training set) and validated using the other 33% of the samples (blinded testing set). The urine metabolomic diagnostic test's specificity and sensitivity were compared with those of fecal-based tests. Using a two-component, orthogonal, partial least-squares model of the metabolomic profile, the un-blinded training set identified patients with colonic adenomatous polyps with 88.9% sensitivity and 50.2% specificity. Validation using the blinded testing set confirmed sensitivity and specificity values of 82.7% and 51.2%, respectively. Sensitivities of fecal-based tests to identify colonic adenomas ranged from 2.5 to 11.9%. We describe a proof-of-concept spot urine-based metabolomic diagnostic test that identifies patients with colonic adenomatous polyps with a greater level of sensitivity (83%) than fecal-based tests.
Arkusz, Joanna; Stępnik, Maciej; Sobala, Wojciech; Dastych, Jarosław
2010-11-10
The aim of this study was to find differentially regulated genes in THP-1 monocytic cells exposed to sensitizers and nonsensitizers and to investigate if such genes could be reliable markers for an in vitro predictive method for the identification of skin sensitizing chemicals. Changes in expression of 35 genes in the THP-1 cell line following treatment with chemicals of different sensitizing potential (from nonsensitizers to extreme sensitizers) were assessed using real-time PCR. Verification of 13 candidate genes by testing a large number of chemicals (an additional 22 sensitizers and 8 nonsensitizers) revealed that prediction of contact sensitization potential was possible based on evaluation of changes in three genes: IL8, HMOX1 and PAIMP1. In total, changes in expression of these genes allowed correct detection of sensitization potential of 21 out of 27 (78%) test sensitizers. The gene expression levels inside potency groups varied and did not allow estimation of sensitization potency of test chemicals. Results of this study indicate that evaluation of changes in expression of proposed biomarkers in THP-1 cells could be a valuable model for preliminary screening of chemicals to discriminate an appreciable majority of sensitizers from nonsensitizers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Sensitivity of a Simulated Derecho Event to Model Initial Conditions
NASA Astrophysics Data System (ADS)
Wang, Wei
2014-05-01
Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.
The role of gas heat pumps in electric DSM
NASA Astrophysics Data System (ADS)
Fulmer, M.; Hughes, P. J.
1993-05-01
Natural gas-fired heat pumps (GHPs), an emerging technology, may offer environmental, economic, and energy benefits relative to standard and advanced space conditioning equipment now on the market. This paper describes an analysis of GHPs for residential space heating and cooling relative to major competing technologies under an Integrated Resource Planning (IRP) framework. Our study models a hypothetical GHP rebate program using conditions typical of the Great Lakes region. The analysis is performed for a base scenario with sensitivity cases. In the base scenario, the GHP program is cost-effective according to the societal test, total resource cost test (TRC), and the participant test, but is not cost-effective according to the non-participant test. The sensitivity analyses indicate that the results for the TRC test are most sensitive to the season in which electric demand peaks and the technology against which the GHPs are competing, and are less sensitive to changes in the marginal administrative costs. The modeled GHP program would save 900 million kWh over the life of the program and reduce peak load by about 100 MW in winter and about 135 MW in summer. We estimate all of the GHPs in service (both GHPs of program participants and nonparticipants) in the case study region would save 1,900 million kWh and reduce summer peak load by over 350 MW.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
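For orientation, the FAST decomposition described here is available in standard tooling; a minimal sketch using the SALib package's extended-FAST sampler and analyzer on the Ishigami test function (the library, function names and test function are SALib's, not the authors'):

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

# Periodic (search-curve) sampling, then Fourier decomposition of the output variance
X = fast_sampler.sample(problem, 1000)
Y = Ishigami.evaluate(X)
Si = fast.analyze(problem, Y)
print(Si["S1"])  # first-order (main-effect) indices
print(Si["ST"])  # total-order indices, which fold in interaction effects
```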
6DOF Testing of the SLS Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Geohagan, Kevin; Bernard, Bill; Oliver, T. Emerson; Leggett, Jared; Strickland, Dennis
2018-01-01
The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). Because the navigation architecture for the SLS Block 1 vehicle is a purely inertial system, the accuracy of the achieved orbit relative to mission requirements is very sensitive to initial alignment accuracy. The assessment of this sensitivity and many others via simulation is a part of the SLS Model-Based Design and Model-Based Requirements approach. As part of this approach, 6DOF Monte Carlo simulation is used in large part to develop and demonstrate verification of program requirements. To facilitate this and the GN&C flight software design process, an SLS-Program-controlled Design Math Model (DMM) of the SLS INS was developed by the SLS Navigation Team. The SLS INS model implements all of the key functions of the hardware, namely GCA, inertial navigation, and FDIR (Fault Detection, Isolation, and Recovery), in support of SLS GN&C design requirements verification. Despite the strong sensitivity to initial alignment, GCA accuracy requirements were not verified by test due to program cost and schedule constraints. Instead, the system relies upon assessments performed using the SLS INS model. In order to verify SLS program requirements by analysis, the SLS INS model is verified and validated against flight hardware. In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper will detail dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6DOF motion platform was used to produce 6DOF pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan variance analysis. Test setup, execution, and data analysis will be discussed, including analysis performed in support of SLS INS model validation.
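As a sketch of the Allan variance analysis mentioned for the long-duration static test, here is a generic overlapping Allan deviation routine in Python; the data are simulated gyro noise, not SLS INS telemetry:

```python
import numpy as np

def allan_deviation(rate, dt, m_list):
    """Overlapping Allan deviation of a rate signal sampled every dt seconds."""
    theta = np.cumsum(rate) * dt                 # integrate rate to angle
    n = theta.size
    taus, adevs = [], []
    for m in m_list:
        if 2 * m >= n:
            break
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d**2) / (2.0 * (m * dt)**2 * (n - 2 * m))
        taus.append(m * dt)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# Simulated gyro rate: white noise (angle random walk) plus a small constant bias
rng = np.random.default_rng(1)
dt = 0.01
rate = 0.002 + 0.05 * rng.standard_normal(200_000)
m_list = np.unique(np.logspace(0, 4, 40).astype(int))
taus, adevs = allan_deviation(rate, dt, m_list)
# On a log-log plot, the -1/2 slope region identifies the angle-random-walk coefficient
```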
Barns, Gareth L; Thornton, Steven F; Wilson, Ryan D
2015-01-01
Heterogeneity in aquifer permeability, which creates paths of varying mass flux and spatially complex contaminant plumes, can complicate the interpretation of contaminant fate and transport in groundwater. Identifying the location of high mass flux paths is critical for the reliable estimation of solute transport parameters and design of groundwater remediation schemes. Dipole flow tracer tests (DFTTs) and push-pull tests (PPTs) are single well forced-gradient tests which have been used at field-scale to estimate aquifer hydraulic and transport properties. In this study, the potential for PPTs and DFTTs to resolve the location of layered high- and low-permeability layers in granular porous media was investigated with a pseudo 2-D bench-scale aquifer model. Finite element fate and transport modelling was also undertaken to identify appropriate set-ups for in situ tests to determine the type, magnitude, location and extent of such layered permeability contrasts at the field-scale. The characteristics of flow patterns created during experiments were evaluated using fluorescent dye imaging and compared with the breakthrough behaviour of an inorganic conservative tracer. The experimental results show that tracer breakthrough during PPTs is not sensitive to minor permeability contrasts for conditions where there is no hydraulic gradient. In contrast, DFTTs are sensitive to the type and location of permeability contrasts in the host media and could potentially be used to establish the presence and location of high or low mass flux paths. Numerical modelling shows that the tracer peak breakthrough time and concentration in a DFTT is sensitive to the magnitude of the permeability contrast (defined as the permeability of the layer over the permeability of the bulk media) between 0.01 and 20. DFTTs are shown to be more sensitive for deducing variations in the contrast, location and size of aquifer layered permeability contrasts when a shorter central packer is used. However, larger packer sizes are more likely to be practical for field-scale applications, with fewer tests required to characterise a given aquifer section. The sensitivity of DFTTs to identify layered permeability contrasts was not affected by test flow rate. Copyright © 2014 Elsevier B.V. All rights reserved.
Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India
NASA Astrophysics Data System (ADS)
Kumar, M.; Raghubanshi, A. S.
2011-08-01
A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.
Patlewicz, Grace Y; Basketter, David A; Pease, Camilla K Smith; Wilson, Karen; Wright, Zoe M; Roberts, David W; Bernard, Guillaume; Arnau, Elena Giménez; Lepoittevin, Jean-Pierre
2004-02-01
Fragrance substances represent a very diverse group of chemicals; a proportion of them are associated with the ability to cause allergic reactions in the skin. Efforts to find substitute materials are hindered by the need to undertake animal testing for determining both skin sensitization hazard and potency. One strategy to avoid such testing is through an understanding of the relationships between chemical structure and skin sensitization, so-called structure-activity relationships. In recent work, we evaluated 2 groups of fragrance chemicals -- saturated aldehydes and alpha,beta-unsaturated aldehydes. Simple quantitative structure-activity relationship (QSAR) models relating the EC3 values [derived from the local lymph node assay (LLNA)] to physicochemical properties were developed for both sets of aldehydes. In the current study, we evaluated an additional group of carbonyl-containing compounds to test the predictive power of the developed QSARs and to extend their scope. The QSAR models were used to predict EC3 values of 10 newly selected compounds. Local lymph node assay data generated for these compounds demonstrated that the original QSARs were fairly accurate, but still required improvement. Development of these QSAR models has provided us with a better understanding of the potential mechanisms of action for aldehydes, and hence how to avoid or limit allergy. Knowledge generated from this work is being incorporated into new/improved rules for sensitization in the expert toxicity prediction system, deductive estimation of risk from existing knowledge (DEREK).
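By way of illustration, aldehyde sensitization QSARs of this type typically regress potency (e.g., pEC3 from the LLNA) on a reactivity parameter and hydrophobicity. The sketch below fits such a two-descriptor model by ordinary least squares; the descriptor values, responses and resulting coefficients are synthetic placeholders, not the published models:

```python
import numpy as np

# Synthetic training set: columns are a reactivity descriptor (e.g. Taft sigma*)
# and hydrophobicity (log P); y is pEC3 (higher = more potent sensitizer)
X = np.array([[0.60, 1.2], [0.75, 2.1], [0.50, 3.0], [0.90, 1.8], [0.65, 2.6]])
y = np.array([1.1, 1.6, 1.4, 1.9, 1.7])

# Ordinary least squares with an intercept column
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("pEC3 ~ %.2f*sigma + %.2f*logP + %.2f" % tuple(coef))

# Predict a new (hypothetical) aldehyde's potency from its descriptors
x_new = np.array([0.70, 2.0, 1.0])
print("predicted pEC3:", x_new @ coef)
```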
How Victim Sensitivity leads to Uncooperative Behavior via Expectancies of Injustice
Maltese, Simona; Baumert, Anna; Schmitt, Manfred J.; MacLeod, Colin
2016-01-01
According to the Sensitivity-to-mean-intentions model, dispositional victim sensitivity involves a suspicious mindset that is activated by situational cues and guides subsequent information processing and behavior like a schema. Study 1 tested whether victim-sensitive persons are more prone to form expectancies of injustice in ambiguous situations and whether these expectancies mediate the relationship between victim sensitivity and cooperation behavior in a trust game. Results show an indirect effect of victim sensitivity on cooperation after unfair treatment (vs. control condition), mediated by expectancies of injustice. In Study 2 we directly manipulated the tendency to form expectancies of injustice in ambiguous situations to test for causality. Results confirmed that the readiness to expect unjust outcomes led to lower cooperation, compared to a control condition. These findings provide direct evidence that expectancy tendencies are implicated in elevated victim sensitivity and are of theoretical and practical relevance. PMID:26793163
Variation of a test's sensitivity and specificity with disease prevalence.
Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M
2013-08-06
Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to the highest value resulted in a corresponding change in sensitivity or specificity of up to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
Saad, M F; Anderson, R L; Laws, A; Watanabe, R M; Kades, W W; Chen, Y D; Sands, R E; Pei, D; Savage, P J; Bergman, R N
1994-09-01
An insulin-modified frequently sampled intravenous glucose tolerance test (FSIGTT) with minimal model analysis was compared with the glucose clamp in 11 subjects with normal glucose tolerance (NGT), 20 with impaired glucose tolerance (IGT), and 24 with non-insulin-dependent diabetes mellitus (NIDDM). The insulin sensitivity index (SI) was calculated from FSIGTT using 22- and 12-sample protocols (SI(22) and SI(12), respectively). Insulin sensitivity from the clamp was expressed as SI(clamp) and SIP(clamp). Minimal model parameters were similar when calculated with SI(22) and SI(12). SI could not be distinguished from 0 in approximately 50% of diabetic patients with either protocol. SI(22) correlated significantly with SI(clamp) in the whole group (r = 0.62), and in the NGT (r = 0.53), IGT (r = 0.48), and NIDDM (r = 0.41) groups (P < 0.05 for each). SI(12) correlated significantly with SI(clamp) in the whole group (r = 0.55, P < 0.001) and in the NGT (r = 0.53, P = 0.046) and IGT (r = 0.58, P = 0.008) but not NIDDM (r = 0.30, P = 0.085) groups. When SI(22), SI(clamp), and SIP(clamp) were expressed in the same units, SI(22) was 66 +/- 5% (mean +/- SE) and 50 +/- 8% lower than SI(clamp) and SIP(clamp), respectively. Thus, minimal model analysis of the insulin-modified FSIGTT provides estimates of insulin sensitivity that correlate significantly with those from the glucose clamp. The correlation was weaker, however, in NIDDM. The insulin-modified FSIGTT can be used as a simple test for assessment of insulin sensitivity in population studies involving nondiabetic subjects. Additional studies are needed before using this test routinely in patients with NIDDM.
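For context, the Bergman minimal model fitted to FSIGTT data is a pair of differential equations whose parameters yield SI = p3/p2. A sketch of a forward simulation under assumed parameter values and an invented insulin time course follows; fitting to real FSIGTT samples requires nonlinear regression and is not shown:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Illustrative plasma-insulin time course after glucose + insulin injection (uU/mL)
t_ins = np.array([0, 2, 4, 8, 20, 22, 30, 60, 120, 180], float)
i_ins = np.array([10, 90, 60, 40, 20, 80, 50, 25, 12, 10], float)
insulin = interp1d(t_ins, i_ins, kind="linear")

Gb, Ib = 90.0, 10.0            # basal glucose (mg/dL) and insulin (uU/mL)
p1, p2, p3 = 0.03, 0.02, 1e-5  # assumed minimal-model parameters

def minimal_model(t, y):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb           # glucose kinetics
    dX = -p2 * X + p3 * (insulin(t) - Ib)  # remote insulin action
    return [dG, dX]

sol = solve_ivp(minimal_model, (0, 180), [280.0, 0.0], dense_output=True)
print("SI = p3/p2 =", p3 / p2, "(min^-1 per uU/mL)")
```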
Jung, Daun; Che, Jeong-Hwan; Lim, Kyung-Min; Chun, Young-Jin; Heo, Yong; Seok, Seung Hyeok
2016-09-01
In vitro testing methods for classifying sensitizers could be valuable alternatives to in vivo sensitization testing using animal models, such as the murine local lymph node assay (LLNA) and the guinea pig maximization test (GMT), but there remains a need for in vitro methods that are more accurate and simpler to distinguish skin sensitizers from non-sensitizers. Thus, the aim of our study was to establish an in vitro assay as a screening tool for detecting skin sensitizers using the human keratinocyte cell line, HaCaT. HaCaT cells were exposed to 16 relevant skin sensitizers and 6 skin non-sensitizers. The highest dose used was the dose causing 75% cell viability (CV75) that we determined by an MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay. The levels of extracellular production of interleukin-1α (IL-1α) and IL-6 were measured. The sensitivity of IL-1α was 63%, specificity was 83% and accuracy was 68%. In the case of IL-6, sensitivity: 69%, specificity: 83% and accuracy: 73%. Thus, this study suggests that measuring extracellular production of pro-inflammatory cytokines IL-1α and IL-6 by human HaCaT cells may potentially classify skin sensitizers from non-sensitizers. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.
Buisman, Leander R; Luime, Jolanda J; Oppe, Mark; Hazes, Johanna M W; Rutten-van Mölken, Maureen P M H
2016-06-10
There is a lack of information about the sensitivity, specificity and costs that new diagnostic tests should have to improve early diagnosis of rheumatoid arthritis (RA). Our objective was to explore the early cost-effectiveness of various new diagnostic test strategies in the workup of patients with inflammatory arthritis (IA) at risk of having RA. A decision tree followed by a patient-level state transition model, using data from published literature, cohorts and trials, was used to evaluate diagnostic test strategies. Alternative tests were assessed as add-on to or replacement of the ACR/EULAR 2010 RA classification criteria for all patients and for intermediate-risk patients. Tests included B-cell gene expression (sensitivity 0.60, specificity 0.90, costs €150), MRI (sensitivity 0.90, specificity 0.60, costs €756), IL-6 serum level (sensitivity 0.70, specificity 0.53, costs €50) and genetic assay (sensitivity 0.40, specificity 0.85, costs €750). Patients with IA at risk of RA were followed for 5 years using a societal perspective. Guideline treatment was assumed using tight controlled treatment based on DAS28; if patients had a DAS28 >3.2 at 12 months or later, they could be eligible to start biological drugs. The outcome was expressed in incremental cost-effectiveness ratios (2014 € per quality-adjusted life-year (QALY) gained) and headroom. The B-cell test was the least expensive strategy when used as an add-on and as replacement in intermediate-risk patients, making it the dominant strategy, as it has better health outcomes and lower costs. As add-on for all patients, the B-cell test was also the most cost-effective test strategy. When using a willingness-to-pay threshold of €20,000 per QALY gained, the IL-6 and MRI strategies were not cost-effective, except as replacement. A genetic assay was not cost-effective in any strategy. Probabilistic sensitivity analysis revealed that the B-cell test was consistently superior in all strategies. When performing univariate sensitivity analysis for intermediate-risk patients, specificity and DAS28 in the B-cell add-on strategy, and DAS28 and sensitivity in the MRI add-on strategy, had the largest impact on the cost-effectiveness. This early cost-effectiveness analysis indicated that new tests to diagnose RA are most likely to be cost-effective when the tests are used as an add-on in intermediate-risk patients and have high specificity, and the test costs should not be higher than €200-€300.
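The ICER arithmetic underlying these comparisons is simple; a sketch with invented numbers (not the study's results):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio in euros per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical 5-year totals per patient for an add-on test vs. the usual workup
print(icer(cost_new=11_500, qaly_new=3.62, cost_ref=11_200, qaly_ref=3.58))
# 7500 euro/QALY, under a willingness-to-pay threshold of 20,000 euro/QALY
```

A strategy with better outcomes and lower costs, like the B-cell test above, is simply dominant and needs no ratio.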
Barreto, Rafael E; Narváez, Javier; Sepúlveda, Natalia A; Velásquez, Fabián C; Díaz, Sandra C; López, Myriam Consuelo; Reyes, Patricia; Moncada, Ligia I
2017-09-01
Public health programs for the control of soil-transmitted helminthiases require valid diagnostic tests for surveillance and parasitic control evaluation. However, there is currently no agreement about what test should be used as a gold standard for the diagnosis of hookworm infection. Still, in the presence of concurrent data for multiple tests, it is possible to use statistical models to estimate measures of test performance and prevalence. The aim of this study was to estimate the diagnostic accuracy of five parallel tests (direct microscopic examination, Kato-Katz, Harada-Mori, modified Ritchie-Frick, and culture in agar plate) to detect hookworm infections in a sample of school-aged children from a rural area in Colombia. We used both a frequentist approach and Bayesian latent class models to estimate the sensitivity and specificity of the five tests for hookworm detection, and to estimate the prevalence of hookworm infection in the absence of a gold standard. The Kato-Katz and agar plate methods had an overall agreement of 95% and a kappa coefficient of 0.76. Different models estimated a sensitivity between 76% and 92% for the agar plate technique, and 52% to 87% for the Kato-Katz technique. The other tests had lower sensitivity. All tests had specificity between 95% and 98%. The prevalence estimated by the Kato-Katz and agar plate methods for different subpopulations varied between 10% and 14%, and was consistent with the prevalence estimated from the combination of all tests. The Harada-Mori, Ritchie-Frick and direct examination techniques resulted in lower and disparate prevalence estimates. Bayesian approaches assuming imperfect specificity resulted in lower prevalence estimates than the frequentist approach. Copyright © 2017 Elsevier B.V. All rights reserved.
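As a sketch of the frequentist side of such an analysis, here is a latent-class EM estimator for prevalence and per-test sensitivity/specificity under conditional independence; the code is generic and runs on simulated data, not the Colombian field data:

```python
import numpy as np

def latent_class_em(Y, n_iter=500):
    """EM for K imperfect tests with no gold standard.
    Y: (n_subjects, K) binary results. Returns prevalence, sensitivities, specificities."""
    n, K = Y.shape
    prev, se, sp = 0.2, np.full(K, 0.8), np.full(K, 0.9)
    for _ in range(n_iter):
        # E-step: posterior probability each subject is truly infected
        l_pos = prev * np.prod(se**Y * (1 - se)**(1 - Y), axis=1)
        l_neg = (1 - prev) * np.prod((1 - sp)**Y * sp**(1 - Y), axis=1)
        r = l_pos / (l_pos + l_neg)
        # M-step: update parameters from the soft assignments
        prev = r.mean()
        se = (r[:, None] * Y).sum(0) / r.sum()
        sp = ((1 - r)[:, None] * (1 - Y)).sum(0) / (1 - r).sum()
    return prev, se, sp

# Simulate 3 tests on 1000 subjects with 12% true prevalence
rng = np.random.default_rng(2)
true = rng.random(1000) < 0.12
Y = np.where(true[:, None],
             rng.random((1000, 3)) < [0.85, 0.60, 0.75],            # sensitivities
             rng.random((1000, 3)) < [1 - 0.97, 1 - 0.95, 1 - 0.98])  # false-positive rates
print(latent_class_em(Y.astype(float)))
```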
Data book for 12.5-inch diameter SRB thermal model water flotation test; 1.29 psia, series P022
NASA Technical Reports Server (NTRS)
Allums, S. L.
1974-01-01
Data acquired from tests conducted to determine how thermal conditions affect SRB (Space Shuttle Solid Rocket Booster) flotation at a scaled pressure of 1.29 psia are presented. Included are acceleration, pressure, and temperature data recorded from initial water impact to final flotation position using a 12.5-inch diameter thermal model of the SRB. Nineteen valid tests were conducted. These thermal tests indicated the following basic differences relative to the ambient temperature and pressure model tests: (1) more water was taken on board during penetration and (2) model flotation/sinking was temperature sensitive.
Van Norman, Ethan R; Nelson, Peter M; Klingbeil, David A
2017-09-01
Educators need recommendations to improve screening practices without limiting students' instructional opportunities. Repurposing previous years' state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years' state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of 3 types of screening approaches were tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
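A sketch of the gated logic: a student is flagged only if both the previous year's state test and the fall screener flag them, which is why specificity rises while sensitivity falls. The arrays and accuracy values below are illustrative, not the study data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000
not_proficient = rng.random(n) < 0.3   # end-of-year outcome to be predicted
flag_state = np.where(not_proficient, rng.random(n) < 0.80, rng.random(n) < 0.25)
flag_fall = np.where(not_proficient, rng.random(n) < 0.78, rng.random(n) < 0.22)
flag_gated = flag_state & flag_fall    # the second measure confirms the first

def sens_spec(flag, truth):
    return (flag & truth).sum() / truth.sum(), (~flag & ~truth).sum() / (~truth).sum()

for name, f in [("state", flag_state), ("fall", flag_fall), ("gated", flag_gated)]:
    print(name, [round(v, 2) for v in sens_spec(f, not_proficient)])
# The gated row shows higher specificity but lower sensitivity than either measure alone
```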
Tura, A; Mari, A; Prikoszovich, T; Pacini, G; Kautzky-Willer, A
2008-08-01
Women with former gestational diabetes mellitus (fGDM) often show defects in both insulin sensitivity and beta-cell function, but it is not clear which defect plays the major role or which appears first. This might be because fGDM women are often studied as a unique group and not divided according to their glucose tolerance. Different findings might also be the result of using different tests. Our aim was to study insulin sensitivity and beta-cell function with two independent glucose tolerance tests in fGDM women divided according to their glucose tolerance. A total of 108 fGDM women divided into normal glucose tolerance (NGT; N = 82), impaired glucose metabolism (IGM; N = 20) and overt type 2 diabetes (T2DM; N = 6) groups, and 38 healthy control women (CNT), underwent intravenous (IVGTT) and oral glucose tolerance tests (OGTT). Insulin sensitivity and beta-cell function were assessed by both the IVGTT and the OGTT. Both tests revealed impaired insulin sensitivity in the normotolerant group compared to controls (IVGTT: 4.2 +/- 0.3 vs. 5.4 +/- 0.4 10(-4) min(-1) (microU/ml)(-1); OGTT: 440 +/- 7 vs. 472 +/- 9 ml min(-1) m(-2)). Conversely, no difference was found in beta-cell function from the IVGTT. However, some parameters of beta-cell function by OGTT modelling analysis were found to be impaired: glucose sensitivity (106 +/- 5 vs. 124 +/- 7 pmol min(-1) m(-2) mM(-1), P = 0.0407) and insulin secretion at 5 mM glucose (168 +/- 9 vs. 206 +/- 10 pmol min(-1) m(-2), P = 0.003). Both insulin sensitivity and beta-cell function are impaired in normotolerant fGDM women, but the subtle defect in beta-cell function is disclosed only by OGTT modelling analysis.
Prospects for testing Lorentz and CPT symmetry with antiprotons
NASA Astrophysics Data System (ADS)
Vargas, Arnaldo J.
2018-03-01
A brief overview of the prospects of testing Lorentz and CPT symmetry with antimatter experiments is presented. The models discussed are applicable to atomic spectroscopy experiments, Penning-trap experiments and gravitational tests. Comments about the sensitivity of the most recent antimatter experiments to the models reviewed here are included. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.
pH-sensitive niosomes: Effects on cytotoxicity and on inflammation and pain in murine models.
Rinaldi, Federica; Del Favero, Elena; Rondelli, Valeria; Pieretti, Stefano; Bogni, Alessia; Ponti, Jessica; Rossi, François; Di Marzio, Luisa; Paolino, Donatella; Marianecci, Carlotta; Carafa, Maria
2017-12-01
pH-sensitive nonionic surfactant vesicles (niosomes) based on polysorbate-20 (Tween-20) or on polysorbate-20 derivatized with glycine (added as a pH-sensitive agent) were developed to deliver ibuprofen (IBU) and lidocaine (LID). For the physicochemical characterization of the vesicles (mean size, size distribution, zeta potential, vesicle morphology, bilayer properties and stability), dynamic light scattering (DLS), small-angle X-ray scattering and fluorescence studies were performed. Potential cytotoxicity was evaluated on immortalized human keratinocyte cells (HaCaT) and on immortalized mouse fibroblasts (Balb/3T3). In vivo antinociceptive activity (formalin test) and anti-inflammatory activity tests (paw edema induced by zymosan) in murine models were performed on drug-loaded niosomes. The pH-sensitive niosomes were stable in the presence of 0 and 10% fetal bovine serum, were non-cytotoxic, and were able to modify IBU or LID pharmacological activity in vivo. The synthesis of a stimuli-responsive surfactant, as an alternative to adding pH-sensitive molecules to niosomes, could represent a promising delivery strategy for anesthetic and anti-inflammatory drugs.
Aeroallergen sensitization predicts acute chest syndrome in children with sickle cell anaemia.
Willen, Shaina M; Rodeghier, Mark; Strunk, Robert C; Bacharier, Leonard B; Rosen, Carol L; Kirkham, Fenella J; DeBaun, Michael R; Cohen, Robyn T
2018-02-01
Asthma is associated with higher rates of acute chest syndrome (ACS) and vaso-occlusive pain episodes among children with sickle cell anaemia (SCA). Aeroallergen sensitization is a risk factor for asthma. We hypothesized that aeroallergen sensitization is associated with an increased incidence of hospitalizations for ACS and pain. Participants in a multicentre, longitudinal cohort study, aged 4-18 years with SCA, underwent skin prick testing to ten aeroallergens. ACS and pain episodes were collected from birth until the end of the follow-up period. The number of positive skin tests was tested for associations with prospective rates of ACS and pain. Multivariable models demonstrated additive effects of having positive skin tests on future rates of ACS (incidence rate ratio (IRR) for each positive test 1·23, 95% confidence interval [CI] 1·11-1·36, P < 0·001). Aeroallergen sensitization was not associated with future pain (IRR 1·14, 95% CI 0·97-1·33, P = 0·11). Our study demonstrated that children with SCA and aeroallergen sensitization are at increased risk for future ACS. Future research is needed to determine whether identification of specific sensitizations and allergen avoidance and treatment reduce the risk of ACS for children with SCA. © 2018 John Wiley & Sons Ltd.
The Interplay of Maternal Sensitivity and Toddler Engagement of Mother in Predicting Self-Regulation
ERIC Educational Resources Information Center
Ispa, Jean M.; Su-Russell, Chang; Palermo, Francisco; Carlo, Gustavo
2017-01-01
Using data from the Early Head Start Research and Evaluation Project, a cross-lagged mediation model was tested to examine longitudinal relations among low-income mothers' sensitivity; toddlers' engagement of their mothers; and toddlers' self-regulation at ages 1, 2, and 3 years (N = 2,958). Age 1 maternal sensitivity predicted self-regulation at…
Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project
NASA Technical Reports Server (NTRS)
Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.;
2013-01-01
Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random-effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
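A minimal PyMC analogue of such a bivariate model (the paper itself used WinBUGS and R; this sketch assumes per-study 2x2 counts and places a bivariate normal on the logit sensitivity/specificity pairs, with invented data):

```python
import numpy as np
import pymc as pm

# Illustrative per-study counts: diseased (tp out of n_d) and healthy (tn out of n_h)
tp = np.array([20, 15, 30, 8]);  n_d = np.array([25, 20, 35, 10])
tn = np.array([90, 40, 70, 55]); n_h = np.array([100, 50, 80, 60])

with pm.Model():
    mu = pm.Normal("mu", 0.0, 2.0, shape=2)  # mean logit-sensitivity, logit-specificity
    chol, corr, sds = pm.LKJCholeskyCov("chol", n=2, eta=2.0,
                                        sd_dist=pm.Exponential.dist(1.0),
                                        compute_corr=True)
    theta = pm.MvNormal("theta", mu=mu, chol=chol, shape=(len(tp), 2))
    pm.Binomial("tp", n=n_d, p=pm.math.invlogit(theta[:, 0]), observed=tp)
    pm.Binomial("tn", n=n_h, p=pm.math.invlogit(theta[:, 1]), observed=tn)
    # Predictive sensitivity/specificity for a hypothetical new study
    new = pm.MvNormal("new", mu=mu, chol=chol, shape=2)
    pm.Deterministic("pred", pm.math.invlogit(new))
    idata = pm.sample(1000, tune=1000, chains=2)

print(idata.posterior["pred"].mean(dim=("chain", "draw")).values)
```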
Sa-Ngamuang, Chaitawat; Haddawy, Peter; Luvira, Viravarn; Piyaphanee, Watcharapong; Iamsirithaworn, Sopon; Lawpoolsri, Saranath
2018-06-18
Differentiating dengue patients from other acute febrile illness patients is a great challenge among physicians. Several dengue diagnosis methods are recommended by WHO. The application of specific laboratory tests is still limited due to high cost, lack of equipment, and uncertain validity. Therefore, clinical diagnosis remains a common practice especially in resource limited settings. Bayesian networks have been shown to be a useful tool for diagnostic decision support. This study aimed to construct Bayesian network models using basic demographic, clinical, and laboratory profiles of acute febrile illness patients to diagnose dengue. Data of 397 acute undifferentiated febrile illness patients who visited the fever clinic of the Bangkok Hospital for Tropical Diseases, Thailand, were used for model construction and validation. The two best final models were selected: one with and one without NS1 rapid test result. The diagnostic accuracy of the models was compared with that of physicians on the same set of patients. The Bayesian network models provided good diagnostic accuracy of dengue infection, with ROC AUC of 0.80 and 0.75 for models with and without NS1 rapid test result, respectively. The models had approximately 80% specificity and 70% sensitivity, similar to the diagnostic accuracy of the hospital's fellows in infectious disease. Including information on NS1 rapid test improved the specificity, but reduced the sensitivity, both in model and physician diagnoses. The Bayesian network model developed in this study could be useful to assist physicians in diagnosing dengue, particularly in regions where experienced physicians and laboratory confirmation tests are limited.
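A toy sketch of such a diagnostic network using the pgmpy library; the structure, probabilities and variable names are invented for illustration, whereas the study's actual models were learned from its 397-patient data set:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Dengue status influences the NS1 rapid test and the white-blood-cell count
model = BayesianNetwork([("Dengue", "NS1"), ("Dengue", "LowWBC")])
model.add_cpds(
    TabularCPD("Dengue", 2, [[0.7], [0.3]]),  # P(no dengue)=0.7, P(dengue)=0.3
    TabularCPD("NS1", 2, [[0.98, 0.40],       # P(NS1 result | Dengue)
                          [0.02, 0.60]],
               evidence=["Dengue"], evidence_card=[2]),
    TabularCPD("LowWBC", 2, [[0.80, 0.30],
                             [0.20, 0.70]],
               evidence=["Dengue"], evidence_card=[2]),
)
infer = VariableElimination(model)
print(infer.query(["Dengue"], evidence={"NS1": 1, "LowWBC": 1}))
```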
Kim, Chi Hun; Romberg, Carola; Hvoslef-Eide, Martha; Oomen, Charlotte A; Mar, Adam C; Heath, Christopher J; Berthiaume, Andrée-Anne; Bussey, Timothy J; Saksida, Lisa M
2015-11-01
The hippocampus is implicated in many of the cognitive impairments observed in conditions such as Alzheimer's disease (AD) and schizophrenia (SCZ). Often, mice are the species of choice for models of these diseases and for the study of the relationship between brain and behaviour more generally. Thus, automated and efficient hippocampal-sensitive cognitive tests for the mouse are important for developing therapeutic targets for these diseases and understanding brain-behaviour relationships. One promising option is to adapt the touchscreen-based trial-unique nonmatching-to-location (TUNL) task that has been shown to be sensitive to hippocampal dysfunction in the rat. This study aims to adapt the TUNL task for use in mice and to test for hippocampus dependency of the task. TUNL training protocols were altered such that C57BL/6 mice were able to acquire the task. Following acquisition, dysfunction of the dorsal hippocampus (dHp) was induced using a fibre-sparing excitotoxin, and the effects of manipulation of several task parameters were examined. Mice could acquire the TUNL task using training optimised for the mouse (experiment 1). TUNL was found to be sensitive to dHp dysfunction in the mouse (experiments 2, 3 and 4). In addition, we observed that performance of the dHp dysfunction group was somewhat consistently lower when sample locations were presented in the centre of the screen. This study opens up the possibility of testing both mouse and rat models on this flexible and hippocampus-sensitive touchscreen task.
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting the glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine
Yuan, Hua; Huang, Jianping; Cao, Chenzhong
2009-01-01
Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
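A compact sketch of the idea: binary particle swarm optimization selecting a descriptor subset that is scored by cross-validated SVM accuracy. scikit-learn and synthetic data stand in for the authors' Dragon descriptors and setup; the swarm hyperparameters are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=160, n_features=30, n_informative=6, random_state=0)

def fitness(mask):
    # Cross-validated SVM accuracy on the selected descriptor subset
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=5).mean()

n_particles, d, iters = 20, X.shape[1], 30
pos = (rng.random((n_particles, d)) < 0.5).astype(float)  # each particle is a feature mask
vel = rng.normal(0.0, 1.0, (n_particles, d))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
print("cross-validated accuracy:", round(pbest_fit.max(), 3))
```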
The effects of ground hydrology on climate sensitivity to solar constant variations
NASA Technical Reports Server (NTRS)
Chou, S. H.; Curran, R. J.; Ohring, G.
1979-01-01
The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated by using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface. The other is the linear formulation with the Bowen ratio essentially determined by the prescribed linear coefficient.
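For reference, the Bowen ratio is the ratio of sensible to latent heat flux; in the nonlinear formulation it follows from the near-surface temperature and humidity gradients. A sketch with illustrative values (not the model's actual coefficients):

```python
# Bowen ratio from near-surface gradients: B = (cp * dT) / (L * dq)
cp = 1004.0  # specific heat of air at constant pressure, J kg^-1 K^-1
L = 2.5e6    # latent heat of vaporization, J kg^-1

def bowen_ratio(dT, dq):
    """dT: surface-air temperature difference (K); dq: specific-humidity difference (kg/kg)."""
    return (cp * dT) / (L * dq)

B = bowen_ratio(dT=2.0, dq=2.0e-3)  # ~0.40: latent heat flux dominates here
evap_fraction = 1.0 / (1.0 + B)     # share of available energy going to evaporation
print(B, evap_fraction)
```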
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
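The bias-versus-random-error contrast reported above is easy to reproduce in miniature. The sketch below uses a toy degree-day snow scheme, not the Utah Energy Balance model, and perturbs precipitation with a +20% systematic bias versus 20% zero-mean random error, comparing the shift in peak snow water equivalent; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
days = 180
temp = 5 * np.sin(np.linspace(0, 2 * np.pi, days)) - 2  # synthetic air temperature (C)
precip = rng.gamma(0.5, 4.0, size=days)                 # synthetic precipitation (mm)

def peak_swe(temp, precip, ddf=3.0):
    """Toy snow model: accumulate precip as snow when T < 0, melt at ddf*T when T > 0."""
    swe, peak = 0.0, 0.0
    for t in range(days):
        snow = precip[t] if temp[t] < 0 else 0.0
        melt = max(ddf * temp[t], 0.0)
        swe = max(swe + snow - melt, 0.0)
        peak = max(peak, swe)
    return peak

base = peak_swe(temp, precip)
biased = peak_swe(temp, precip * 1.2)   # +20% systematic bias, one deterministic run
noise_runs = [peak_swe(temp, precip * np.clip(rng.normal(1.0, 0.2, days), 0, None))
              for _ in range(2000)]     # 20% zero-mean random error, Monte Carlo
print(f"bias shift in peak SWE : {biased - base:+.1f} mm")
print(f"random-error mean shift: {np.mean(noise_runs) - base:+.1f} mm")
```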
Free Wake Techniques for Rotor Aerodynamic Analysis. Volume 2: Vortex Sheet Models
NASA Technical Reports Server (NTRS)
Tanuwidjaja, A.
1982-01-01
Results of computations are presented using vortex sheets to model the wake and test the sensitivity of the solutions to various assumptions used in the development of the models. The complete code listings are included.
NASA Technical Reports Server (NTRS)
Durston, Donald A.; Kmak, Francis J.
2009-01-01
Multiple sonic boom wind tunnel models were tested in the NASA Ames Research Center 9-by 7-Foot Supersonic Wind Tunnel to reestablish related test techniques in this facility. The goal of the testing was to acquire higher fidelity sonic boom signatures with instrumentation that is significantly more sensitive than that used during previous wind tunnel entries and to compare old and new data from established models. Another objective was to perform tunnel-to-tunnel comparisons of data from a Gulfstream sonic boom model tested at the NASA Langley Research Center 4-foot by 4-foot Unitary Plan Wind Tunnel.
Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.
Parameter regionalization of a monthly water balance model for the conterminous United States
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-01-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
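The Fourier Amplitude Sensitivity Test itself is available in SALib, and a minimal sketch looks like the following; the parameter names, bounds, and proxy response are invented stand-ins for the MWBM, not the study's configuration.

```python
# FAST sensitivity indices for a toy stand-in of a monthly water balance model.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["melt_rate", "soil_capacity", "runoff_coeff"],  # assumed names
    "bounds": [[1.0, 6.0], [50.0, 400.0], [0.1, 0.9]],
}

def mwbm_proxy(p):
    melt, cap, rc = p
    # toy response standing in for simulated monthly runoff
    return rc * (100 + 10 * melt) * np.exp(-cap / 300.0)

X = fast_sampler.sample(problem, 65)   # FAST needs >= 4*M^2 + 1 samples per factor
Y = np.apply_along_axis(mwbm_proxy, 1, X)
Si = fast.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total-order = {st:.2f}")
```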
Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali
2016-07-01
A high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of a novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. Budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base-case) predicted $509 cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would be 0.451 to attain cost neutrality. The budget impact analysis predicted a cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
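The budget-impact arithmetic in this abstract can be checked directly; the snippet below simply recomputes the per-member-per-month figure from the quoted annual savings and the stated plan size of 1 million covered lives.

```python
members = 1_000_000
annual_savings = 3_634_006               # quoted budget impact, USD per year
pmpm = annual_savings / members / 12
print(f"savings per member per month: ${pmpm:.2f}")  # ~$0.30, matching the abstract

per_patient_savings = 509                # base-case cost-minimization result, USD
breakeven_pretest_probability = 0.451    # quoted threshold for cost neutrality
```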
Modeling Interaction of a Tropical Cyclone with Its Cold Wake
2014-09-01
Sensitivity tests compare the SST response for the complete (solid) pressure field with two tests that omit TC pressure effects. The pressure-field shape below the eyewall region forces a dynamic response that tends to offset the negative feedback effect of reduced enthalpy flux.
J.A. O' Donnell; J.W. Harden; A.D. McGuire; V.E. Romanovsky
2011-01-01
In the boreal region, soil organic carbon (OC) dynamics are strongly governed by the interaction between wildfire and permafrost. Using a combination of field measurements, numerical modeling of soil thermal dynamics, and mass-balance modeling of OC dynamics, we tested the sensitivity of soil OC storage to a suite of individual climate factors (air temperature, soil...
Enhancing Sensitivity to Visual Motion.
1980-05-01
For certain amblyopes, repeated testing enhanced sensitivity several fold. Amblyopia refers to any of a class of diseases in which there is a loss in... See SEKULER, 1980 for a full treatment of these models. The predictions for the Simultaneous and Random conditions from the different models are...
Langley 16- Ft. Transonic Tunnel Pressure Sensitive Paint System
NASA Technical Reports Server (NTRS)
Sprinkle, Danny R.; Obara, Clifford J.; Amer, Tahani R.; Leighty, Bradley D.; Carmine, Michael T.; Sealey, Bradley S.; Burkett, Cecil G.
2001-01-01
This report describes the NASA Langley 16-Ft. Transonic Tunnel Pressure Sensitive Paint (PSP) System and presents results of a test conducted June 22-23, 2000 in the tunnel to validate the PSP system. The PSP system provides global surface pressure measurements on wind tunnel models. The system was developed and installed by PSP Team personnel of the Instrumentation Systems Development Branch and the Advanced Measurement and Diagnostics Branch. A description of the system and of the test is followed by a discussion of the validation results.
Modelling survival: exposure pattern, species sensitivity and uncertainty
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; Van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans. PMID:27381500
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
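A minimal Sobol' decomposition with first- and second-order indices can be set up with SALib as below; the three inputs and the toy colony-response function are assumptions echoing the abstract, not VarroaPop itself.

```python
# Sobol' variance decomposition with first- and second-order indices (sketch).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
    "bounds": [[0.0, 1.0]] * 3,
}

X = saltelli.sample(problem, 1024, calc_second_order=True)

def colony_size_proxy(p):
    q, f, tox = p
    return q * f + 0.5 * q * tox   # interaction terms yield nonzero S2 entries

Y = np.apply_along_axis(colony_size_proxy, 1, X)
Si = sobol.analyze(problem, Y, calc_second_order=True)
print("S1:", np.round(Si["S1"], 2))
print("S2 (pairwise interactions):\n", np.round(Si["S2"], 2))
```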
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
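A rough feel for the variogram idea behind VARS: the directional variogram of the model response along an input grows faster for influential factors, and it does so across a range of perturbation scales h. The sketch below is a didactic simplification on a two-input toy function, not the published VARS algorithm.

```python
# Directional variograms of a model response as a crude sensitivity screen.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    return np.sin(3 * x[..., 0]) + 0.1 * x[..., 1]  # input 0 dominates

def directional_variogram(f, dim, h, n=5000, d=2):
    """gamma(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2] over random base points x."""
    x = rng.random((n, d))
    x2 = x.copy()
    x2[:, dim] = np.clip(x2[:, dim] + h, 0, 1)
    return 0.5 * np.mean((f(x2) - f(x)) ** 2)

for dim in range(2):
    gammas = [directional_variogram(model, dim, h) for h in (0.05, 0.1, 0.3)]
    print(f"input {dim}: gamma(h) =", np.round(gammas, 4))
```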
Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code
NASA Astrophysics Data System (ADS)
Sabotinov, Luben; Chevrier, Patrick
The French best-estimate thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), four separate loops, the pressurizer, the horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations were performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters, such as pressures, temperatures, void fractions, and loop seal clearance. The experimental and calculated results are very sensitive to the fuel cladding temperature, which shows a periodic nature. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably predicted.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire, and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2, and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test, and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave type of sound source were measured over a range from 10 to 160 wavelengths for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at more remote microphones. Sensitivity studies for the hard surface indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
Natsch, Andreas; Emter, Roger; Ellis, Graham
2009-01-01
Tests for skin sensitization are required prior to the market launch of new cosmetic ingredients. Significant efforts are made to replace the current animal tests. It is widely recognized that this cannot be accomplished with a single in vitro test, but that rather the integration of results from different in vitro and in silico assays will be needed for the prediction of the skin sensitization potential of chemicals. This has been proposed as a theoretical scheme so far, but no attempts have been made to use experimental data to prove the validity of this concept. Here we thus try for the first time to fill this widely cited concept with data. To this aim, we integrate and report both novel and literature data on 116 chemicals of known skin sensitization potential on the following parameters: (1) peptide reactivity as a surrogate for protein binding, (2) induction of antioxidant/electrophile responsive element dependent luciferase activity as a cell-based assay; (3) Tissue Metabolism Simulator skin sensitization model in silico prediction; and (4) calculated octanol-water partition coefficient. The results of the in vitro assays were scaled into five classes from 0 to 4 to give an in vitro score and compared to the local lymph node assay (LLNA) data, which were also scaled from 0 to 4 (nonsensitizer/weak/moderate/strong/extreme). Different ways of evaluating these data have been assessed to rate the hazard of chemicals (Cooper statistics) and to also scale their potency. With the optimized model an overall accuracy for predicting sensitizers of 87.9% was obtained. There is a linear correlation between the LLNA score and the in vitro score. However, the correlation needs further improvement as there is still a relatively high variation in the in vitro score between chemicals belonging to the same sensitization potency class.
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
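Algebraically, the exact condensation step described here amounts to a Schur complement that eliminates the unchanging degrees of freedom. A small numpy illustration on a random symmetric positive definite system (a stand-in for an assembled BEA system) shows that the reduced solve reproduces the retained unknowns exactly.

```python
# Exact condensation of interior unknowns via a Schur complement (toy example).
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((6, 6))
K = A @ A.T + 6 * np.eye(6)          # SPD stand-in for a system matrix
f = rng.random(6)

keep = [0, 1]                        # retained (design) DOFs
cond = [2, 3, 4, 5]                  # interior DOFs condensed out

Kbb, Kbi = K[np.ix_(keep, keep)], K[np.ix_(keep, cond)]
Kib, Kii = K[np.ix_(cond, keep)], K[np.ix_(cond, cond)]

K_red = Kbb - Kbi @ np.linalg.solve(Kii, Kib)          # Schur complement
f_red = f[np.array(keep)] - Kbi @ np.linalg.solve(Kii, f[np.array(cond)])

u_keep = np.linalg.solve(K_red, f_red)
u_full = np.linalg.solve(K, f)
print(np.allclose(u_keep, u_full[np.array(keep)]))     # True: condensation is exact
```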
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
Sharma, Nripen S.; Jindal, Rohit; Mitra, Bhaskar; Lee, Serom; Li, Lulu; Maguire, Tim J.; Schloss, Rene; Yarmush, Martin L.
2014-01-01
Skin sensitization remains a major environmental and occupational health hazard. Animal models have been used as the gold standard method of choice for estimating chemical sensitization potential. However, a growing international drive and consensus for minimizing animal usage have prompted the development of in vitro methods to assess chemical sensitivity. In this paper, we examine existing approaches including in silico models, cell and tissue based assays for distinguishing between sensitizers and irritants. The in silico approaches that have been discussed include Quantitative Structure Activity Relationships (QSAR) and QSAR based expert models that correlate chemical molecular structure with biological activity and mechanism based read-across models that incorporate compound electrophilicity. The cell and tissue based assays rely on an assortment of mono and co-culture cell systems in conjunction with 3D skin models. Given the complexity of allergen induced immune responses, and the limited ability of existing systems to capture the entire gamut of cellular and molecular events associated with these responses, we also introduce a microfabricated platform that can capture all the key steps involved in allergic contact sensitivity. Finally, we describe the development of an integrated testing strategy comprised of two or three tier systems for evaluating sensitization potential of chemicals. PMID:24741377
Dogan, Meeshanthini V; Grumbach, Isabella M; Michaelson, Jacob J; Philibert, Robert A
2018-01-01
An improved method for detecting coronary heart disease (CHD) could have substantial clinical impact. Building on the idea that systemic effects of CHD risk factors are a conglomeration of genetic and environmental factors, we use machine learning techniques and integrate genetic, epigenetic and phenotype data from the Framingham Heart Study to build and test a Random Forest classification model for symptomatic CHD. Our classifier was trained on n = 1,545 individuals and consisted of four DNA methylation sites, two SNPs, age and gender. The methylation sites and SNPs were selected during the training phase. The final trained model was then tested on n = 142 individuals. The test data comprised individuals held out of training because of relatedness to those in the training dataset. This integrated classifier was capable of classifying symptomatic CHD status of those in the test set with an accuracy, sensitivity and specificity of 78%, 0.75 and 0.80, respectively. In contrast, a model using only conventional CHD risk factors as predictors had an accuracy and sensitivity of only 65% and 0.42, respectively, but with a specificity of 0.89 in the test set. Regression analyses of the methylation signatures illustrate our ability to map these signatures to known risk factors in CHD pathogenesis. These results demonstrate the capability of an integrated approach to effectively model symptomatic CHD status. These results also suggest that future studies of biomaterial collected from longitudinally informative cohorts that are specifically characterized for cardiac disease at follow-up could lead to the introduction of sensitive, readily employable integrated genetic-epigenetic algorithms for predicting onset of future symptomatic CHD.
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
Evaluation of the GARD assay in a blind Cosmetics Europe study.
Johansson, Henrik; Gradin, Robin; Forreryd, Andy; Agemark, Maria; Zeller, Kathrin; Johansson, Angelica; Larne, Olivia; van Vliet, Erwin; Borrebaeck, Carl; Lindstedt, Malin
2017-01-01
Chemical hypersensitivity is an immunological response towards foreign substances, commonly referred to as sensitizers, which gives rise primarily to the clinical symptoms known as allergic contact dermatitis. For the purpose of mitigating risks associated with consumer products, chemicals are screened for sensitizing effects. Historically, such predictive screenings have been performed using animal models. However, due to industrial and regulatory demand, animal models for the purpose of sensitization assessment are being replaced by non-animal testing methods, a global trend that is spreading across industries and market segments. To meet this demand, the Genomic Allergen Rapid Detection (GARD) assay was developed. GARD is a novel, cell-based assay that utilizes the innate recognition of xenobiotic substances by dendritic cells, as measured by a multivariate readout of genomic biomarkers. Following cellular stimulation, chemicals are classified as sensitizers or non-sensitizers based on induced transcriptional profiles. Recently, a number of non-animal methods were comparatively evaluated by Cosmetics Europe, using a coherent and blinded test panel of reference chemicals with human and local lymph node assay data, comprising a wide range of sensitizers and non-sensitizers. The outcome of the GARD assay is presented in this paper. It was demonstrated that GARD is a highly functional assay with a predictive performance of 83% in this Cosmetics Europe dataset. The average accumulated predictive accuracy of GARD across independent datasets was 86% for skin sensitization hazard.
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
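A hedged sketch of that perturbation procedure: train a generic regressor on moving-window-averaged forcings, nudge one input column, and report the change in predicted response per unit perturbation. The synthetic data, window lengths, and network settings below are assumptions for illustration.

```python
# Perturbation-based forcing-response sensitivity of a trained ML model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 2000
rain = rng.gamma(2.0, 2.0, n)
pumping = rng.random(n) * 10

def mwa(x, w):
    """Moving window average feature (centered)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

X = np.column_stack([mwa(rain, 30), mwa(rain, 365), mwa(pumping, 90)])
level = 10 + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)  # synthetic lake level

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, level)

eps = 0.1
for j, name in enumerate(["rain_30d", "rain_365d", "pump_90d"]):
    Xp = X.copy()
    Xp[:, j] += eps
    sens = (ann.predict(Xp) - ann.predict(X)) / eps   # d(output)/d(input)
    print(f"{name}: mean sensitivity = {sens.mean():+.2f}")
```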
Farris, Samantha G; Uebelacker, Lisa A; Brown, Richard A; Price, Lawrence H; Desaulniers, Julie; Abrantes, Ana M
2017-12-01
Smoking increases risk of early morbidity and mortality, and risk is compounded by physical inactivity. Anxiety sensitivity (fear of anxiety-relevant somatic sensations) is a cognitive factor that may amplify the subjective experience of exertion (effort) during exercise, subsequently resulting in lower engagement in physical activity. We examined the effect of anxiety sensitivity on ratings of perceived exertion (RPE) and physiological arousal (heart rate) during a bout of exercise among low-active treatment-seeking smokers. Adult daily smokers (n = 157; M age = 44.9, SD = 11.13; 69.4% female) completed the Rockport 1.0 mile submaximal treadmill walk test. RPE and heart rate were assessed during the walk test. Multi-level modeling was used to examine the interactive effect of anxiety sensitivity × time on RPE and on heart rate at five time points during the walk test. There were significant linear and cubic time × anxiety sensitivity effects for RPE. High anxiety sensitivity was associated with greater initial increases in RPE during the walk test, with stabilized ratings towards the last 5 min, whereas low anxiety sensitivity was associated with lower initial increase in RPE which stabilized more quickly. The linear time × anxiety sensitivity effect for heart rate was not significant. Anxiety sensitivity is associated with increasing RPE during moderate-intensity exercise. Persistently rising RPE observed for smokers with high anxiety sensitivity may contribute to the negative experience of exercise, resulting in early termination of bouts of prolonged activity and/or decreased likelihood of future engagement in physical activity.
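A multilevel model of this shape can be sketched in statsmodels; the formula below interacts linear, quadratic, and cubic time terms with anxiety sensitivity, with random intercepts by subject. The simulated data and column names are assumptions, not the study's dataset.

```python
# Multilevel (mixed-effects) model of RPE over time, with time x AS interactions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
subjects, times = 157, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), times),
    "time": np.tile(np.arange(1, times + 1), subjects),
})
asi = rng.normal(0, 1, subjects)            # anxiety sensitivity, z-scored (assumed)
df["AS"] = asi[df["subject"]]
df["RPE"] = (9 + 1.2 * df["time"] + 0.4 * df["AS"] * df["time"]
             - 0.005 * df["AS"] * df["time"] ** 3
             + rng.normal(0, 1, len(df)))   # synthetic exertion ratings

model = smf.mixedlm("RPE ~ (time + I(time**2) + I(time**3)) * AS",
                    df, groups=df["subject"])
print(model.fit().summary())                # fixed effects incl. time x AS terms
```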
Ivey, Chris D.; Besser, John M.; Ingersoll, Christopher G.; Wang, Ning; Rogers, Christopher; Raimondo, Sandy; Bauer, Candice R.; Hammer, Edward J.
2017-01-01
Vernal pool fairy shrimp, Branchinecta lynchi, (Branchiopoda; Anostraca) and other fairy shrimp species have been listed as threatened or endangered under the US Endangered Species Act. Because few data exist about the sensitivity of Branchinecta spp. to toxic effects of contaminants, it is difficult to determine whether they are adequately protected by water quality criteria. A series of acute (24-h) lethality/immobilization tests was conducted with 3 species of fairy shrimp (B. lynchi, Branchinecta lindahli, and Thamnocephalus platyurus) and 10 chemicals with varying modes of toxic action: ammonia, potassium, chloride, sulfate, chromium(VI), copper, nickel, zinc, alachlor, and metolachlor. The same chemicals were tested in 48-h tests with other branchiopods (the cladocerans Daphnia magna and Ceriodaphnia dubia) and an amphipod (Hyalella azteca), and in 96-h tests with snails (Physa gyrina and Lymnaea stagnalis). Median effect concentrations (EC50s) for B. lynchi were strongly correlated (r2 = 0.975) with EC50s for the commercially available fairy shrimp species T. platyurus for most chemicals tested. Comparison of EC50s for fairy shrimp and EC50s for invertebrate taxa tested concurrently and with other published toxicity data indicated that fairy shrimp were relatively sensitive to potassium and several trace metals compared with other invertebrate taxa, although cladocerans, amphipods, and mussels had similar broad toxicant sensitivity. Interspecies correlation estimation models for predicting toxicity to fairy shrimp from surrogate species indicated that models with cladocerans and freshwater mussels as surrogates produced the best predictions of the sensitivity of fairy shrimp to contaminants. The results of these studies indicate that fairy shrimp are relatively sensitive to a range of toxicants, but Endangered Species Act-listed fairy shrimp of the genus Branchinecta were not consistently more sensitive than other fairy shrimp taxa. Environ Toxicol Chem 2017;36:797–806. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
Development of a superconducting position sensor for the Satellite Test of the Equivalence Principle
NASA Astrophysics Data System (ADS)
Clavier, Odile Helene
The Satellite Test of the Equivalence Principle (STEP) is a joint NASA/ESA mission that proposes to measure the differential acceleration of two cylindrical test masses orbiting the earth in a drag-free satellite to a precision of 10^-18 g. Such an experiment would conceptually reproduce Galileo's tower of Pisa experiment with a much longer time of fall and greatly reduced disturbances. The superconducting test masses are constrained in all degrees of freedom except their axial direction (the sensitive axis) using superconducting bearings. The STEP accelerometer measures the differential position of the masses in their sensitive direction using superconducting inductive pickup coils coupled to an extremely sensitive magnetometer called a DC-SQUID (Superconducting Quantum Interference Device). Position sensor development involves the design, manufacture and calibration of pickup coils that will meet the acceleration sensitivity requirement. Acceleration sensitivity depends on both the displacement sensitivity and stiffness of the position sensor. The stiffness must be kept small while maintaining stability of the accelerometer. Using a model for the inductance of the pickup coils versus displacement of the test masses, a computer simulation calculates the sensitivity and stiffness of the accelerometer in its axial direction. This simulation produced a design of pickup coils for the four STEP accelerometers. Manufacture of the pickup coils involves standard photolithography techniques modified for superconducting thin-films. A single-turn pickup coil was manufactured and produced a successful superconducting coil using thin-film Niobium. A low-temperature apparatus was developed with a precision position sensor to measure the displacement of a superconducting plate (acting as a mock test mass) facing the coil. The position sensor was designed to detect five degrees of freedom so that coupling could be taken into account when measuring the translation of the plate relative to the coil. The inductance was measured using a DC-SQUID coupled to the pickup coil. The experimental results agree with the model used in the simulation thereby validating the concept used for the design. The STEP program now has the confidence necessary to design and manufacture a position sensor for the flight accelerometer.
Prospects for testing Lorentz and CPT symmetry with antiprotons.
Vargas, Arnaldo J
2018-03-28
A brief overview of the prospects of testing Lorentz and CPT symmetry with antimatter experiments is presented. The models discussed are applicable to atomic spectroscopy experiments, Penning-trap experiments and gravitational tests. Comments about the sensitivity of the most recent antimatter experiments to the models reviewed here are included.This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'. © 2018 The Author(s).
Ochodo, Eleanor A; Gopalakrishna, Gowri; Spek, Bea; Reitsma, Johannes B; van Lieshout, Lisette; Polman, Katja; Lamberton, Poppy; Bossuyt, Patrick M M; Leeflang, Mariska M G
2015-03-11
Point-of-care (POC) tests for diagnosing schistosomiasis include tests based on circulating antigen detection and urine reagent strip tests. If they had sufficient diagnostic accuracy they could replace conventional microscopy as they provide a quicker answer and are easier to use. To summarise the diagnostic accuracy of: a) urine reagent strip tests in detecting active Schistosoma haematobium infection, with microscopy as the reference standard; and b) circulating antigen tests for detecting active Schistosoma infection in geographical regions endemic for Schistosoma mansoni or S. haematobium or both, with microscopy as the reference standard. We searched the electronic databases MEDLINE, EMBASE, BIOSIS, MEDION, and Health Technology Assessment (HTA) without language restriction up to 30 June 2014. We included studies that used microscopy as the reference standard: for S. haematobium, microscopy of urine prepared by filtration, centrifugation, or sedimentation methods; and for S. mansoni, microscopy of stool by Kato-Katz thick smear. We included studies on participants residing in endemic areas only. Two review authors independently extracted data, assessed quality of the data using QUADAS-2, and performed meta-analysis where appropriate. Owing to the variability of test thresholds, we used the hierarchical summary receiver operating characteristic (HSROC) model for all eligible tests (except the circulating cathodic antigen (CCA) POC for S. mansoni, where the bivariate random-effects model was more appropriate). We investigated heterogeneity, and carried out indirect comparisons where data were sufficient. Results for sensitivity and specificity are presented as percentages with 95% confidence intervals (CI). We included 90 studies; 88 from field settings in Africa. The median S. haematobium infection prevalence was 41% (range 1% to 89%) and 36% for S. mansoni (range 8% to 95%). Study design and conduct were poorly reported against current standards. Tests for S. haematobium. Urine reagent test strips versus microscopy: Compared to microscopy, the detection of microhaematuria on test strips had the highest sensitivity and specificity (sensitivity 75%, 95% CI 71% to 79%; specificity 87%, 95% CI 84% to 90%; 74 studies, 102,447 participants). For proteinuria, sensitivity was 61% and specificity was 82% (82,113 participants); and for leukocyturia, sensitivity was 58% and specificity 61% (1532 participants). However, overall test accuracy did not differ between the urine reagent strips for microhaematuria and proteinuria when we compared separate populations (P = 0.25), or when direct comparisons within the same individuals were performed (paired studies; P = 0.21). When tests were evaluated against the higher quality reference standard (when multiple samples were analysed), sensitivity was marginally lower for microhaematuria (71% vs 75%) and for proteinuria (49% vs 61%). The specificity of these tests was comparable. Antigen assay: Compared to microscopy, the CCA test showed considerable heterogeneity; the meta-analytic sensitivity estimate was 39%, 95% CI 6% to 73%; specificity 78%, 95% CI 55% to 100% (four studies, 901 participants). Tests for S. mansoni: Compared to microscopy, the CCA test meta-analytic estimates for detecting S.
mansoni at a single threshold of trace positive were: sensitivity 89% (95% CI 86% to 92%); and specificity 55% (95% CI 46% to 65%; 15 studies, 6091 participants). Against a higher quality reference standard, the sensitivity results were comparable (89% vs 88%) but specificity was higher (66% vs 55%). For the CAA test, sensitivity ranged from 47% to 94%, and specificity from 8% to 100% (4 studies, 1583 participants). Among the evaluated tests for S. haematobium infection, microhaematuria correctly detected the largest proportions of infections and non-infections identified by microscopy. The CCA POC test for S. mansoni detects a very large proportion of infections identified by microscopy, but it misclassifies a large proportion of microscopy negatives as positives in endemic areas with a moderate to high prevalence of infection, possibly because the test is potentially more sensitive than microscopy.
Wang, Le; Devore, Sasha; Delgutte, Bertrand
2013-01-01
Human listeners are sensitive to interaural time differences (ITDs) in the envelopes of sounds, which can serve as a cue for sound localization. Many high-frequency neurons in the mammalian inferior colliculus (IC) are sensitive to envelope-ITDs of sinusoidally amplitude-modulated (SAM) sounds. Typically, envelope-ITD-sensitive IC neurons exhibit either peak-type sensitivity, discharging maximally at the same delay across frequencies, or trough-type sensitivity, discharging minimally at the same delay across frequencies, consistent with responses observed at the primary site of binaural interaction in the medial and lateral superior olives (MSO and LSO), respectively. However, some high-frequency IC neurons exhibit dual types of envelope-ITD sensitivity in their responses to SAM tones, that is, they exhibit peak-type sensitivity at some modulation frequencies and trough-type sensitivity at other frequencies. Here we show that high-frequency IC neurons in the unanesthetized rabbit can also exhibit dual types of envelope-ITD sensitivity in their responses to SAM noise. Such complex responses to SAM stimuli could be achieved by convergent inputs from MSO and LSO onto single IC neurons. We test this hypothesis by implementing a physiologically explicit, computational model of the binaural pathway. Specifically, we examined envelope-ITD sensitivity of a simple model IC neuron that receives convergent inputs from MSO and LSO model neurons. We show that dual envelope-ITD sensitivity emerges in the IC when convergent MSO and LSO inputs are differentially tuned for modulation frequency. PMID:24155013
Experimental evaluation of a recursive model identification technique for type 1 diabetes.
Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E
2009-09-01
A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive, or batch methods, and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% lower than those of the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively.
In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
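Recursive ARX identification of the kind contrasted with batch fitting above is commonly implemented with recursive least squares. The sketch below identifies a second-order ARX model from simulated input-output data; the model orders, forgetting factor, and signals are assumptions, not the study's configuration.

```python
# Recursive least squares (RLS) identification of an ARX(2,1) model.
import numpy as np

rng = np.random.default_rng(11)
T = 500
u = rng.normal(0, 1, T)                       # exogenous input (insulin-like signal)
y = np.zeros(T)
for t in range(2, T):                         # true system to be identified
    y[t] = 1.2 * y[t-1] - 0.4 * y[t-2] + 0.5 * u[t-1] + rng.normal(0, 0.05)

na, nb = 2, 1                                 # y[t] ~ y[t-1], y[t-2], u[t-1]
theta = np.zeros(na + nb)                     # parameter estimates [a1, a2, b1]
P = 1e3 * np.eye(na + nb)                     # large P = uninformative prior
lam = 0.99                                    # forgetting factor enables adaptation

for t in range(2, T):
    phi = np.array([y[t-1], y[t-2], u[t-1]])  # regressor vector
    err = y[t] - phi @ theta                  # one-step-ahead prediction error
    K = P @ phi / (lam + phi @ P @ phi)       # RLS gain
    theta += K * err
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update

print("estimated [a1, a2, b1]:", np.round(theta, 3))   # ~ [1.2, -0.4, 0.5]
```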
Towards an Analytical Age-Dependent Model of Contrast Sensitivity Functions for an Ageing Society
Joulan, Karine; Brémond, Roland
2015-01-01
The Contrast Sensitivity Function (CSF) describes how the visibility of a grating depends on the stimulus spatial frequency. Many published CSF data have demonstrated that contrast sensitivity declines with age. However, an age-dependent analytical model of the CSF is not available to date. In this paper, we propose such an analytical CSF model based on visual mechanisms, taking into account the age factor. To this end, we have extended an existing model from Barten (1999), taking into account the dependencies of this model's optical and physiological parameters on age. Age-dependent models of the cones and ganglion cells densities, the optical and neural MTF, and optical and neural noise are proposed, based on published data. The proposed age-dependent CSF is finally tested against available experimental data, with fair results. Such an age-dependent model may be beneficial when designing real-time age-dependent image coding and display applications. PMID:26078994
Evaluation of a panel of 28 biomarkers for the non-invasive diagnosis of endometriosis.
Vodolazkaia, A; El-Aalamat, Y; Popovic, D; Mihalyi, A; Bossuyt, X; Kyama, C M; Fassbender, A; Bokor, A; Schols, D; Huskens, D; Meuleman, C; Peeraer, K; Tomassetti, C; Gevaert, O; Waelkens, E; Kasran, A; De Moor, B; D'Hooghe, T M
2012-09-01
At present, the only way to conclusively diagnose endometriosis is laparoscopic inspection, preferably with histological confirmation. This contributes to the delay in the diagnosis of endometriosis, which is 6-11 years. So far, non-invasive diagnostic approaches such as ultrasound (US), MRI or blood tests do not have sufficient diagnostic power. Our aim was to develop and validate a non-invasive diagnostic test with a high sensitivity (80% or more) for symptomatic endometriosis patients without US evidence of endometriosis, since this is the group most in need of a non-invasive test. A total of 28 inflammatory and non-inflammatory plasma biomarkers were measured in 353 EDTA plasma samples collected at surgery from 121 controls without endometriosis at laparoscopy and from 232 women with endometriosis (minimal-mild n = 148; moderate-severe n = 84), including 175 women without preoperative US evidence of endometriosis. Surgery was done during menstrual (n = 83), follicular (n = 135) and luteal (n = 135) phases of the menstrual cycle. For analysis, the data were randomly divided into an independent training (n = 235) and a test (n = 118) data set. Statistical analysis was done using univariate and multivariate (logistic regression and least-squares support vector machine (LS-SVM)) approaches in the training and test data sets separately to validate our findings. In the training set, two models of four biomarkers (Model 1: annexin V, VEGF, CA-125 and glycodelin; Model 2: annexin V, VEGF, CA-125 and sICAM-1) analysed in plasma obtained during the menstrual phase could predict US-negative endometriosis with a high sensitivity (81-90%) and an acceptable specificity (68-81%). The same two models predicted US-negative endometriosis in the independent validation test set with a high sensitivity (82%) and an acceptable specificity (63-75%). In plasma samples obtained during menstruation, multivariate analysis of four biomarkers (annexin V, VEGF, CA-125 and sICAM-1/or glycodelin) enabled the diagnosis of endometriosis undetectable by US with a sensitivity of 81-90% and a specificity of 63-81% in the independent training and test data sets. The next step is to apply these models for preoperative prediction of endometriosis in an independent set of patients with infertility and/or pain without US evidence of endometriosis, scheduled for laparoscopy.
Validation of a school-based amblyopia screening protocol in a kindergarten population.
Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L
2016-08-04
To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, ocular alignment test, ocular motility assessment, and stereoacuity with TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year. A population of 100 children was selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 were obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
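All six reported screening metrics derive from a single 2×2 validation table. A minimal sketch of those derivations follows; the cell counts are not given in the abstract and are inferred here as one table consistent with every reported rate.

```python
# Diagnostic accuracy measures from a 2x2 validation table.
def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity: P(screen+ | amblyopia)
    spec = tn / (tn + fp)          # specificity: P(screen- | no amblyopia)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),     # positive predictive value
        "NPV": tn / (tn + fn),     # negative predictive value
        "LR+": sens / (1 - spec),  # positive likelihood ratio
        "LR-": (1 - sens) / spec,  # negative likelihood ratio
    }

# This hypothetical 100-child table reproduces the published values
# (89.3%, 93.1%, 83.3%, 95.7%, 12.86, 0.12) up to rounding.
print(screening_metrics(tp=25, fp=5, fn=3, tn=67))
```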
Development of a noise annoyance sensitivity scale
NASA Technical Reports Server (NTRS)
Bregman, H. L.; Pearson, R. G.
1972-01-01
Examining the problem of noise pollution from the psychological rather than the engineering view, a test of human sensitivity to noise was developed against the criterion of noise annoyance. Test development evolved from a previous study in which biographical, attitudinal, and personality data were collected on a sample of 166 subjects drawn from the adult community of Raleigh. Analysis revealed that only a small subset of the data collected was predictive of noise annoyance. Item analysis yielded 74 predictive items that composed the preliminary noise sensitivity test. This was administered to a sample of 80 adults who later rated the annoyance value of six sounds (equated in terms of peak sound pressure level) presented in a simulated home, living-room environment. A predictive model involving 20 test items was developed using multiple regression techniques, and an item weighting scheme was evaluated.
Predicting coronary artery disease using different artificial neural network models.
Colak, M Cengiz; Colak, Cemil; Kocatürk, Hasan; Sağiroğlu, Seref; Barutçu, Irfan
2008-08-01
Eight different learning algorithms used for creating artificial neural network (ANN) models and the resulting ANN models for the prediction of coronary artery disease (CAD) are introduced. This work was carried out as a retrospective case-control study. Overall, 124 consecutive patients who had been diagnosed with CAD by coronary angiography (at least 1 coronary stenosis > 50% in major epicardial arteries) were enrolled in the work. The 113 people (group 2) with angiographically normal coronary arteries were taken as control subjects. A multi-layered perceptron ANN architecture was applied. The ANN models trained with the different learning algorithms were evaluated on 237 records, divided into training (n=171) and testing (n=66) data sets. The performance of prediction was evaluated by sensitivity, specificity and accuracy values based on standard definitions. The results demonstrate that ANN models trained with eight different learning algorithms are promising because of high (greater than 71%) sensitivity, specificity and accuracy values in the prediction of CAD. Accuracy, sensitivity and specificity values ranged from 83.63% to 100%, 86.46% to 100% and 74.67% to 100% for training, respectively. For testing, the values were more than 71% for sensitivity, 76% for specificity and 81% for accuracy. It may be proposed that the use of learning algorithms other than backpropagation and larger sample sizes can improve the performance of prediction. The proposed ANN models trained with these learning algorithms could be used as a promising approach for predicting CAD without the need for invasive diagnostic methods and could help in prognostic clinical decision-making.
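For illustration, the study's setup can be sketched with a single multi-layer perceptron standing in for the eight training algorithms compared in the paper; the data here are random placeholders and scikit-learn is an assumption, not the authors' tooling.

```python
# Illustrative sketch only; X and y are placeholders for the clinical
# features and CAD labels of the 237 patient records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(237, 10))          # placeholder feature matrix
y = rng.integers(0, 2, size=237)        # placeholder CAD / control labels

# 171 training / 66 testing records, mirroring the split in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=171, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```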
Fingerhuth, Stephanie M; Low, Nicola; Bonhoeffer, Sebastian; Althaus, Christian L
2017-07-26
Antibiotic resistance is threatening to make gonorrhoea untreatable. Point-of-care (POC) tests that detect resistance promise individually tailored treatment, but might lead to more treatment and higher levels of resistance. We investigate the impact of POC tests on antibiotic-resistant gonorrhoea. We used data about the prevalence and incidence of gonorrhoea in men who have sex with men (MSM) and heterosexual men and women (HMW) to calibrate a mathematical gonorrhoea transmission model. With this model, we simulated four clinical pathways for the diagnosis and treatment of gonorrhoea: POC test with (POC+R) and without (POC-R) resistance detection, culture and nucleic acid amplification tests (NAATs). We calculated the proportion of resistant infections and cases averted after 5 years, and compared how fast resistant infections spread in the populations. The proportion of resistant infections after 30 years is lowest for POC+R (median MSM: 0.18%, HMW: 0.12%), and increases for culture (MSM: 1.19%, HMW: 0.13%), NAAT (MSM: 100%, HMW: 99.27%), and POC-R (MSM: 100%, HMW: 99.73%). Per 100 000 persons, NAAT leads to 36 366 (median MSM) and 1228 (median HMW) observed cases after 5 years. Compared with NAAT, POC+R averts more cases after 5 years (median MSM: 3353, HMW: 118). POC tests that detect resistance with intermediate sensitivity slow down resistance spread more than NAAT. POC tests with very high sensitivity for the detection of resistance are needed to slow down resistance spread more than by using culture. POC with high sensitivity to detect antibiotic resistance can keep gonorrhoea treatable longer than culture or NAAT. POC tests without reliable resistance detection should not be introduced because they can accelerate the spread of antibiotic-resistant gonorrhoea.
Philippe, Charlotte; Grégoir, Arnout F; Janssens, Lizanne; Pinceel, Tom; De Boeck, Gudrun; Brendonck, Luc
2017-10-01
Nothobranchius furzeri is a promising model for ecotoxicological research due to the species' short life cycle and the production of drought-resistant eggs. Although the species is an emerging vertebrate fish model for several fundamental as well as applied research domains, its potential for ecotoxicological research has not yet been tested. The aim of this study was to characterise the acute and chronic sensitivity of this species to copper as compared to other model organisms. Effects of both acute and chronic copper exposure were tested on survival, life history and physiological traits. We report a 24 h LC50 of 53.93 µg Cu/L, which is situated within the sensitivity range of other model species such as Brook Trout, Fathead Minnow and Rainbow Trout. Moreover, in the full life cycle exposure, we show that an exposure concentration of 10.27 µg/L did not cause acute adverse effects (96 h), but did cause mortality after prolonged exposure (3-4 weeks). Chronic, sublethal effects were also observed, such as a reduction in growth rate, delayed maturation and postponed reproduction. Based on our results, we define the NOEC at 6.68 µg Cu/L, making N. furzeri more sensitive to copper than Brook Trout and Fathead Minnow. We found stimulatory effects on peak fecundity at subinhibitory copper concentrations (hormesis). Finally, we found indications of detoxifying and copper-excreting mechanisms, demonstrating the ability of the fish to cope with this essential metal even when exposed to stressful amounts. The successful application of current ecotoxicological protocols to N. furzeri and its sensitivity range, comparable to that of other model organisms, form the basis for exploiting this species in further ecotoxicological practice. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
Numerical considerations in the development and implementation of constitutive models
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Imbrie, P. K.
1985-01-01
Several unified constitutive models were tested in uniaxial form by specifying input strain histories and comparing output stress histories. The purpose of the tests was to evaluate several time integration methods with regard to accuracy, stability, and computational economy. The sensitivity of the models to slight changes in input constants was also investigated. Results are presented for IN100 at 1350 °F and Hastelloy-X at 1800 °F.
Hoyer, A; Kuss, O
2015-05-20
In real life and somewhat contrary to biostatistical textbook knowledge, sensitivity and specificity (and not only predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used, the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study have shown that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples. Copyright © 2015 John Wiley & Sons, Ltd.
Giddings, Jeffrey M; Barber, Ian; Warren-Hicks, William
2009-02-01
In this review we compare the sensitivity of a range of aquatic invertebrate and fish species to gamma-cyhalothrin (GCH), the insecticidally active enantiomer of the synthetic pyrethroid lambda-cyhalothrin (LCH), in single-species laboratory tests and outdoor multi-species ecosystem tests. Species sensitivity distribution curves for GCH gave median HC(5) values of 0.47 ng/L for invertebrates, and 23.7 ng/L for fish, while curves for LCH gave median HC(5) values of 1.05 ng/L and 40.9 ng/L for invertebrates and fish, respectively. A model ecosystem test with GCH gave a community-level no observed effect concentration (NOEC(community)) of 5 ng/L, while model ecosystem tests with LCH gave a NOEC(community) of 10 ng/L. These comparisons between GCH and LCH indicate that the single active enantiomer causes effects at approximately one-half the concentration at which the racemate causes similar effects.
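The HC(5) values above are read from species sensitivity distribution (SSD) curves. A minimal sketch of that computation, assuming a log-normal SSD (a common choice, though the review does not state the fitted form) and hypothetical endpoint values:

```python
# Sketch: fit a log-normal SSD to species-level toxicity endpoints and
# read off HC5 as the 5th percentile. Endpoint values are hypothetical.
import numpy as np
from scipy import stats

endpoints_ng_per_L = np.array([0.8, 1.3, 2.4, 4.0, 6.5, 12.0, 20.0])  # e.g. LC50s

log_vals = np.log10(endpoints_ng_per_L)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

# HC5: concentration hazardous to 5% of species under the fitted distribution.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} ng/L")
```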
Regier, Dean A; Friedman, Jan M; Marra, Carlo A
2010-05-14
Array genomic hybridization (AGH) provides a higher detection rate than does conventional cytogenetic testing when searching for chromosomal imbalance causing intellectual disability (ID). AGH is more costly than conventional cytogenetic testing, and it remains unclear whether AGH provides good value for money. Decision analytic modeling was used to evaluate the trade-off between costs, clinical effectiveness, and benefit of an AGH testing strategy compared to a conventional testing strategy. The trade-off between cost and effectiveness was expressed via the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was performed via Monte Carlo simulation. The baseline AGH testing strategy led to an average cost increase of $217 (95% CI $172-$261) per patient and an additional 8.2 diagnoses in every 100 tested (0.082; 95% CI 0.044-0.119). The mean incremental cost per additional diagnosis was $2646 (95% CI $1619-$5296). Probabilistic sensitivity analysis demonstrated that there was a 95% probability that AGH would be cost effective if decision makers were willing to pay $4550 for an additional diagnosis. Our model suggests that using AGH instead of conventional karyotyping for most ID patients provides good value for money. Deterministic sensitivity analysis found that employing AGH after first-line cytogenetic testing had proven uninformative did not provide good value for money when compared to using AGH as first-line testing. Copyright (c) 2010 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
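The reported ratio can be reproduced directly, and the probabilistic sensitivity analysis sketched with Monte Carlo draws; the normal distributions below are back-calculated from the reported 95% CIs and are an assumption, not the authors' model.

```python
# Minimal sketch of the cost-effectiveness arithmetic and a Monte Carlo
# probabilistic sensitivity analysis (hypothetical parameter distributions).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
delta_cost = rng.normal(217, 23, n)       # incremental cost per patient ($)
delta_diag = rng.normal(0.082, 0.019, n)  # additional diagnoses per patient

icer = delta_cost.mean() / delta_diag.mean()   # ~ $2646 per extra diagnosis
print(f"ICER ~ ${icer:.0f} per additional diagnosis")

# Probability AGH is cost-effective at a willingness to pay of $4550:
wtp = 4550
prob_ce = np.mean(delta_cost < wtp * delta_diag)
print(f"P(cost-effective at ${wtp}) = {prob_ce:.2f}")
```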
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
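The paper's second approach, the normalized sensitivity function, can be sketched on a deliberately simplified one-compartment model (the actual model covers 14 tissues and many more parameters); names and values here are illustrative only.

```python
# Normalized sensitivity S(t) = (dC/dp) * (p / C), estimated by central
# finite differences on a toy one-compartment i.v. bolus model.
import numpy as np

def conc(t, CL, V=1.0, dose=1.0):
    """Concentration after an i.v. bolus in a one-compartment model."""
    return (dose / V) * np.exp(-(CL / V) * t)

def normalized_sensitivity(t, p, f, rel_step=1e-4):
    """Relative change in f per relative change in parameter p."""
    h = p * rel_step
    dfdp = (f(t, p + h) - f(t, p - h)) / (2 * h)
    return dfdp * p / f(t, p)

t = np.linspace(0.1, 10, 50)
# Sensitivity of the concentration-time course to clearance CL = 0.5:
print(normalized_sensitivity(t, 0.5, conc)[:3])   # negative, grows with time
```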
Hoffmann, Sebastian
2015-01-01
The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation - to account for the variability of the reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and provide estimates thereof.
An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...
DIETARY VITAMIN A ENHANCES SENSITIVITY OF THE LOCAL LYMPH NODE ASSAY
Murine assays such as the mouse ear swelling test (MEST) and the local lymph node assay (LLNA) are popular alternatives to guinea pig models for the identification of contact sensitizers, yet there has been concern over the effectiveness of these assays to detect weak and moderat...
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices of small value. This is crucial, since even small indices may need to be estimated in order to achieve a more accurate distribution of input influence and a more reliable interpretation of the mathematical model results.
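For illustration, first-order Sobol indices for a toy three-parameter model can be computed as below; the paper implements its own Sobol-sequence Monte Carlo algorithms, whereas this sketch leans on the SALib package, which is purely an assumption for brevity.

```python
# Illustrative first-order Sobol indices via SALib on a toy model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["k1", "k2", "k3"],          # e.g. chemical reaction rates
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

X = saltelli.sample(problem, 1024)
Y = X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * X[:, 2]   # stand-in model output
Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices; small values are the hard cases
```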
NASA Astrophysics Data System (ADS)
Park, Yoon-Hee; Jeong, Sang Hoon; Yi, Sang Min; Hyeok Choi, Byeong; Kim, Yu-Ri; Kim, In-Kyoung; Kim, Meyoung-Kon; Son, Sang Wook
2011-07-01
The human skin equivalent model (HSEM) is well known as an attractive alternative model for evaluation of dermal toxicity. However, only limited data are available on the usefulness of an HSEM for nanotoxicity testing. This study was designed to investigate cutaneous toxicity of polystyrene and TiO2 nanoparticles using cultured keratinocytes, an HSEM, and an animal model. In addition, we also evaluated the skin sensitization potential of nanoparticles using a local lymph node assay with incorporation of BrdU. Findings from the present study indicate that polystyrene and TiO2 nanoparticles do not induce phototoxicity, acute cutaneous irritation, or skin sensitization. Results from evaluation of the HSEMs correspond well with those from animal models. Our findings suggest that the HSEM might be a useful alternative model for evaluation of dermal nanotoxicity.
NASA Technical Reports Server (NTRS)
Toon, O. B.; Turco, R. P.; Hamill, P.; Kiang, C. S.; Whitten, R. C.
1979-01-01
Sensitivity tests were performed on a one-dimensional, physical-chemical model of the unperturbed stratospheric aerosols, and model calculations were compared with observations. The tests and comparisons suggest that coagulation controls the particle number mixing ratio, although the number of condensation nuclei at the tropopause and the diffusion coefficient at high altitudes are also important. The sulfur gas source strength and the aerosol residence time are much more important than the supply of condensation nuclei in establishing mass and large particle concentrations. The particle size is also controlled mainly by gas supply and residence time. In situ aerosol observations and laboratory measurements of aerosol parameters that can provide further information about the physics and chemistry of the stratosphere and the aerosols found there are identified.
Appleton, D J; Rand, J S; Sunvold, G D
2005-06-01
The objective of this study was to compare simpler indices of insulin sensitivity with the minimal model-derived insulin sensitivity index to identify a simple and reliable alternative method for assessing insulin sensitivity in cats. In addition, we aimed to determine whether this simpler measure or measures showed consistency of association across differing body weights and glucose tolerance levels. Data from glucose tolerance and insulin sensitivity tests performed in 32 cats with varying body weights (underweight to obese), including seven cats with impaired glucose tolerance, were used to assess the relationship between Bergman's minimal model-derived insulin sensitivity index (S(I)), and various simpler measures of insulin sensitivity. The most useful overall predictors of insulin sensitivity were basal plasma insulin concentrations and the homeostasis model assessment (HOMA), which is the product of basal glucose and insulin concentrations divided by 22.5. It is concluded that measurement of plasma insulin concentrations in cats with food withheld for 24 h, in conjunction with HOMA, could be used in clinical research projects and by practicing veterinarians to screen for reduced insulin sensitivity in cats. Such cats may be at increased risk of developing impaired glucose tolerance and type 2 diabetes mellitus. Early detection of these cats would enable preventative intervention programs such as weight reduction, increased physical activity and dietary modifications to be instigated.
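A worked version of the HOMA definition quoted above; the conventional units (glucose in mmol/L, insulin in µU/mL) are an assumption, as the abstract does not state them.

```python
# HOMA = (basal glucose x basal insulin) / 22.5, per the definition above.
def homa(basal_glucose_mmol_L, basal_insulin_uU_mL):
    return basal_glucose_mmol_L * basal_insulin_uU_mL / 22.5

# e.g. glucose 5.0 mmol/L and insulin 10.0 microU/mL:
print(homa(5.0, 10.0))  # ~2.2; higher values indicate lower insulin sensitivity
```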
NASA Astrophysics Data System (ADS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-09-01
A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel, where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real-world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.
van't Hoog, Anna H; Cobelens, Frank; Vassall, Anna; van Kampen, Sanne; Dorman, Susan E; Alland, David; Ellner, Jerrold
2013-01-01
High costs are a limitation to scaling up the Xpert MTB/RIF assay (Xpert) for the diagnosis of tuberculosis in resource-constrained settings. A triaging strategy in which a sensitive but not necessarily highly specific rapid test is used to select patients for Xpert may result in a more affordable diagnostic algorithm. To inform the selection and development of particular diagnostics as a triage test we explored combinations of sensitivity, specificity and cost at which a hypothetical triage test will improve affordability of the Xpert assay. In a decision analytical model parameterized for Uganda, India and South Africa, we compared a diagnostic algorithm in which a cohort of patients with presumptive TB received Xpert to a triage algorithm whereby only those with a positive triage test were tested by Xpert. A triage test with sensitivity equal to Xpert, 75% specificity, and costs of US$5 per patient tested reduced total diagnostic costs by 42% in the Uganda setting, and by 34% and 39% respectively in the India and South Africa settings. When exploring triage algorithms with lower sensitivity, the use of an example triage test with 95% sensitivity relative to Xpert, 75% specificity and test costs $5 resulted in similar cost reduction, and was cost-effective by the WHO willingness-to-pay threshold compared to Xpert for all in Uganda, but not in India and South Africa. The gain in affordability of the examined triage algorithms increased with decreasing prevalence of tuberculosis among the cohort. A triage test strategy could potentially improve the affordability of Xpert for TB diagnosis, particularly in low-income countries and with enhanced case-finding. Tests and markers with lower accuracy than desired of a diagnostic test may fall within the ranges of sensitivity, specificity and cost required for triage tests and be developed as such.
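A back-of-envelope sketch of the triage arithmetic (not the authors' decision-analytic model): every presumptive TB patient receives the triage test, and only triage-positives receive Xpert. The triage parameters follow the example in the abstract; the Xpert unit cost and prevalence are hypothetical placeholders.

```python
# Cost of a triage-then-Xpert algorithm per cohort of presumptive TB patients.
def cost_per_presumptive(n, prevalence, triage_sens, triage_spec,
                         c_triage, c_xpert):
    # Triage-positives = true positives + false positives.
    pos = n * (prevalence * triage_sens + (1 - prevalence) * (1 - triage_spec))
    return n * c_triage + pos * c_xpert   # triage for all, Xpert for positives

n, prev = 1000, 0.15                                   # hypothetical cohort
full_xpert = n * 20.0                                  # assumed $20 per Xpert
triaged = cost_per_presumptive(n, prev, 1.00, 0.75, 5.0, 20.0)
print(f"saving: {1 - triaged / full_xpert:.0%}")       # roughly 40% here
```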
Al-Saleh, Ayman; Alazzoni, Ashraf; Al Shalash, Saleh; Ye, Chenglin; Mbuagbaw, Lawrence; Thabane, Lehana; Jolly, Sanjit S.
2014-01-01
Background High-sensitivity cardiac troponin assays have been adopted by many clinical centres worldwide; however, clinicians are uncertain how to interpret the results. We sought to assess the utility of these assays in diagnosing acute myocardial infarction (MI). Methods We carried out a systematic review and meta-analysis of studies comparing high-sensitivity with conventional assays of cardiac troponin levels among adults with suspected acute MI in the emergency department. We searched MEDLINE, EMBASE and Cochrane databases up to April 2013 and used bivariable random-effects modelling to obtain summary parameters for diagnostic accuracy. Results We identified 9 studies that assessed the use of high-sensitivity troponin T assays (n = 9186 patients). The summary sensitivity of these tests in diagnosing acute MI at presentation to the emergency department was estimated to be 0.94 (95% confidence interval [CI] 0.89–0.97); for conventional tests, it was 0.72 (95% CI 0.63–0.79). The summary specificity was 0.73 (95% CI 0.64–0.81) for the high-sensitivity assay compared with 0.95 (95% CI 0.93–0.97) for the conventional assay. The differences in estimates of the summary sensitivity and specificity between the high-sensitivity and conventional assays were statistically significant (p < 0.01). The area under the curve was similar for both tests carried out 3–6 hours after presentation. Three studies assessed the use of high-sensitivity troponin I assays and showed similar results. Interpretation Used at presentation to the emergency department, the high-sensitivity cardiac troponin assay has improved sensitivity, but reduced specificity, compared with the conventional troponin assay. With repeated measurements over 6 hours, the area under the curve is similar for both tests, indicating that the major advantage of the high-sensitivity test is early diagnosis. PMID:25295240
Modeling feeding behavior of swine to detect illness
USDA-ARS?s Scientific Manuscript database
Animal well-being may be improved by detecting disruptions in feeding behavior indicative of challenged animals. The objectives of this study were to 1) develop and optimize an autoregressive model by adjusting sensitivity of the model to detect disruptions in feeding time; 2) test the model on dail...
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., Nash model based instantaneous unit sediment graph (IUSG), soil conservation service curve number (SCS-CN) method, and Power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
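The SCS-CN component of these models computes direct runoff from rainfall in its standard form, sketched below; the rainfall depth and curve number are hypothetical.

```python
# Standard SCS-CN rainfall-runoff relation: Q = (P - Ia)^2 / (P - Ia + S),
# with initial abstraction Ia = 0.2 S and S = 25400/CN - 254 (depths in mm).
def scs_cn_runoff(P_mm, CN, lam=0.2):
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = lam * S                      # initial abstraction
    return 0.0 if P_mm <= Ia else (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(P_mm=50.0, CN=75))  # ~9 mm of direct runoff
```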
Experimental investigation of an ejector-powered free-jet facility
NASA Technical Reports Server (NTRS)
Long, Mary JO
1992-01-01
NASA Lewis Research Center's (LeRC) newly developed Nozzle Acoustic Test Rig (NATR) is a large free-jet test facility powered by an ejector system. In order to assess the pumping performance of this ejector concept and determine its sensitivity to various design parameters, a 1/5-scale model of the NATR was built and tested prior to the operation of the actual facility. This paper discusses the results of the 1/5-scale model tests and compares them with the findings from the full-scale tests.
Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.
2010-12-01
There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.
NASA Astrophysics Data System (ADS)
Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab
2017-10-01
The strain rate sensitivity index, m-value, is commonly applied to evaluate the impact of the strain rate on the viscoplastic behaviour of materials. The m-value, as a constant number, has frequently been used for modeling material behaviour in the numerical simulation of superplastic forming processes. However, the impact of the testing variables on the measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of an aluminum alloy (AA5083) has been investigated. The conditions and parameters that influence the strain rate sensitivity of the material are compared across three different testing methods: the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470 °C) and at strain rates up to 0.1 s⁻¹. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
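For reference, in a strain rate jump test the m-value is typically taken as the two-point slope m = Δln σ / Δln ε̇; a worked sketch with hypothetical stress and rate values:

```python
# Two-point strain rate sensitivity from flow stresses at two strain rates.
import math

def m_value(stress_1, rate_1, stress_2, rate_2):
    """m = d(ln sigma) / d(ln strain_rate), two-point estimate."""
    return math.log(stress_2 / stress_1) / math.log(rate_2 / rate_1)

# e.g. flow stress rising from 10 to 17 MPa when the rate jumps 0.01 -> 0.1 /s
print(m_value(10.0, 0.01, 17.0, 0.1))  # ~0.23
```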
Validation of mesoscale models
NASA Technical Reports Server (NTRS)
Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew
1993-01-01
The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.
Numerical analysis of hypersonic turbulent film cooling flows
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Chen, C. P.; Wei, H.
1992-01-01
As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validation. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions, using the database of Holden et al. (1990), are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections, were used. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of hypersonic film cooling problems.
NASA Astrophysics Data System (ADS)
Riley, W. J.; Tang, J.
2014-12-01
We hypothesize that the large observed variability in decomposition temperature sensitivity and carbon use efficiency arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. To test this hypothesis, we developed a numerical model that integrates the Dynamic Energy Budget concept for microbial physiology, microbial trait-based community structure and competition, process-specific thermodynamically based temperature sensitivity, a non-linear mineral sorption isotherm, and enzyme dynamics. We show that, because mineral surfaces interact with substrates, enzymes, and microbes, both temperature sensitivity and microbial carbon use efficiency are hysteretic and highly variable. Further, by mimicking the traditional approach to interpreting soil incubation observations, we demonstrate that the conventional labile and recalcitrant substrate characterization for temperature sensitivity is flawed. In a 4 K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker carbon-climate feedbacks than did the static temperature sensitivity and carbon use efficiency model when forced with yearly, daily, and hourly variable temperatures. These results imply that current earth system models likely over-estimate the response of soil carbon stocks to global warming.
NASA Astrophysics Data System (ADS)
Zinszner, Jean-Luc; Erzar, Benjamin; Forquin, Pascal
2017-01-01
Ceramic materials are commonly used to design multi-layer armour systems thanks to their favourable physical and mechanical properties. However, during an impact event, fragmentation of the ceramic plate inevitably occurs due to its inherent brittleness under tensile loading. Consequently, an accurate model of the fragmentation process is necessary in order to achieve an optimum design for a desired armour configuration. In this work, shockless spalling tests have been performed on two silicon carbide grades at strain rates ranging from 10³ to 10⁴ s⁻¹ using a high-pulsed power generator. These spalling tests characterize the tensile strength strain rate sensitivity of each ceramic grade. The microstructural properties of the ceramics appear to play an important role on the strain rate sensitivity and on the dynamic tensile strength. Moreover, this experimental configuration allows for recovering damaged, but unbroken specimens, giving unique insight on the fragmentation process initiated in the ceramics. All the collected data have been compared with corresponding results of numerical simulations performed using the Denoual-Forquin-Hild anisotropic damage model. Good agreement is observed between numerical simulations and experimental data in terms of free surface velocity, size and location of the damaged zones along with crack density in these damaged zones. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.
Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.
Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk
2018-05-01
Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.
Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu
2014-01-01
Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performances of these fuel models have not been tested for historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (Surface Area-to-Volume), live shrub SAV, and fuel bed depth have high sensitivity. Two main sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were determined as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined fuel models (uncalibrated fuel models). FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capabilities of the fuel models to predict the actual fire with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% by using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164
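A sketch of the Morris screening step; the paper does not name its software, so the use of SALib here is an assumption, and the parameter names and bounds below are hypothetical stand-ins for the fuel-model inputs.

```python
# Morris elementary-effects screening on a toy fire-behaviour response.
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["one_hr_fuel_load", "fuel_bed_depth", "dead_heat_content"],
    "bounds": [[0.1, 2.0], [0.1, 1.0], [15000, 22000]],
}

X = sample(problem, N=100)
# Stand-in for a fire-behaviour model output (e.g. rate of spread):
Y = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 1e-5 * X[:, 2]
res = morris.analyze(problem, X, Y)
print(dict(zip(problem["names"], res["mu_star"])))  # mean |elementary effect|
```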
NASA Astrophysics Data System (ADS)
de Lima Neves Seefelder, Carolina; Mergili, Martin
2016-04-01
We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work is to explore the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterization in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas in terms of the area under the ROC curve (AUCROC) and in terms of the fraction of the area affected by the release of landslides. We thereby test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (the fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to variation of the geotechnical parameterization within much of the tested ranges. Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more conservative than those yielded with the infinite slope stability model. The sensitivity of AUCROC to variations in the geohydraulic parameters remains small as long as the calculated degree of saturation of the soils is sufficient to result in the prediction of a significant number of landslide release pixels. Due to the poor sensitivity of AUCROC to variations of the geotechnical and geohydraulic parameters, it is hard to optimize the parameters by means of statistics. Instead, the results produced with many different combinations of parameters correspond reasonably well with the distribution of the observed landslide release areas, even though they vary considerably in terms of their conservativeness. Considering the uncertainty inherent in all geotechnical and geohydraulic data, and the impossibility of capturing the spatial distribution of the parameters by means of laboratory tests in sufficient detail, we conclude that landslide susceptibility maps yielded by catchment-scale physically-based models should not be interpreted in absolute terms. Building on the assumption that our findings are generally valid, we suggest that efforts to develop better strategies for dealing with the uncertainties in the spatial variation of the key parameters should be given priority in future slope stability modelling efforts.
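The evaluation metric used throughout is the AUCROC of a susceptibility map scored against the landslide inventory; a minimal sketch with synthetic pixel vectors (scikit-learn, the names, and all values are assumptions):

```python
# AUCROC of a susceptibility raster against a binary landslide inventory,
# with both rasters flattened to per-pixel vectors.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
susceptibility = rng.random(10_000)          # fraction of combos with FoS < 1
# Synthetic inventory, weakly coupled to susceptibility for illustration:
released = (rng.random(10_000) < 0.05 * (1 + susceptibility)).astype(int)

auc = roc_auc_score(released, susceptibility)  # 0.5 = no skill, 1.0 = perfect
print(f"AUCROC = {auc:.2f}")
```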
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis research is insufficiently accurate and of limited reference value, because the mathematical models used are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper that includes the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. A transfer function block diagram and the state equations are built for closed-loop position control of the hydraulic drive unit. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, expressions for the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics, and measured friction-velocity curves, simulation of the hydraulic drive unit is carried out on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm, and 10 mm. Comparison of the experimental and simulated step response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. Sensitivity function time-history curves for seventeen parameters are then obtained from the state vector time histories of the step response. The maximum percentage displacement variation and the sum of absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and displayed as histograms for different working conditions, and their trends are analyzed. The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on a hydraulic drive unit test platform, and the experiments show that the sensitivity results obtained by simulation approximate the test results. This research characterizes the sensitivity of each parameter of the hydraulic drive unit and identifies the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
Embedded measures of performance validity using verbal fluency tests in a clinical sample.
Sugarman, Michael A; Axelrod, Bradley N
2015-01-01
The objective of this study was to determine to what extent verbal fluency measures can be used as performance validity indicators during neuropsychological evaluation. Participants were clinically referred for neuropsychological evaluation in an urban Veterans Affairs hospital. Participants were placed into 2 groups based on their objectively evaluated effort on performance validity tests (PVTs). Individuals who exhibited credible performance (n = 431) failed 0 PVTs, and those with poor effort (n = 192) failed 2 or more PVTs. All participants completed the Controlled Oral Word Association Test (COWAT) and Animals verbal fluency measures. We evaluated how well verbal fluency scores could discriminate between the 2 groups. Raw scores and T scores for Animals discriminated between the credible-performance and poor-effort groups with 90% specificity and greater than 40% sensitivity. COWAT scores had lower sensitivity for detecting poor effort. Combining FAS and Animals scores in logistic regression models yielded acceptable group classification, with 90% specificity and greater than 44% sensitivity. Verbal fluency measures can yield adequate detection of poor effort during neuropsychological evaluation. We provide suggested cut points and logistic regression models for predicting the probability of poor effort in our clinical setting and offer suggested cutoff scores to optimize sensitivity and specificity.
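A hedged sketch of the kind of embedded-validity model described above: a logistic regression on two fluency scores with the decision threshold pinned to 90% specificity, from which sensitivity is read off. The data here are simulated with made-up group means and spreads, so the resulting cut point is illustrative only and is not the authors' published cutoff.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated scores: fas and animals are fluency totals; poor_effort is 1 for
# examinees failing >= 2 PVTs, 0 for those failing none. Group means are invented.
rng = np.random.default_rng(0)
fas = np.r_[rng.normal(38, 10, 431), rng.normal(30, 10, 192)]
animals = np.r_[rng.normal(19, 5, 431), rng.normal(14, 5, 192)]
poor_effort = np.r_[np.zeros(431), np.ones(192)].astype(bool)

X = np.column_stack([fas, animals])
probs = LogisticRegression().fit(X, poor_effort).predict_proba(X)[:, 1]

# Choose the cut point that fixes specificity at ~90%, then read off sensitivity.
cut = np.quantile(probs[~poor_effort], 0.90)
sensitivity = (probs[poor_effort] >= cut).mean()
print(f"cut={cut:.3f}, sensitivity at 90% specificity: {sensitivity:.2f}")
```

Fixing specificity first is the usual design choice for embedded validity indicators, since false accusations of poor effort are costlier than misses.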
Efficient statistical tests to compare Youden index: accounting for contingency correlation.
Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan
2015-04-30
The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired-sample test on the Youden index has been unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the delta method, and the statistical inference is based on central limit theory, which is then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because the asymptotic and exact bootstrap computations can be readily implemented with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
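For orientation, the sketch below implements the basic two independent-sample Wald test on Youden's J (J = sensitivity + specificity - 1) using simple binomial delta-method variances, i.e., the baseline the authors improve upon; their contribution of additionally modelling the sensitivity-specificity association is not reproduced here. The counts are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def j_and_var(tp, fn, tn, fp):
    """Youden's J and its delta-method variance from one 2x2 table,
    treating the diseased and non-diseased samples as independent binomials."""
    se, sp = tp / (tp + fn), tn / (tn + fp)
    var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
    return se + sp - 1.0, var

def youden_two_sample_test(table1, table2):
    """Wald z-test of H0: J1 = J2 for two independent studies."""
    j1, v1 = j_and_var(*table1)
    j2, v2 = j_and_var(*table2)
    z = (j1 - j2) / sqrt(v1 + v2)
    return j1, j2, z, 2.0 * norm.sf(abs(z))

# Hypothetical (TP, FN, TN, FP) counts for two tests.
print(youden_two_sample_test((80, 20, 90, 10), (70, 30, 85, 15)))
```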
Simulation of the spatial frequency-dependent sensitivities of Acoustic Emission sensors
NASA Astrophysics Data System (ADS)
Boulay, N.; Lhémery, A.; Zhang, F.
2018-05-01
Typical configurations of nondestructive testing by Acoustic Emission (NDT/AE) make use of multiple sensors positioned on the tested structure for detecting evolving flaws and possibly locating them by triangulation. Sensor positions must be optimized to ensure global coverage sensitivity to AE events while minimizing the number of sensors. A simulator of NDT/AE is under development to help design testing configurations and interpret measurements. A global model chains sub-models simulating the various phenomena taking place at different spatial and temporal scales (crack growth, AE source and radiation, wave propagation in the structure, reception by sensors). In this context, accurate modelling of sensor behaviour must be developed. These sensors generally consist of a cylindrical piezoelectric element of radius approximately equal to its thickness, without damping, bonded to its case; the sensors themselves are bonded to the structure being tested. Here, a multiphysics finite element simulation tool is used to study the complex behaviour of AE sensors. The simulated behaviour is shown to accurately reproduce the high-amplitude measured contributions used in AE practice.
Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.
Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P
2018-06-01
We present the results of a local Lorentz invariance (LLI) test performed with the 133Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133Cs hyperfine Zeeman substates to the Lorentz-violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton coefficients to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient, another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve by up to 13 orders of magnitude the present maximal sensitivities for laboratory tests and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records, and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variance ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also contributed substantially to micro-environmental sensitivity. Addition of random regressions to the mean model did not reduce heterogeneity in residual variance, indicating that genetic heterogeneity of residual variance was not simply an effect of an incomplete mean model. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Prevalidation of an Acute Inhalation Toxicity Test Using the EpiAirway In Vitro Human Airway Model.
Jackson, George R; Maione, Anna G; Klausner, Mitchell; Hayden, Patrick J
2018-06-01
Introduction: Knowledge of acute inhalation toxicity potential is important for establishing safe use of chemicals and consumer products. Inhalation toxicity testing and classification procedures currently accepted within worldwide government regulatory systems rely primarily on tests conducted in animals. The goal of the current work was to develop and prevalidate a nonanimal (in vitro) test for determining acute inhalation toxicity using the EpiAirway™ in vitro human airway model as a potential alternative for currently accepted animal tests. Materials and Methods: The in vitro test method exposes EpiAirway tissues to test chemicals for 3 hours, followed by measurement of tissue viability as the test endpoint. Fifty-nine chemicals covering a broad range of toxicity classes, chemical structures, and physical properties were evaluated. The in vitro toxicity data were utilized to establish a prediction model to classify the chemicals into categories corresponding to the currently accepted Globally Harmonized System (GHS) and the Environmental Protection Agency (EPA) system. Results: The EpiAirway prediction model identified in vivo rat-based GHS Acute Inhalation Toxicity Category 1-2 and EPA Acute Inhalation Toxicity Category I-II chemicals with 100% sensitivity and specificity of 43.1% and 50.0% for the GHS and EPA acute inhalation toxicity systems, respectively. The sensitivity and specificity of the EpiAirway prediction model for identifying GHS specific target organ toxicity-single exposure (STOT-SE) Category 1 human toxicants were 75.0% and 56.5%, respectively. Corrosivity and electrophilic and oxidative reactivity appear to be the predominant mechanisms of toxicity for the most highly toxic chemicals. Conclusions: These results indicate that the EpiAirway test is a promising alternative to the currently accepted animal tests for acute inhalation toxicity.
Variation of a test’s sensitivity and specificity with disease prevalence
Leeflang, Mariska M.G.; Rutjes, Anne W.S.; Reitsma, Johannes B.; Hooft, Lotty; Bossuyt, Patrick M.M.
2013-01-01
Background: Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. Methods: We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Results: Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. Interpretation: The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation. PMID:23798453
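The inverse variance pooling step mentioned in the Methods can be stated compactly. The sketch below pools hypothetical per-review effects of prevalence (e.g., on logit specificity) with fixed-effect weights; the effect values and variances are invented for illustration.

```python
import numpy as np

def pool_inverse_variance(effects, variances):
    """Inverse-variance (fixed-effect) pooling: weighted mean with weights 1/var,
    plus a 95% confidence interval for the pooled effect."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-meta-analysis effects of prevalence on logit(specificity).
print(pool_inverse_variance([-0.8, -0.3, -0.5], [0.04, 0.09, 0.02]))
```

A negative pooled effect here would correspond to the paper's overall finding that specificity tends to fall as prevalence rises.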
Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.
Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea
2014-01-01
A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
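A worked version of the core δ13C calculation such a submodel might perform, assuming the simple Farquhar discrimination model (Δ = a + (b - a)·ci/ca) with the usual textbook constants and ci/ca inferred from assimilation and canopy conductance to CO2. This is a generic sketch, not the actual 3-PG submodel code, and the input values are illustrative.

```python
def delta13c_ring(A, g_c, ca=400.0, d13c_air=-8.0, a=4.4, b=27.0):
    """Tree-ring delta13C (permil) from assimilation A (umol m-2 s-1) and canopy
    conductance to CO2 g_c (mol m-2 s-1). Constants: a = diffusion fractionation,
    b = carboxylation fractionation; ca in umol/mol; d13c_air of background CO2."""
    ci_over_ca = 1.0 - A / (g_c * ca)         # from Fick's law, A = g_c * (ca - ci)
    big_delta = a + (b - a) * ci_over_ca       # photosynthetic discrimination
    return (d13c_air - big_delta) / (1.0 + big_delta / 1000.0)

# Illustrative inputs give a plausible conifer tree-ring value near -30 permil.
print(f"{delta13c_ring(A=8.0, g_c=0.1):.1f} permil")
```

Because ci/ca rises when conductance outpaces assimilation, errors in either flux shift the predicted δ13C, which is exactly why the isotope comparison constrains the model's parameter space.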
West, Caroline; Ploth, David; Fonner, Virginia; Mbwambo, Jessie; Fredrick, Francis; Sweat, Michael
2016-04-01
Noncommunicable diseases are on pace to outnumber infectious diseases as the leading cause of death in sub-Saharan Africa, yet many questions remain unanswered concerning effective methods of screening for type II diabetes mellitus (DM) in this resource-limited setting. We aim to design a screening algorithm for type II DM that optimizes the sensitivity and specificity of identifying individuals with undiagnosed DM, as well as affordability to health systems and individuals. Baseline demographic and clinical data, including hemoglobin A1c (HbA1c), were collected from 713 participants using probability sampling of the general population. We used these data, along with model parameters obtained from the literature, to mathematically model 8 proposed DM screening algorithms, optimizing the sensitivity and specificity using Monte Carlo and Latin hypercube simulation. An algorithm that combines risk assessment and measurement of fasting blood glucose was found to be superior for the most resource-limited settings (sensitivity 68%, specificity 99%, and cost per patient having DM identified of $2.94). Incorporating HbA1c testing improves the sensitivity to 75.62% but raises the cost per DM case identified to $6.04. The preferred algorithms are heavily biased toward diagnosing those with more severe cases of DM. Using basic risk assessment tools and fasting blood sugar testing in lieu of HbA1c testing in resource-limited settings could allow for significantly more feasible DM screening programs with reasonable sensitivity and specificity. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
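A minimal sketch of the kind of Monte Carlo evaluation described, for a serial algorithm in which fasting glucose is measured only in risk-score positives: overall sensitivity is the product of the stage sensitivities, and cost per case found is total cost over true positives. All parameter ranges, unit costs, and the assumed prevalence are invented for illustration and are not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N, prev = 10_000, 0.06      # screened population and assumed DM prevalence
draws = 5_000               # Monte Carlo draws over uncertain test parameters

# Illustrative uniform ranges for risk-score and fasting-glucose accuracy.
se1, sp1 = rng.uniform(0.75, 0.95, draws), rng.uniform(0.55, 0.75, draws)
se2 = rng.uniform(0.80, 0.95, draws)
cost1, cost2 = 0.10, 1.50   # $ per risk assessment, $ per glucose test (assumed)

# Serial algorithm: glucose testing only for risk-score positives.
n_pos1 = N * (prev * se1 + (1 - prev) * (1 - sp1))
overall_se = se1 * se2
cases_found = N * prev * overall_se
total_cost = N * cost1 + n_pos1 * cost2
print(f"mean sensitivity: {overall_se.mean():.2f}, "
      f"mean cost per case found: ${np.mean(total_cost / cases_found):.2f}")
```

Latin hypercube sampling would replace the independent uniform draws with a stratified design, covering the parameter space more evenly for the same number of draws.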
Ispa, Jean M; Su-Russell, Chang; Palermo, Francisco; Carlo, Gustavo
2017-03-01
Using data from the Early Head Start Research and Evaluation Project, a cross-lag mediation model was tested to examine longitudinal relations among low-income mothers' sensitivity, toddlers' engagement of their mothers, and toddlers' self-regulation at ages 1, 2, and 3 years (N = 2,958). Age 1 maternal sensitivity predicted self-regulation at ages 2 and 3 years, and age 2 engagement of the mother mediated the relation between age 1 maternal sensitivity and age 3 self-regulation. Lagged relations from toddler self-regulation at ages 1 and 2 years to later maternal sensitivity were not significant, suggesting a stronger influence from mother to toddler than vice versa. Model fit was similar regardless of child gender and depth of family poverty. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Kindling of life stress in bipolar disorder: comparison of sensitization and autonomy models.
Weiss, Rachel B; Stange, Jonathan P; Boland, Elaine M; Black, Shimrit K; LaBelle, Denise R; Abramson, Lyn Y; Alloy, Lauren B
2015-02-01
Research on life stress in bipolar disorder largely fails to account for the possibility of a dynamic relationship between psychosocial stress and episode initiation. The kindling hypothesis (Post, 1992) states that over the course of recurrent affective disorders, there is a weakening temporal relationship between major life stress and episode initiation that could reflect either a progressive sensitization or progressive autonomy to life stress. The present study involved a comprehensive and precise examination of the kindling hypothesis in 102 participants with bipolar II disorder that allowed for a direct comparison of sensitization and autonomy models. Polarity-specific tests were conducted across the continuum of event severity with respect to impact and frequency of life events. Hypotheses were polarity- and event-valence specific and were based on the stress sensitization model. Results were only partially consistent with the sensitization model: Individuals with more prior mood episodes had an increased frequency of minor negative events before depression and of minor positive events before hypomania. However, the number of past episodes did not moderate relationships between life events and time until prospective onset of mood episodes. These results are more consistent with a sensitization than an autonomy model, but several predictions of the sensitization model were not supported. Methodological strengths, limitations, and implications are discussed regarding putative changes in stress reactivity that may occur with repeated exposure to mood episodes in bipolar II disorder. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Control of Wheel/Rail Noise and Vibration
DOT National Transportation Integrated Search
1982-04-01
An analytical model of the generation of wheel/rail noise has been developed and validated through an extensive series of field tests carried out at the Transportation Test Center using the State of the Art Car. A sensitivity analysis has been perfor...
Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis
NASA Astrophysics Data System (ADS)
Kurtulus, Bedri; Flipo, Nicolas
2012-01-01
The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Then each is used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The best model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
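The two-stage selection procedure reads naturally as a small search loop. In the sketch below, train_and_score and count_error_cells are placeholders (returning random values so the script runs) for fitting ANFIS on the calibration/training subsets and for counting grid cells where interpolated head exceeds the soil elevation; only the selection logic reflects the procedure described, and no actual ANFIS library is assumed.

```python
import random
from itertools import product

random.seed(0)
MF_TYPES = ["triangular", "gaussian", "gbell", "spline"]
MF_COUNTS = [2, 3, 4, 5]

def train_and_score(mf_type, n_mf):
    # Placeholder: fit ANFIS on calibration/training subsets, return an error
    # criterion (e.g., RMSE) on the 18% test subset.
    return random.random()

def count_error_cells(mf_type, n_mf):
    # Placeholder: interpolate head on the (50 x 50) m grid and count cells
    # where predicted head exceeds the soil elevation.
    return random.randint(0, 100)

# Stage 1: keep the 5 best of the 16 candidate models by test-set performance.
scores = {cfg: train_and_score(*cfg) for cfg in product(MF_TYPES, MF_COUNTS)}
best5 = sorted(scores, key=scores.get)[:5]
# Stage 2: among those, pick the model producing the fewest "error cells".
final = min(best5, key=lambda cfg: count_error_cells(*cfg))
print("selected ANFIS configuration:", final)
```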
Testing for the linearity of responses to multiple anthropogenic climate forcings
NASA Astrophysics Data System (ADS)
Forest, C. E.; Stone, P. H.; Sokolov, A. P.
2001-12-01
To test whether climate forcings are additive, we compare climate model simulations in which anthropogenic forcings are applied individually and in combination. Tests are performed with different values for climate system properties (climate sensitivity and rate of heat uptake by the deep ocean) as well as for different strengths of the net aerosol forcing, thereby testing for the dependence of linearity on these properties. The MIT 2D Land-Ocean Climate Model used in this study consists of a zonally averaged statistical-dynamical atmospheric model coupled to a mixed-layer Q-flux ocean model, with heat anomalies diffused into the deep ocean. Following our previous studies, the anthropogenic forcings are the changes in concentrations of greenhouse gases (1860-1995), sulfate aerosol (1860-1995), and stratospheric and tropospheric ozone (1979-1995). The sulfate aerosol forcing is applied as a surface albedo change. For an aerosol forcing of -1.0 W/m² and an effective ocean diffusivity of 2.5 cm²/s, the nonlinearity of the response of global-mean surface temperature to the combined forcing shows a strong dependence on climate sensitivity. The fractional change in decadal averages, [(ΔT_G + ΔT_S + ΔT_O) - ΔT_GSO]/ΔT_GSO, for the 1986-1995 period compared to pre-industrial times is 0.43, 0.90, and 1.08 for climate sensitivities of 3.0, 4.5, and 6.2 °C, respectively. The values of ΔT_GSO for these three cases are 0.52, 0.62, and 0.76 °C. The dependence of linearity on climate system properties, the role of climate system feedbacks, and the implications for the detection of the climate system's response to individual forcings will be presented. Details of the model and forcings can be found at http://web.mit.edu/globalchange/www/.
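The nonlinearity measure quoted above is just a fractional difference of responses; the one-liner below makes that explicit. The component temperature responses used in the call are invented so that the result lands near the reported 0.43, purely for illustration.

```python
def nonlinearity(dT_G, dT_S, dT_O, dT_GSO):
    """Fractional difference between the sum of single-forcing responses
    (greenhouse gases, sulfate, ozone) and the combined-forcing response."""
    return ((dT_G + dT_S + dT_O) - dT_GSO) / dT_GSO

# Illustrative component responses; only dT_GSO = 0.52 is given in the abstract.
print(f"{nonlinearity(0.90, -0.10, -0.06, 0.52):.2f}")
```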
ERIC Educational Resources Information Center
Atkinson, Leslie; Goldberg, Susan; Raval, Vaishali; Pederson, David; Benoit, Diane; Moran, Greg; Poulton, Lori; Myhal, Natalie; Zwiers, Michael; Leung, Eman
2005-01-01
Attachment theorists assume that maternal mental representations influence responsivity, which influences infant attachment security. However, primary studies do not support this mediation model. The authors tested mediation using 2 mother-infant samples and found no evidence of mediation. Therefore, the authors explored sensitivity as a…
Caetano, Ana C; Santa-Cruz, André; Rolanda, Carla
2016-01-01
Background. Rome III criteria add physiological criteria to the symptom-based criteria of chronic constipation (CC) for the diagnosis of defecatory disorders (DD). However, a gold-standard test is still lacking, and physiological examination is expensive and time-consuming. Aim. To evaluate the usefulness of two low-cost tests, digital rectal examination (DRE) and balloon expulsion test (BET), as screening or excluding tests for DD. Methods. We performed a systematic search in PUBMED and MEDLINE. We selected studies where constipated patients were evaluated by DRE or BET. Heterogeneity was assessed, and random effect models were used to calculate the sensitivity, specificity, and negative predictive value (NPV) of the DRE and the BET. Results. Thirteen studies evaluating the BET and four studies evaluating the DRE (2,329 patients) were selected. High heterogeneity (I² > 80%) among studies was demonstrated. The studies evaluating the BET showed a sensitivity and specificity of 67% and 80%, respectively. Regarding the DRE, a sensitivity of 80% and specificity of 84% were calculated. An NPV of 72% for the BET and an NPV of 64% for the DRE were estimated. The sensitivity and specificity were similar when we restricted the analysis to studies using Rome criteria to define CC. The BET seems to perform better when a cut-off time of 2 minutes is used and when it is compared with a combination of physiological tests. Considering the DRE, strict criteria seem to improve the sensitivity but not the specificity of the test. Conclusion. Neither of the low-cost tests seems suitable for screening or excluding DD.
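For readers who want to see where an NPV figure comes from, the sketch below computes it from sensitivity, specificity, and pretest prevalence. With the review's BET estimates (Se 67%, Sp 80%) and an assumed 50% prevalence of DD among referred constipated patients, it lands close to the reported 72%; the prevalence is an assumption for illustration, not a value from the review.

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value from test accuracy and pretest probability."""
    tn = specificity * (1.0 - prevalence)       # true-negative fraction
    fn = (1.0 - sensitivity) * prevalence       # false-negative fraction
    return tn / (tn + fn)

print(f"BET NPV at 50% prevalence: {npv(0.67, 0.80, 0.50):.2f}")
```

Because NPV falls as prevalence rises, the same test excludes DD far more reliably in low-prevalence primary care than in a referral population.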
Capron, Daniel W.; Allan, Nicholas P.; Ialongo, Nicholas S.; Leen-Feldner, Ellen; Schmidt, Norman B.
2015-01-01
Adolescents with comorbid anxiety and depression are at significantly increased risk of suicide. The recently proposed depression distress amplification model appears to have promise for explaining the relations between anxiety, depression, and suicidality, but it has not been tested in adolescents. Participants were 524 adolescents followed over two years. Baseline data for the current report were collected by trained interviewers while the adolescents were in eighth grade. Data were obtained in the same manner when the adolescents were in tenth grade. Baseline anxiety sensitivity cognitive concerns significantly predicted suicidal ideation two years later, above and beyond baseline suicidal ideation and depression. Further, consistent with the depression distress amplification model, anxiety sensitivity cognitive concerns interacted with depressive symptoms to predict suicidal ideation. This report extends the empirical and theoretical support for a relationship between anxiety sensitivity cognitive concerns and suicidality. PMID:25754194
NASA Technical Reports Server (NTRS)
McLachlan, B. G.; Bell, J. H.; Park, H.; Kennelly, R. A.; Schreiner, J. A.; Smith, S. C.; Strong, J. M.; Gallery, J.; Gouterman, M.
1995-01-01
The pressure-sensitive paint method was used in the test of a high-sweep oblique wing model, conducted in the NASA Ames 9- by 7-ft Supersonic Wind Tunnel. Surface pressure data was acquired from both the luminescent paint and conventional pressure taps at Mach numbers between M = 1.6 and 2.0. In addition, schlieren photographs of the outer flow were used to determine the location of shock waves impinging on the model. The results show that the luminescent pressure-sensitive paint can capture both global and fine features of the static surface pressure field. Comparison with conventional pressure tap data shows good agreement between the two techniques, and that the luminescent paint data can be used to make quantitative measurements of the pressure changes over the model surface. The experiment also demonstrates the practical considerations and limitations that arise in the application of this technique under supersonic flow conditions in large-scale facilities, as well as the directions in which future research is necessary in order to make this technique a more practical wind-tunnel testing tool.
Modified Petri net model sensitivity to workload manipulations
NASA Technical Reports Server (NTRS)
White, S. A.; Mackinnon, D. P.; Lyman, J.
1986-01-01
Modified Petri Nets (MPNs) are investigated as a workload modeling tool. The results of an exploratory study of the sensitivity of MPNs to work load manipulations in a dual task are described. Petri nets have been used to represent systems with asynchronous, concurrent and parallel activities (Peterson, 1981). These characteristics led some researchers to suggest the use of Petri nets in workload modeling where concurrent and parallel activities are common. Petri nets are represented by places and transitions. In the workload application, places represent operator activities and transitions represent events. MPNs have been used to formally represent task events and activities of a human operator in a man-machine system. Some descriptive applications demonstrate the usefulness of MPNs in the formal representation of systems. It is the general hypothesis herein that in addition to descriptive applications, MPNs may be useful for workload estimation and prediction. The results are reported of the first of a series of experiments designed to develop and test a MPN system of workload estimation and prediction. This first experiment is a screening test of MPN model general sensitivity to changes in workload. Positive results from this experiment will justify the more complicated analyses and techniques necessary for developing a workload prediction system.
DSMC Simulations of Hypersonic Flows With Shock Interactions and Validation With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.
2004-01-01
The capabilities of a relatively new direct simulation Monte Carlo (DSMC) code are examined for the problem of hypersonic laminar shock/shock and shock/boundary layer interactions, where boundary layer separation is an important feature of the flow. Flow about two model configurations is considered, where both configurations (a biconic and a hollow cylinder-flare) have recent published experimental measurements. The computations are made by using the DS2V code of Bird, a general two-dimensional/axisymmetric time accurate code that incorporates many of the advances in DSMC over the past decade. The current focus is on flows produced in ground-based facilities at Mach 12 and 16 test conditions with nitrogen as the test gas and the test models at zero incidence. Results presented highlight the sensitivity of the calculations to grid resolution, sensitivity to physical modeling parameters, and comparison with experimental measurements. Information is provided concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
Health Insurance: The Trade-Off Between Risk Pooling and Moral Hazard.
1989-12-01
bias comes about because we suppress the intercept term in estimating V... For the power, the test is against 1, -1. With this transform, the risk... dealing with the same utility function. As one test of whether families behave in the way economic theory suggests, we have also fitted a probit model of... nonparametric alternative to test our results' sensitivity to the assumption of a normal error in both the theoretical and empirical models of...
Optical Modeling of the Alignment and Test of the NASA James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Howard, Joseph M.; Hayden, Bill; Keski-Kuha, Ritva; Feinberg, Lee
2007-01-01
Optical modeling challenges of the ground alignment plan and optical test and verification of the NASA James Webb Space Telescope are discussed. Issues such as back-out of the gravity sag of light-weighted mirrors, as well as the use of a sparse-aperture auto-collimating flat system, are discussed. A walk-through of the interferometer-based alignment procedure is summarized, and sensitivities from the sparse aperture wavefront test are included as examples.
Testing cosmogonic models with gravitational lensing.
Wambsganss, J; Cen, R; Ostriker, J P; Turner, E L
1995-04-14
Gravitational lensing provides a strict test of cosmogonic models because it is directly sensitive to mass inhomogeneities. Detailed numerical propagation of light rays through a universe that has a distribution of inhomogeneities derived from the standard CDM (cold dark matter) scenario, with the aid of massive, fully nonlinear computer simulations, was used to test the model. It predicts that more widely split quasar images should have been seen than were actually found. These and other inconsistencies rule out the Cosmic Background Explorer (COBE)-normalized CDM model with density parameter Ω = 1 and Hubble constant H₀ = 50 km s⁻¹ Mpc⁻¹; but variants of this model might be constructed, which could pass the stringent tests provided by strong gravitational lensing.
Schütte, Moritz; Risch, Thomas; Abdavi-Azar, Nilofar; Boehnke, Karsten; Schumacher, Dirk; Keil, Marlen; Yildiriman, Reha; Jandrasits, Christine; Borodina, Tatiana; Amstislavskiy, Vyacheslav; Worth, Catherine L.; Schweiger, Caroline; Liebs, Sandra; Lange, Martin; Warnatz, Hans-Jörg; Butcher, Lee M.; Barrett, James E.; Sultan, Marc; Wierling, Christoph; Golob-Schwarzl, Nicole; Lax, Sigurd; Uranitsch, Stefan; Becker, Michael; Welte, Yvonne; Regan, Joseph Lewis; Silvestrov, Maxine; Kehler, Inge; Fusi, Alberto; Kessler, Thomas; Herwig, Ralf; Landegren, Ulf; Wienke, Dirk; Nilsson, Mats; Velasco, Juan A.; Garin-Chesa, Pilar; Reinhard, Christoph; Beck, Stephan; Schäfer, Reinhold; Regenbrecht, Christian R. A.; Henderson, David; Lange, Bodo; Haybaeck, Johannes; Keilholz, Ulrich; Hoffmann, Jens; Lehrach, Hans; Yaspo, Marie-Laure
2017-01-01
Colorectal carcinoma represents a heterogeneous entity, with only a fraction of the tumours responding to available therapies, requiring a better molecular understanding of the disease for precision oncology. To address this challenge, the OncoTrack consortium recruited 106 CRC patients (stages I–IV) and developed a pre-clinical platform generating a compendium of drug sensitivity data totalling >4,000 assays testing 16 clinical drugs on patient-derived in vivo and in vitro models. This large biobank of 106 tumours, 35 organoids and 59 xenografts, with extensive omics data comparing donor tumours and derived models, provides a resource for advancing our understanding of CRC. The models recapitulate many of the genetic and transcriptomic features of the donors but define less complex molecular sub-groups because of the loss of human stroma. Linking molecular profiles with drug sensitivity patterns identifies novel biomarkers, including a signature outperforming RAS/RAF mutations in predicting sensitivity to the EGFR inhibitor cetuximab. PMID:28186126
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10-HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
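The core of the described algorithm is a normal-CDF estimate of the share of fat-attenuating (negative-HU) pixels from the nodule's mean and standard deviation. A minimal sketch, with illustrative HU values rather than the authors' decision thresholds:

```python
from scipy.stats import norm

def negative_pixel_fraction(mean_hu, sd_hu):
    """Estimated fraction of pixels below 0 HU, assuming the nodule's
    attenuation values follow a Gaussian distribution."""
    return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

# A nodule with mean 18 HU and SD 20 HU (values illustrative): the plain
# 10-HU mean-attenuation rule would call it indeterminate, yet ~18% of its
# pixels are estimated to be fat-attenuating.
print(f"{negative_pixel_fraction(18.0, 20.0):.1%}")
```

This is why the method can pick up lipid-poor adenomas whose mean attenuation exceeds 10 HU, trading a little specificity for a large sensitivity gain.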
Pred-Skin: A Fast and Reliable Web Application to Assess Skin Sensitization Effect of Chemicals.
Braga, Rodolpho C; Alves, Vinicius M; Muratov, Eugene N; Strickland, Judy; Kleinstreuer, Nicole; Tropsha, Alexander; Andrade, Carolina Horta
2017-05-22
Chemically induced skin sensitization is a complex immunological disease with a profound impact on quality of life and working ability. Despite some progress in developing alternative methods for assessing the skin sensitization potential of chemical substances, there is no in vitro test that correlates well with human data. Computational QSAR models provide a rapid screening approach and contribute valuable information for the assessment of chemical toxicity. We describe the development of a freely accessible web-based and mobile application for the identification of potential skin sensitizers. The application is based on previously developed binary QSAR models of skin sensitization potential from human (109 compounds) and murine local lymph node assay (LLNA, 515 compounds) data with good external correct classification rate (0.70-0.81 and 0.72-0.84, respectively). We also included a multiclass skin sensitization potency model based on LLNA data (accuracy ranging between 0.73 and 0.76). When a user evaluates a compound in the web app, the outputs are (i) binary predictions of human and murine skin sensitization potential; (ii) multiclass prediction of murine skin sensitization; and (iii) probability maps illustrating the predicted contribution of chemical fragments. The app is the first tool available that incorporates quantitative structure-activity relationship (QSAR) models based on human data as well as multiclass models for LLNA. The Pred-Skin web app version 1.0 is freely available for the web, iOS, and Android (in development) at the LabMol web portal ( http://labmol.com.br/predskin/ ), in the Apple Store, and on Google Play, respectively. We will continuously update the app as new skin sensitization data and respective models become available.
NASA Technical Reports Server (NTRS)
Adams, J. J.
1980-01-01
A study of the use of conventional general aviation instruments by general aviation pilots in a six-degree-of-freedom, fixed-base simulator was conducted. The tasks performed were tracking a VOR radial and making an ILS approach to landing. A special feature of the tests was that the sensitivity of the displacement-indicating instruments (the RMI, CDI, and HSI) was kept constant at values corresponding to 5 n. mi. and 1.25 n. mi. from the station. Both statistical and pilot model analyses of the data were made. The results show that performance in path following improved with increases in display sensitivity up to the highest sensitivity tested. At this maximum test sensitivity, which corresponds to the sensitivity existing at 1.25 n. mi. for the ILS glide slope transmitter, tracking accuracy was no better than it was at 5 n. mi. from the station, and the pilot-aircraft system exhibited a marked reduction in damping. In some cases, a pilot-induced, long-period unstable oscillation occurred.
Laskowitz, Daniel T; Kasner, Scott E; Saver, Jeffrey; Remmel, Kerri S; Jauch, Edward C
2009-01-01
One of the significant limitations in the evaluation and management of patients with suspected acute cerebral ischemia is the absence of a widely available, rapid, and sensitive diagnostic test. The objective of the current study was to assess whether a test using a panel of biomarkers might provide useful diagnostic information in the early evaluation of stroke by differentiating patients with cerebral ischemia from other causes of acute neurological deficit. A total of 1146 patients presenting with neurological symptoms consistent with possible stroke were prospectively enrolled at 17 different sites. Timed blood samples were assayed for matrix metalloproteinase 9, brain natriuretic factor, d-dimer, and protein S100beta. A separate cohort of 343 patients was independently enrolled to validate the multiple biomarker model approach. A diagnostic tool incorporating the values of matrix metalloproteinase 9, brain natriuretic factor, d-dimer, and S100beta into a composite score was sensitive for acute cerebral ischemia. The multivariate model demonstrated modest discriminative capabilities, with an area under the receiver operating characteristic curve of 0.76 for hemorrhagic stroke and 0.69 for all stroke (likelihood test P<0.001). When the threshold for the logistic model was set at the first quartile, this resulted in a sensitivity of 86% for detecting all stroke and a sensitivity of 94% for detecting hemorrhagic stroke. Moreover, results were reproducible in a separate cohort tested on a point-of-care platform. These results suggest that a biomarker panel may add valuable and time-sensitive diagnostic information in the early evaluation of stroke. Such an approach is feasible on a point-of-care platform. The rapid identification of patients with suspected stroke would expand the availability of time-limited treatment strategies. Although the diagnostic accuracy of the current panel is clearly imperfect, this study demonstrates the feasibility of incorporating a biomarker-based point-of-care algorithm with readily available clinical data to aid in the early evaluation and management of patients at high risk for cerebral ischemia.
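A hedged sketch of how such a composite biomarker score and first-quartile threshold could be assembled; the four-marker matrix is simulated, the coefficients are arbitrary, and nothing here reproduces the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated biomarker matrix: columns stand in for MMP-9, BNP, D-dimer and
# S100beta (log-transformed); y is 1 for confirmed cerebral ischemia.
rng = np.random.default_rng(2)
X = rng.normal(size=(1146, 4))
y = (X @ np.array([0.5, 0.4, 0.3, 0.6]) + rng.normal(size=1146)) > 0

score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Threshold at the first quartile of the composite score, as in the abstract,
# deliberately trading specificity for sensitivity in a rule-out setting.
cut = np.quantile(score, 0.25)
sensitivity = (score[y] >= cut).mean()
print(f"sensitivity at first-quartile cut: {sensitivity:.2f}")
```

Setting the cut so low makes sense for triage: the panel is meant to flag nearly all strokes for urgent workup, with false positives resolved downstream by imaging.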
Oláh, Viktor; Hepp, Anna; Mészáros, Ilona
2016-05-01
In this study, germination of Spirodela polyrhiza (L.) Schleiden (giant duckweed) turions was assessed under cadmium exposure to test the applicability of a novel turion-based ecotoxicological test method. Floating success of germinating turions and protrusion of the first and subsequent fronds were investigated as test endpoints and compared to the results of standard duckweed growth inhibition tests with fronds of the same species. Our results indicate that turions can be used to characterize the effects of toxic substances. The initial phase of turion germination (floating up and appearance of the first frond) was less sensitive to Cd treatments than subsequent frond production. The calculated effective concentrations for growth rates in turion and normal frond tests were similar. The single frond area produced by germinating turions proved to be the most sensitive test endpoint. Single frond area and colony disintegration, measured as additional parameters in normal frond cultures, also changed under Cd treatments, but the sensitivity of these parameters was lower than that of growth rates.
Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models
ERIC Educational Resources Information Center
Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles
2012-01-01
This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
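As an illustration of the power calculation for one such design, the sketch below computes Wald-test power for Warner's model, where a respondent answers the sensitive question with probability p and its complement otherwise, so P(yes) = πp + (1 - π)(1 - p). The choices p = 0.7, n = 1000, and a one-sided α = 0.05 are assumptions for illustration, not values from the article.

```python
from math import sqrt
from scipy.stats import norm

def warner_power(pi, p=0.7, n=1000, alpha=0.05):
    """Power of a one-sided Wald test of H0: pi = 0 in Warner's randomized
    response model. The estimator is pi_hat = (lam_hat - (1-p)) / (2p - 1),
    with Var(pi_hat) = lam(1-lam) / (n (2p-1)^2)."""
    lam0 = 1.0 - p                             # P(yes) under H0 (pi = 0)
    lam1 = pi * p + (1.0 - pi) * (1.0 - p)     # P(yes) under the alternative
    se0 = sqrt(lam0 * (1.0 - lam0) / n) / abs(2.0 * p - 1.0)
    se1 = sqrt(lam1 * (1.0 - lam1) / n) / abs(2.0 * p - 1.0)
    z_crit = norm.ppf(1.0 - alpha)
    return norm.sf((z_crit * se0 - pi) / se1)

for pi in (0.01, 0.05, 0.10):
    print(f"prevalence {pi:.2f}: power {warner_power(pi):.3f}")
```

The steep loss of power at small π is exactly the regime the article targets, since doping prevalence among elite athletes is expected to be low.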
Imaging for Appendicitis: Should Radiation-induced Cancer Risks Affect Modality Selection?
Kiatpongsan, Sorapop; Meng, Lesley; Eisenberg, Jonathan D.; Herring, Maurice; Avery, Laura L.; Kong, Chung Yin
2014-01-01
Purpose: To compare life expectancy (LE) losses attributable to three imaging strategies for appendicitis in adults—computed tomography (CT), ultrasonography (US) followed by CT for negative or indeterminate US results, and magnetic resonance (MR) imaging—by using a decision-analytic model. Materials and Methods: In this model, for each imaging strategy, LE losses for 20-, 40-, and 65-year-old men and women were computed as a function of five key variables: baseline cohort LE, test performance, surgical mortality, risk of death from delayed diagnosis (missed appendicitis), and LE loss attributable to radiation-induced cancer death. Appendicitis prevalence, test performance, mortality rates from surgery and missed appendicitis, and radiation doses from CT were elicited from the published literature and institutional data. LE loss attributable to radiation exposure was projected by using a separate organ-specific model that accounted for anatomic coverage during a typical abdominopelvic CT examination. One- and two-way sensitivity analyses were performed to evaluate the effects of model input variability on results. Results: Outcomes across imaging strategies differed minimally—for example, for 20-year-old men, corresponding LE losses were 5.8 days (MR imaging), 6.8 days (combined US and CT), and 8.2 days (CT). This order was sensitive to differences in test performance but was insensitive to variation in radiation-induced cancer deaths. For example, in the same cohort, MR imaging sensitivity had to be 91% at minimum (if specificity were 100%), and MR imaging specificity had to be 62% at minimum (if sensitivity were 100%), to incur the least LE loss. Conversely, LE loss attributable to radiation exposure would need to decrease by 74-fold for combined US and CT, instead of MR imaging, to incur the least LE loss. Conclusion: The specific imaging strategy used to diagnose appendicitis minimally affects outcomes. Paradigm shifts to MR imaging owing to concerns over radiation should be considered only if MR imaging test performance is very high. © RSNA, 2014 PMID:24988435
A neonatal swine model of allergy induced by the major food allergen chicken ovomucoid (Gal d 1).
Rupa, Prithy; Hamilton, Korinne; Cirinna, Melissa; Wilkie, Bruce N
2008-01-01
Food allergy is a serious health problem for which a validated outbred large animal model would be useful in comparative investigations of immunopathogenesis and treatment and in testing hypotheses relevant to complex host-environmental interactions in predisposition to and expression of food allergy. To establish a neonatal swine model of IgE-mediated allergy to the egg protein ovomucoid (Ovm) that may mimic human allergy. In order to induce Ovm sensitivity, piglets at days 14, 21 and 35 of age were sensitized by intraperitoneal injection of 100 microg of crude Ovm and cholera toxin (50, 25 or 10 microg). Controls received 50 microg of cholera toxin in phosphate-buffered saline. The animals were challenged orally on day 46 with a mixture of egg white and yoghurt. Outcomes were reported as direct skin tests, clinical signs, IgG-related antibody and passive cutaneous anaphylaxis. Sensitized pigs developed immediate wheal and flare reactions, and after oral challenge, sensitized but not control animals displayed signs of allergic hypersensitivity. Serum IgG-related, Ovm-specific antibodies were detected only in the sensitized pigs and IgE-mediated antibody response to Ovm was confirmed by positive passive cutaneous anaphylaxis reactions induced by sera of sensitized but not by heat-treated sera from Ovm-sensitized pigs or sera of unsensitized control pigs. The present results confirm induction of Ovm-specific allergy in pigs and provide opportunity to investigate allergic predisposition and immunopathogenesis of IgE-induced Ovm allergy using outbred neonatal swine. This may better simulate allergic disease in humans and allow investigation of candidate prophylactic and therapeutic approaches. Copyright 2007 S. Karger AG, Basel.
Test of the stress sensitization model in adolescents following the pipeline explosion.
Shao, Di; Gao, Qing-Ling; Li, Jie; Xue, Jiao-Mei; Guo, Wei; Long, Zhou-Ting; Cao, Feng-Lin
2015-10-01
The stress sensitization model states that early traumatic experiences increase vulnerability to the adverse effects of subsequent stressful life events. This study examined the effect of stress sensitization on development of posttraumatic stress disorder (PTSD) symptoms in Chinese adolescents who experienced the pipeline explosion. A total of 670 participants completed self-administered questionnaires on demographic characteristics and degree of explosion exposure, the Childhood Trauma Questionnaire (CTQ), and the Posttraumatic Stress Disorder Checklist-Civilian Version (PCL-C). Associations among the variables were explored using MANOVA, and main effects and interactions were analyzed. Overall MANOVA tests with the PCL-C indicated significant differences for gender (F=6.86, p=.000), emotional abuse (F=6.79, p=.000), and explosion exposure (F=22.40, p=.000). There were significant interactions between emotional abuse and explosion exposure (F=3.98, p=.008) and gender and explosion exposure (F=2.93, p=.033). Being female, childhood emotional abuse, and a high explosion exposure were associated with high PTSD symptom levels. Childhood emotional abuse moderated the effect of explosion exposure on PTSD symptoms. Thus, stress sensitization influenced the development of PTSD symptoms in Chinese adolescents who experienced the pipeline explosion as predicted by the model. Copyright © 2015 Elsevier Inc. All rights reserved.
Local influence for generalized linear models with missing covariates.
Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G
2009-12-01
In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.
Cytology versus HPV testing for cervical cancer screening in the general population.
Koliopoulos, George; Nyaga, Victoria N; Santesso, Nancy; Bryant, Andrew; Martin-Hirsch, Pierre Pl; Mustafa, Reem A; Schünemann, Holger; Paraskevaidis, Evangelos; Arbyn, Marc
2017-08-10
Cervical cancer screening has traditionally been based on cervical cytology. Given the aetiological relationship between human papillomavirus (HPV) infection and cervical carcinogenesis, HPV testing has been proposed as an alternative screening test. To determine the diagnostic accuracy of HPV testing for detecting histologically confirmed cervical intraepithelial neoplasias (CIN) of grade 2 or worse (CIN 2+), including adenocarcinoma in situ, in women participating in primary cervical cancer screening; and how it compares to the accuracy of cytological testing (liquid-based and conventional) at various thresholds. We performed a systematic literature search of articles in MEDLINE and Embase (1992 to November 2015) containing quantitative data and handsearched the reference lists of retrieved articles. We included comparative test accuracy studies if all women received both HPV testing and cervical cytology followed by verification of the disease status with the reference standard, if positive for at least one screening test. The studies had to include women participating in a cervical cancer screening programme who were not being followed up for previous cytological abnormalities. We completed a 2 x 2 table with the number of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) for each screening test (HPV test and cytology) used in each study. We calculated the absolute and relative sensitivities and the specificities of the tests for the detection of CIN 2+ and CIN 3+ at various thresholds and computed sensitivity (TP/(TP + FN)) and specificity (TN/(TN + FP)) for each test separately. Relative sensitivity and specificity of one test compared to another test were defined as the sensitivity of test-1 over the sensitivity of test-2 and the specificity of test-1 over the specificity of test-2, respectively. To assess bias in the studies, we used the Quality Assessment of Diagnostic test Accuracy Studies (QUADAS) tool. We used a bivariate random-effects model for computing pooled accuracy estimates. This model takes into account the within- and between-study variability and the intrinsic correlation between sensitivity and specificity. We included a total of 40 studies in the review, with more than 140,000 women aged between 20 and 70 years old. Many studies were at low risk of bias. There were a sufficient number of included studies with adequate methodology to perform the following test comparisons: hybrid capture 2 (HC2) (1 pg/mL threshold) versus conventional cytology (CC) (atypical squamous cells of undetermined significance (ASCUS)+ and low-grade squamous intraepithelial lesions (LSIL)+ thresholds) or liquid-based cytology (LBC) (ASCUS+ and LSIL+ thresholds), and other high-risk HPV tests versus conventional cytology (ASCUS+ and LSIL+ thresholds) or LBC (ASCUS+ and LSIL+ thresholds). For CIN 2+, pooled sensitivity estimates for HC2, CC and LBC (ASCUS+) were 89.9%, 62.5% and 72.9%, respectively, and pooled specificity estimates were 89.9%, 96.6%, and 90.3%, respectively. The results did not differ by age of women (less than or greater than 30 years old), or in studies with verification bias. Accuracy of HC2 was, however, greater in European countries compared to other countries. The results for the sensitivity of the tests were heterogeneous, ranging from 52% to 94% for LBC, and 61% to 100% for HC2.
Overall, the quality of the evidence for the sensitivity of the tests was moderate, and high for the specificity. The relative sensitivity of HC2 versus CC for CIN 2+ was 1.52 (95% CI: 1.24 to 1.86) and the relative specificity 0.94 (95% CI: 0.92 to 0.96), and versus LBC for CIN 2+ was 1.18 (95% CI: 1.10 to 1.26) and the relative specificity 0.96 (95% CI: 0.95 to 0.97). The relative sensitivity of HC2 versus CC for CIN 3+ was 1.46 (95% CI: 1.12 to 1.91) and the relative specificity 0.95 (95% CI: 0.93 to 0.97). The relative sensitivity of HC2 versus LBC for CIN 3+ was 1.17 (95% CI: 1.07 to 1.28) and the relative specificity 0.96 (95% CI: 0.95 to 0.97). Whilst HPV tests are less likely to miss cases of CIN 2+ and CIN 3+, these tests do lead to more unnecessary referrals. However, a negative HPV test is more reassuring than a negative cytological test, as the cytological test has a greater chance of being falsely negative, which could lead to delays in receiving the appropriate treatment. Evidence from prospective longitudinal studies is needed to establish the relative clinical implications of these tests.
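To make the review's accuracy definitions concrete, the following is a minimal sketch of the absolute and relative measures computed from a 2 x 2 table; the counts are illustrative placeholders, not data from the review.

```python
# Minimal sketch: absolute and relative accuracy measures from 2 x 2 tables.
# The counts below are hypothetical, for demonstration only.

def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical 2 x 2 counts for an HPV test and a cytology test
# against the same reference standard (CIN 2+).
hpv = {"tp": 90, "fp": 101, "tn": 899, "fn": 10}
cyt = {"tp": 63, "fp": 34, "tn": 966, "fn": 37}

se_hpv, sp_hpv = sensitivity(hpv["tp"], hpv["fn"]), specificity(hpv["tn"], hpv["fp"])
se_cyt, sp_cyt = sensitivity(cyt["tp"], cyt["fn"]), specificity(cyt["tn"], cyt["fp"])

# Relative measures as defined in the review: test-1 over test-2.
rel_sensitivity = se_hpv / se_cyt
rel_specificity = sp_hpv / sp_cyt
print(f"relative sensitivity {rel_sensitivity:.2f}, relative specificity {rel_specificity:.2f}")
```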
The CHASE laboratory search for chameleon dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab
2010-11-01
A scalar field is a favorite candidate for the particle responsible for dark energy. However, few theoretical means exist that can simultaneously explain the observed acceleration of the Universe and evade tests of gravity. The chameleon mechanism, whereby the properties of a particle depend upon the local environment, is one possible avenue. We present the results of the Chameleon Afterglow Search (CHASE) experiment, a laboratory probe for chameleon dark energy. CHASE marks a significant improvement over other searches for chameleons, both in terms of its sensitivity to the photon/chameleon coupling and its sensitivity to the classes of chameleon dark energy models and standard power-law models. Since chameleon dark energy is virtually indistinguishable from a cosmological constant, CHASE tests dark energy models in a manner not accessible to astronomical surveys.
In the U.S., registration of pesticide active ingredients requires a battery of intensive and costly in vivo toxicity tests which utilize large numbers of test animals. These tests use a limited array of model species from various aquatic and terrestrial taxa to represent all pla...
Yang, Shaoyu; Chen, Xueqin; Pan, Yuelong; Yu, Jiekai; Li, Xin; Ma, Shenglin
2016-11-01
The present study aimed to identify potential serum biomarkers for predicting the clinical outcomes of patients with advanced non-small cell lung cancer (NSCLC) treated with epidermal growth factor receptor tyrosine kinase inhibitors (EGFR‑TKIs). A total of 61 samples were collected and analyzed using the integrated approach of magnetic bead‑based weak cation exchange chromatography and matrix‑assisted laser desorption/ionization‑time of flight‑mass spectrometry. The Zhejiang University Protein Chip Data Analysis system was used to identify the protein spectra of patients resistant or sensitive to EGFR‑TKIs. Furthermore, a support vector machine was used to construct a predictive model with high accuracy. The model was trained using 46 samples and tested with the remaining 15 samples. In addition, the ExPASy Bioinformatics Resource Portal was used to search for potential candidate proteins for peaks in the predictive model. Seven mass/charge (m/z) peaks at 3,264, 9,156, 9,172, 3,964, 9,451, 4,295 and 3,983 Da were identified as significantly different peaks between the EGFR‑TKI-sensitive and -resistant groups. A predictive model was generated with three protein peaks at 3,264, 9,451 and 4,295 Da (m/z). This three‑peak model was capable of distinguishing EGFR‑TKI-resistant patients from sensitive patients with a specificity of 80% and a sensitivity of 80.77%. Furthermore, in a blind test, this model exhibited a high specificity (80%) and a high sensitivity (90%). Apelin, TYRO protein tyrosine kinase‑binding protein and big endothelin‑1 may be potential candidates for the proteins identified with an m/z of 3,264, 9,451 and 4,295 Da, respectively. The predictive model used in the present study may provide an improved understanding of the pathogenesis of NSCLC, and may provide insights for the development of TKI treatment plans tailored to specific patients.
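As a hedged illustration of the classification step only (not the authors' pipeline), the sketch below trains a support vector machine on intensities of three selected m/z peaks. The array shapes, random inputs, and train/test sizes are assumptions for demonstration; the bead-based fractionation and MALDI-TOF peak picking are not shown.

```python
# Sketch: SVM classifier over selected m/z peak intensities (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 46 training spectra x 3 peak intensities (e.g. m/z 3264, 9451, 4295)
X_train = rng.lognormal(size=(46, 3))
y_train = rng.integers(0, 2, size=46)   # 0 = TKI-sensitive, 1 = TKI-resistant
X_test = rng.lognormal(size=(15, 3))    # 15 held-out spectra for the blind test

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
predicted = model.predict(X_test)
```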
Correlation of SA349/2 helicopter flight-test data with a comprehensive rotorcraft model
NASA Technical Reports Server (NTRS)
Yamauchi, Gloria K.; Heffernan, Ruth M.; Gaubert, Michel
1986-01-01
A comprehensive rotorcraft analysis model was used to predict blade aerodynamic and structural loads for comparison with flight test data. The data were obtained from an SA349/2 helicopter with an advanced geometry rotor. Sensitivity of the correlation to wake geometry, blade dynamics, and blade aerodynamic effects was investigated. Blade chordwise pressure coefficients were predicted for the blade transonic regimes using the model coupled with two finite-difference codes.
Xing, Jian; Burkom, Howard; Tokars, Jerome
2011-12-01
Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from the US CDC BioSense surveillance system, aggregated by city (total of 206 hospitals, 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) were used and were stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day artificially added increases in syndrome counts. Four modifications of the C2 time series method, and five regression models (two linear and three Poisson), were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, we found that a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. Among the 14 syndrome-count categories, time series and regression methods produced approximately the same sensitivity (<5% difference) in six; in six others, the regression method had higher sensitivity (6-14% improvement), and in two, the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
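A minimal sketch of the kind of model described as best-performing follows, assuming a hypothetical dataframe of daily counts; statsmodels and scipy stand in for whatever software the authors used, and the 1% alert threshold mirrors the study's constant alert rate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import poisson

# Hypothetical daily counts for one city and one syndrome.
days = pd.date_range("2008-05-01", "2009-04-30", freq="D")
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "date": days,
    "syndrome_count": rng.poisson(30, len(days)),
    "total_visits": rng.poisson(800, len(days)),
})
df["dow"] = df["date"].dt.dayofweek.astype(str)          # day-of-week factor
df["period14"] = (np.arange(len(df)) // 14).astype(str)  # 14-day period factor

# Poisson regression controlling for log(total visits), day of week, and period.
fit = smf.glm("syndrome_count ~ np.log(total_visits) + C(dow) + C(period14)",
              data=df, family=sm.families.Poisson()).fit()

# Alert when the observed count is improbably high under the fitted model.
mu = fit.predict(df)
p_upper = poisson.sf(df["syndrome_count"] - 1, mu)   # P(X >= observed)
alerts = df.loc[p_upper < 0.01, "date"]              # ~1% alert rate
```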
Cost-effectiveness of the Carbon-13 Urea Breath Test for the Detection of Helicobacter Pylori
Masucci, L; Blackhouse, G; Goeree, R
2013-01-01
Objectives This analysis aimed to evaluate the cost-effectiveness of various testing strategies for Helicobacter pylori in patients with uninvestigated dyspepsia and to calculate the budgetary impact of these tests for the province of Ontario. Data Sources Data on the sensitivity and specificity were obtained from the clinical evidence-based analysis. Resource items were obtained from expert opinion, and costs were applied on the basis of published sources as well as expert opinion. Review Methods A decision analytic model was constructed to compare the costs and outcomes (false-positive results, false-negative results, and misdiagnoses avoided) of the carbon-13 (13C) urea breath test (UBT), enzyme-linked immunosorbent assay (ELISA) serology test, and a 2-step strategy of an ELISA serology test and a confirmatory 13C UBT based on the sensitivity and specificity of the tests and prevalence estimates. Results The 2-step strategy is more costly and more effective than the ELISA serology test and results in $210 per misdiagnosis case avoided. The 13C UBT is dominated by the 2-step strategy, i.e., it is more costly and less effective. The budget impact analysis indicates that it will cost $7.9 million more to test a volume of 129,307 patients with the 13C UBT than with ELISA serology, and $4.7 million more to test these patients with the 2-step strategy. Limitations The clinical studies that were pooled varied in the technique used to perform the breath test and in reference standards used to make comparisons with the breath test. However, these parameters were varied in a sensitivity analysis. The economic model was designed to consider intermediate outcomes only (i.e., misdiagnosed cases) and was not a complete model with final patient outcomes (e.g., quality-adjusted life years). Conclusions Results indicate that the 2-step strategy could be economically attractive for the testing of H. pylori. However, testing with the 2-step strategy will cost the Ministry of Health and Long-Term Care $4.7 million more than with the ELISA serology test. PMID:24228083
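The comparison logic of such a decision-analytic model can be sketched as a toy calculation: expected misdiagnoses per patient from sensitivity, specificity, and prevalence, and the incremental cost per misdiagnosis avoided. All inputs below are hypothetical placeholders, not the report's figures.

```python
# Toy decision-analytic comparison; all numbers are illustrative assumptions.

def misdiagnosis_rate(se, sp, prev):
    """False negatives + false positives per patient tested."""
    return prev * (1 - se) + (1 - prev) * (1 - sp)

prev = 0.30                       # assumed H. pylori prevalence
strategies = {
    # name: (sensitivity, specificity, cost per patient in $)
    "ELISA serology": (0.85, 0.79, 20.0),
    "13C UBT": (0.95, 0.93, 80.0),
    "2-step (ELISA then UBT)": (0.83, 0.98, 55.0),
}

base_name = "ELISA serology"
base_mis = misdiagnosis_rate(*strategies[base_name][:2], prev)
base_cost = strategies[base_name][2]
for name, (se, sp, cost) in strategies.items():
    if name == base_name:
        continue
    mis = misdiagnosis_rate(se, sp, prev)
    icer = (cost - base_cost) / (base_mis - mis)  # $ per misdiagnosis avoided
    print(f"{name}: ${icer:.0f} per misdiagnosis avoided vs {base_name}")
```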
An integrative formal model of motivation and decision making: The MGPM*.
Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew
2016-09-01
We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi
2013-11-01
Studies on the sensitivity and specificity of the Binax NOW Streptococcus pneumoniae urinary antigen test (index test) show considerable variance in results. We included studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test, using unconcentrated urine, to identify S. pneumoniae infection in adults with pneumonia. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities using a fixed-effects meta-analysis. One specificity was evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'); the other was evaluated using both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'). We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98), also without heterogeneity or publication bias. We attempted a meta-analysis of the 10 studies involving 1916 patients to estimate specificity (unknown and other), but the estimate remained unclear owing to moderate heterogeneity and possible publication bias. In our meta-analysis, sensitivity of the index test was moderate and specificity (other) was high; however, specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
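As one plausible reading of the fixed-effects pooling step, the sketch below combines study-level sensitivities by inverse-variance weighting on the logit scale; the study counts are invented, not the meta-analysis data.

```python
# Sketch: fixed-effect pooled sensitivity on the logit scale (synthetic counts).
import numpy as np

# (true positives, false negatives) per study -- hypothetical values
studies = [(30, 10), (45, 15), (22, 8), (60, 18)]

logits, weights = [], []
for tp, fn in studies:
    p = tp / (tp + fn)
    logits.append(np.log(p / (1 - p)))
    weights.append(tp * fn / (tp + fn))   # inverse of var(logit p) = 1/tp + 1/fn

pooled_logit = np.average(logits, weights=weights)
pooled_sensitivity = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sensitivity:.3f}")
```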
Development of the WRF-CO2 4D-Var assimilation system v1.0
NASA Astrophysics Data System (ADS)
Zheng, Tao; French, Nancy H. F.; Baxter, Martin
2018-05-01
Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
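A toy analogue of the first optimization scheme is sketched below: minimizing a Bayesian 4D-Var cost function over flux scaling factors with L-BFGS-B. The operator, error covariances, and pseudo-observations are small random stand-ins, not WRF-CO2 4D-Var components; in the real system the adjoint model supplies the transpose-operator product in the gradient.

```python
# Toy 4D-Var: J(s) = 0.5 (s - s_b)^T B^-1 (s - s_b) + 0.5 (Hs - y)^T R^-1 (Hs - y)
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_flux, n_obs = 10, 25
H = rng.random((n_obs, n_flux))          # linearized transport (influence) operator
s_true = 1.0 + 0.3 * rng.standard_normal(n_flux)
y = H @ s_true + 0.05 * rng.standard_normal(n_obs)   # pseudo-observations
s_b = np.ones(n_flux)                    # prior (background) scaling factors
B_inv = np.eye(n_flux) / 0.3**2          # inverse background error covariance
R_inv = np.eye(n_obs) / 0.05**2          # inverse observation error covariance

def cost_and_grad(s):
    ds, r = s - s_b, H @ s - y
    J = 0.5 * (ds @ B_inv @ ds + r @ R_inv @ r)
    grad = B_inv @ ds + H.T @ (R_inv @ r)   # H^T comes from the adjoint in practice
    return J, grad

res = minimize(cost_and_grad, s_b, jac=True, method="L-BFGS-B")
print(res.x)   # optimized flux scaling factors
```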
The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...
NASA Astrophysics Data System (ADS)
Flanagan, S.; Hurtt, G. C.; Fisk, J. P.; Rourke, O.
2012-12-01
A robust understanding of the sensitivity of the pattern, structure, and dynamics of ecosystems to climate, climate variability, and climate change is needed to predict ecosystem responses to current and projected climate change. We present results of a study designed to first quantify the sensitivity of ecosystems to climate through the use of climate and ecosystem data, and then use the results to test the sensitivity of a state-of-the-art ecosystem model to the climate data. A database of available ecosystem characteristics, such as mean canopy height, above-ground biomass, and basal area, was constructed from sources such as the National Biomass and Carbon Dataset (NBCD). The ecosystem characteristics were then paired by latitude and longitude with the corresponding climate characteristics (temperature, precipitation, photosynthetically active radiation (PAR), and dew point) retrieved from the North American Regional Reanalysis (NARR). The average yearly and seasonal means of the climate data, and their associated maximum and minimum values, over the 1979-2010 time frame provided by NARR were computed and paired with the ecosystem data. The compiled results provide natural patterns of vegetation structure and distribution with regard to climate. An advanced ecosystem model, the Ecosystem Demography model (ED), was then modified to allow yearly alterations to its mechanistic climate lookup table and used to predict the sensitivities of ecosystem pattern, structure, and dynamics to the climate data. The combined ecosystem structure and climate data were compared to ED's output to check the validity of the model. After verification, climate change scenarios such as those used in the most recent IPCC assessment were run, and future forest structure changes due to climate sensitivities were identified. The results of this study can be used both to quantify and to test key relationships for next-generation models. The sensitivity of ecosystem characteristics to climate shown in the database construction and by the model reinforces the need for high-resolution datasets and stresses the importance of understanding and incorporating climate change scenarios into earth system models.
Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach
NASA Astrophysics Data System (ADS)
Wang, Li; Lu, Zhong-Rong
2017-05-01
This paper aims to identify the parameters of the Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure: identifying the model parameters is treated as an optimization problem with a nonlinear least-squares objective function. The enhanced response sensitivity approach, which has been shown to be convergent and well suited to this kind of problem, is then adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
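A minimal sketch of this inverse procedure follows, under stated assumptions: a standard trust-region least-squares solver (scipy) stands in for the paper's enhanced response sensitivity approach, the hysteretic variable is integrated by explicit Euler, and the parameter values and noise level are invented.

```python
# Sketch: recover Bouc-Wen parameters (A, beta, gamma, n) from a noisy
# "measured" restoring force by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 2001)
x = np.sin(2 * np.pi * 0.5 * t)          # prescribed displacement history
dt = t[1] - t[0]
dx = np.gradient(x, dt)

def bouc_wen_force(params, alpha=0.5, k=1.0):
    A, beta, gamma, n = params
    z = np.zeros_like(x)
    for i in range(len(t) - 1):          # explicit Euler integration of z(t)
        dz = (A * dx[i] - beta * abs(dx[i]) * abs(z[i])**(n - 1) * z[i]
              - gamma * dx[i] * abs(z[i])**n)
        z[i + 1] = z[i] + dz * dt
    return alpha * k * x + (1 - alpha) * k * z   # total restoring force

true_params = np.array([1.0, 0.5, 0.3, 1.5])
measured = bouc_wen_force(true_params) \
    + 0.005 * np.random.default_rng(3).standard_normal(len(t))

fit = least_squares(lambda p: bouc_wen_force(p) - measured,
                    x0=[0.8, 0.3, 0.2, 1.2],
                    bounds=([0.1, 0, 0, 1], [2, 2, 2, 3]))
print(fit.x)   # should approach true_params
```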
Extreme sensitivity in Thermoacoustics
NASA Astrophysics Data System (ADS)
Juniper, Matthew
2017-11-01
In rocket engines and gas turbines, fluctuations in the heat release rate can lock into acoustic oscillations and grow catastrophically. Nine decades of engine development have shown that these oscillations are difficult to predict but can usually be eliminated with small ad hoc design changes. The difficulty in prediction arises because the oscillations' growth rate is exceedingly sensitive to parameters that cannot always be measured or simulated reliably, which introduces severe systematic error into thermoacoustic models of engines. Passive control strategies then have to be devised through full-scale engine tests, which can be ruinously expensive. For the Apollo F1 engine, for example, 2000 full-scale tests were required. Even today, thermoacoustic oscillations often reappear unexpectedly at the full engine test stage. Although the physics is well known, a novel approach to design is required. In this presentation, the parameters of a thermoacoustic model are inferred from many thousands of automated experiments using inverse uncertainty quantification. The adjoint of this model is used to obtain cheaply the gradients of every unstable mode with respect to the model parameters. This gradient information is then used in an optimization algorithm to stabilize every thermoacoustic mode by subtly changing the geometry of the model.
Ivey, Chris D; Besser, John M; Ingersoll, Chris G; Wang, Ning; Rogers, D Christopher; Raimondo, Sandy; Bauer, Candice R; Hammer, Edward J
2017-03-01
Vernal pool fairy shrimp, Branchinecta lynchi (Branchiopoda; Anostraca), and other fairy shrimp species have been listed as threatened or endangered under the US Endangered Species Act. Because few data exist about the sensitivity of Branchinecta spp. to toxic effects of contaminants, it is difficult to determine whether they are adequately protected by water quality criteria. A series of acute (24-h) lethality/immobilization tests was conducted with 3 species of fairy shrimp (B. lynchi, Branchinecta lindahli, and Thamnocephalus platyurus) and 10 chemicals with varying modes of toxic action: ammonia, potassium, chloride, sulfate, chromium(VI), copper, nickel, zinc, alachlor, and metolachlor. The same chemicals were tested in 48-h tests with other branchiopods (the cladocerans Daphnia magna and Ceriodaphnia dubia) and an amphipod (Hyalella azteca), and in 96-h tests with snails (Physa gyrina and Lymnaea stagnalis). Median effect concentrations (EC50s) for B. lynchi were strongly correlated (r² = 0.975) with EC50s for the commercially available fairy shrimp species T. platyurus for most chemicals tested. Comparison of EC50s for fairy shrimp with EC50s for invertebrate taxa tested concurrently and with other published toxicity data indicated that fairy shrimp were relatively sensitive to potassium and several trace metals compared with other invertebrate taxa, although cladocerans, amphipods, and mussels had similarly broad toxicant sensitivity. Interspecies correlation estimation models for predicting toxicity to fairy shrimp from surrogate species indicated that models with cladocerans and freshwater mussels as surrogates produced the best predictions of the sensitivity of fairy shrimp to contaminants. The results of these studies indicate that fairy shrimp are relatively sensitive to a range of toxicants, but Endangered Species Act-listed fairy shrimp of the genus Branchinecta were not consistently more sensitive than other fairy shrimp taxa. Environ Toxicol Chem 2017;36:797-806. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
A mercury flow meter for ion thruster testing. [response time, thermal sensitivity
NASA Technical Reports Server (NTRS)
Wilbur, P. J.
1973-01-01
The theory of operation of the thermal flow meter is presented, and a theoretical model is used to determine design parameters for a device capable of measuring mercury flows in the range of 0 to 5 g/hr. Flow meter construction is described. Tests performed using a positive displacement mercury pump, as well as those performed with the device in the feed line of an operating thruster, are discussed. A flow meter response time of about a minute and a sensitivity of about 10 mV/(g/hr) are demonstrated. Additional work to reduce the device's sensitivity to variations in ambient temperature is needed to improve its quantitative performance.
Beltrame, Anna; Guerriero, Massimo; Angheben, Andrea; Gobbi, Federico; Requena-Mendez, Ana; Zammarchi, Lorenzo; Formenti, Fabio; Perandin, Francesca; Bisoffi, Zeno
2017-01-01
Background Schistosomiasis is a neglected infection affecting millions of people, mostly living in sub-Saharan Africa. Morbidity and mortality due to chronic infection are relevant, although schistosomiasis is often clinically silent. Different diagnostic tests have been implemented in order to improve screening and diagnosis, which traditionally rely on parasitological tests with low sensitivity. The aim of this study was to evaluate the accuracy of different tests for the screening of schistosomiasis in African migrants in a non-endemic setting. Methodology/Principal findings A retrospective study was conducted on 373 patients screened at the Centre for Tropical Diseases (CTD) in Negrar, Verona, Italy. Biological samples were tested with: stool/urine microscopy, Circulating Cathodic Antigen (CCA) dipstick test, ELISA, Western blot, and an immune-chromatographic test (ICT). Test accuracy and predictive values of the immunological tests were assessed primarily on the basis of the results of microscopy (primary reference standard): ICT and Western blot (WB) were the tests with the highest sensitivity (94% and 92%, respectively), with a high NPV (98%). CCA showed the highest specificity (93%), but low sensitivity (48%). The analysis was also conducted using a composite reference standard, CRS (patients classified as infected in case of positive microscopy and/or at least 2 concordant positive immunological tests), and Latent Class Analysis (LCA). The latter two models demonstrated excellent agreement (Cohen's kappa: 0.92) in classifying the results. Both confirmed ICT as the test with the highest sensitivity (96%) and NPV (97%); moreover, PPV was reasonably good (78% and 72% according to CRS and LCA, respectively). ELISA was the most specific immunological test (over 99%). The ICT appears to be a suitable screening test, even when used alone. Conclusions The rapid test ICT was the most sensitive test, with the potential of being used as a single screening test for African migrants. PMID:28582412
Enhanced Sensitivity of Wireless Chemical Sensor Based on Love Wave Mode
NASA Astrophysics Data System (ADS)
Wang, Wen; Oh, Haekwan; Lee, Keekeun; Yang, Sangsik
2008-09-01
A 440 MHz wireless and passive Love-wave-based chemical sensor was developed for CO2 detection. The developed device was composed of a reflective delay line patterned on a 41° YX LiNbO3 piezoelectric substrate, a poly(methyl methacrylate) (PMMA) waveguide layer, and a Teflon AF 2400 sensitive film. A theoretical model is presented to describe wave propagation in Love wave devices with large piezoelectricity and to allow the design of an optimized structure. In wireless device testing using a network analyzer, infusion of CO2 into the testing chamber induced large phase shifts of the reflection peaks owing to the interaction between the sensing film and the test gas (CO2). Good linearity and repeatability were observed at CO2 concentrations of 0-350 ppm. The sensitivity obtained from the Love wave device was approximately 7.07° ppm⁻¹. The gas response properties of the fabricated Love-wave sensor, in terms of linearity and sensitivity, were presented, and a comparison with surface acoustic wave devices was also discussed.
Andersen, Flemming; Andersen, Kirsten H; Bernois, Armand; Brault, Christophe; Bruze, Magnus; Eudes, Hervé; Gadras, Catherine; Signoret, Anne-Cécile J; Mose, Kristian F; Müller, Boris P; Toulemonde, Bernard; Andersen, Klaus Ejner
2015-02-01
Oak moss absolute, an extract from the lichen Evernia prunastri, is a valued perfume ingredient but contains extreme allergens. To compare the elicitation properties of two preparations of oak moss absolute: 'classic oak moss', the historically used preparation, and 'new oak moss', with reduced contents of the major allergens atranol and chloroatranol. The two preparations were compared in randomized double-blinded repeated open application tests and serial dilution patch tests in 30 oak moss-sensitive volunteers and 30 non-allergic control subjects. In both test models, new oak moss elicited significantly less allergic contact dermatitis in oak moss-sensitive subjects than classic oak moss. The control subjects did not react to either of the preparations. New oak moss is still a fragrance allergen, but elicits less allergic contact dermatitis in previously oak moss-sensitized individuals, suggesting that new oak moss is less allergenic to non-sensitized individuals. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Petculescu, Andi G.; Lueptow, Richard M.
2005-01-01
In a previous paper [Y. Dain and R. M. Lueptow, J. Acoust. Soc. Am. 109, 1955 (2001)], a model of acoustic attenuation due to vibration-translation and vibration-vibration relaxation in multiple polyatomic gas mixtures was developed. In this paper, the model is improved by treating binary molecular collisions via fully pairwise vibrational transition probabilities. The sensitivity of the model to small variations in the Lennard-Jones parameters, collision diameter (σ) and potential depth (ε), is investigated for nitrogen-water-methane mixtures. For a N2(98.97%)-H2O(338 ppm)-CH4(1%) test mixture, the transition probabilities and acoustic absorption curves are much more sensitive to σ than they are to ε. Additionally, when the 1% methane is replaced by nitrogen, the resulting mixture [N2(99.97%)-H2O(338 ppm)] becomes considerably more sensitive to changes of σ for water. The current model minimizes the underprediction of the acoustic absorption peak magnitudes reported by S. G. Ejakov et al. [J. Acoust. Soc. Am. 113, 1871 (2003)].
NASA Astrophysics Data System (ADS)
Zou, Guang'an; Wang, Qiang; Mu, Mu
2016-09-01
Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
Exploratory rearing: a context- and stress-sensitive behavior recorded in the open-field test.
Sturman, Oliver; Germain, Pierre-Luc; Bohacek, Johannes
2018-02-16
Stressful experiences are linked to anxiety disorders in humans. Similar effects are observed in rodent models, where anxiety is often measured in classic conflict tests such as the open-field test. Spontaneous rearing behavior, in which rodents stand on their hind legs to explore, can also be observed in this test yet is often ignored. We define two forms of rearing, supported rearing (in which the animal rears against the walls of the arena) and unsupported rearing (in which the animal rears without contacting the walls of the arena). Using an automated open-field test, we show that both rearing behaviors appear to be strongly context dependent and show clear sex differences, with females rearing less than males. We show that unsupported rearing is sensitive to acute stress, and is reduced under more averse testing conditions. Repeated testing and handling procedures lead to changes in several parameters over varying test sessions, yet unsupported rearing appears to be rather stable within a given animal. Rearing behaviors could therefore provide an additional measure of anxiety in rodents relevant for behavioral studies, as they appear to be highly sensitive to context and may be used in repeated testing designs.
Roelandt, S; Van der Stede, Y; Czaplicki, G; Van Loo, H; Van Driessche, E; Dewulf, J; Hooyberghs, J; Faes, C
2015-06-06
Currently, there are no perfect reference tests for the in vivo detection of Neospora caninum infection. Two commercial N. caninum ELISA tests are currently used in Belgium for bovine sera (TEST A and TEST B). The goal of this study is to evaluate these tests at their current cut-offs, with a no-gold-standard approach, for the test purposes of (1) demonstration of freedom from infection at purchase and (2) diagnosis in aborting cattle. Sera from two study populations, an Abortion population (n=196) and a Purchase population (n=514), were selected and tested with both ELISAs. Test results were entered into a Bayesian model with informative priors on population prevalences only (Scenario 1). As a sensitivity analysis, two more models were used: one with informative priors on test diagnostic accuracy (Scenario 2) and one with all priors uninformative (Scenario 3). The accuracy parameters were estimated from the first model: diagnostic sensitivity (Test A: 93.54 per cent; Test B: 86.99 per cent) and specificity (Test A: 90.22 per cent; Test B: 90.15 per cent) were high and comparable (Bayesian P values >0.05). Based on predictive values in the two study populations, both tests were fit for purpose, despite an expected false negative fraction of ±0.5 per cent in the Purchase population and ±5 per cent in the Abortion population. In addition, a false positive fraction of ±3 per cent in the overall Purchase population and ±4 per cent in the overall Abortion population was found. British Veterinary Association.
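The fitness-for-purpose fractions quoted above follow directly from sensitivity, specificity, and prevalence; the sketch below reproduces that arithmetic using Test A's point estimates with illustrative prevalences (the study's population prevalences are not restated here, so the printed numbers are only indicative).

```python
# Predictive values and false-classification fractions from Se, Sp, prevalence.

def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    false_neg_fraction = (1 - se) * prev         # infected but test-negative
    false_pos_fraction = (1 - sp) * (1 - prev)   # uninfected but test-positive
    return ppv, npv, false_neg_fraction, false_pos_fraction

se_a, sp_a = 0.9354, 0.9022   # Test A point estimates from the model
for population, prev in [("Purchase", 0.08), ("Abortion", 0.40)]:  # assumed prevalences
    ppv, npv, fn, fp = predictive_values(se_a, sp_a, prev)
    print(f"{population}: PPV={ppv:.2f} NPV={npv:.2f} FN={fn:.1%} FP={fp:.1%}")
```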
Mamtani, Manju; Jawahirani, Anil; Das, Kishor; Rughwani, Vinky; Kulkarni, Hemant
2006-08-01
It is being increasingly recognized that a majority of the countries in the thalassemia belt need a cost-effective screening program as the first step towards control of thalassemia. Although the naked eye single tube red cell osmotic fragility test (NESTROFT) has been considered to be a very effective screening tool for beta-thalassemia trait, assessment of its diagnostic performance has been affected by reference-test and verification bias. Here, we set out to provide estimates of the sensitivity and specificity of NESTROFT corrected for these potential biases. We conducted a cross-sectional diagnostic test evaluation study using data from 1563 subjects from a Central Indian population with a high prevalence of beta-thalassemia. We used latent class modelling, after ensuring its validity, to account for the reference-test bias, and global sensitivity analysis to control the verification bias. We also compared the results of latent class modelling with those of five discriminant indexes. We observed that, across a range of cut-offs for the mean corpuscular volume (MCV) and the hemoglobin A2 (HbA2) concentration, the average sensitivity and specificity of NESTROFT obtained from latent class modelling were 99.8% and 83.7%, respectively. These estimates were comparable to those characterizing the diagnostic performance of HbA2, which is considered by many as the reference test to detect beta-thalassemia. After correction for the verification bias these estimates were 93.4 and 97.2%, respectively. Combined with the inexpensive and quick disposition of NESTROFT, these results strongly support its candidature as a screening tool, especially in resource-poor, high-prevalence settings.
Eggers, Ruben; Tuinenbreijer, Lizz; Kouwenhoven, Dorette; Verhaagen, Joost; Mason, Matthew R. J.
2016-01-01
The dorsal column lesion model of spinal cord injury targets sensory fibres which originate from the dorsal root ganglia and ascend in the dorsal funiculus. It has the advantages that fibres can be specifically traced from the sciatic nerve, verifiably complete lesions can be performed of the labelled fibres, and it can be used to study sprouting in the central nervous system from the conditioning lesion effect. However, functional deficits from this type of lesion are mild, making assessment of experimental treatment-induced functional recovery difficult. Here, five functional tests were compared for their sensitivity to functional deficits, and hence their suitability to reliably measure recovery of function after dorsal column injury. We assessed the tape removal test, the rope crossing test, CatWalk gait analysis, and the horizontal ladder, and introduce a new test, the inclined rolling ladder. Animals with dorsal column injuries at C4 or T7 level were compared to sham-operated animals for a duration of eight weeks. As well as comparing groups at individual timepoints we also compared the longitudinal data over the whole time course with linear mixed models (LMMs), and for tests where steps are scored as success/error, using generalized LMMs for binomial data. Although, generally, function recovered to sham levels within 2–6 weeks, in most tests we were able to detect significant deficits with whole time-course comparisons. On the horizontal ladder deficits were detected until 5–6 weeks. With the new inclined rolling ladder functional deficits were somewhat more consistent over the testing period and appeared to last for 6–7 weeks. Of the CatWalk parameters base of support was sensitive to cervical and thoracic lesions while hind-paw print-width was affected by cervical lesion only. The inclined rolling ladder test in combination with the horizontal ladder and the CatWalk may prove useful to monitor functional recovery after experimental treatment in this lesion model. PMID:26934672
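A sketch of the longitudinal comparison described above follows, assuming a long-format dataframe with hypothetical column names; statsmodels' MixedLM fits a random intercept per animal (success/error step outcomes would instead call for a binomial generalized LMM, not shown).

```python
# Sketch: linear mixed model for recovery curves (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
animals, weeks = 16, 8
df = pd.DataFrame({
    "animal": np.repeat(np.arange(animals), weeks),
    "week": np.tile(np.arange(1, weeks + 1), animals),
    "group": np.repeat(["sham", "lesion"], animals // 2 * weeks),
})
# Lesion deficit that recovers over the weeks, plus noise.
df["score"] = (10 - 2 * (df["group"] == "lesion") * np.exp(-0.5 * df["week"])
               + rng.normal(0, 0.5, len(df)))

# Random intercept per animal; fixed effects for week, group, and interaction.
lmm = sm.MixedLM.from_formula("score ~ week * group", groups="animal", data=df)
result = lmm.fit()
print(result.summary())
```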
Mainguy, Catherine; Bellon, Gabriel; Delaup, Véronique; Ginoux, Tiphanie; Kassai-Koupai, Behrouz; Mazur, Stéphane; Rabilloud, Muriel; Remontet, Laurent; Reix, Philippe
2017-01-01
Cystic fibrosis-related diabetes (CFRD) is a late cystic fibrosis (CF)-associated comorbidity whose prevalence increases sharply with age. Guidelines for glucose metabolism (GM) monitoring rely on the oral glucose tolerance test (OGTT). However, this test is neither sensitive nor specific. The aim of this study was to compare the sensitivity and specificity of different methods for GM monitoring in children and adolescents with CF. The continuous glucose monitoring system (CGMS), used as the reference method, was compared with the OGTT, intravenous glucose tolerance test (IGTT), homeostasis model assessment index of insulin resistance (HOMA-IR), homeostasis model assessment index of β-cell function (HOMA-%B) and glycated haemoglobin A1C. Patients were classified into three groups according to CGMS: normal glucose tolerance (NGT), impaired glucose tolerance (IGT) and diabetes mellitus (DM). Twenty-nine patients (median age: 13.1 years) were recruited. According to CGMS, 11 had DM, 12 IGT and six NGT, whereas OGTT identified three patients with DM and five with IGT. While 13 of 27 had insulin deficiency according to IGTT, 19 of 28 did according to HOMA-%B. According to HOMA-IR, 12 of 28 had insulin resistance. HOMA-%B was the most sensitive method for CFRD screening [sensitivity 91% (95% CI), specificity 47% (95% CI) and negative predictive value 89% (95% CI)]. OGTT showed weak capacity to diagnose DM in CF and should no longer be considered the reference method for CFRD screening in patients with CF. In our study, HOMA-%B showed promising metrics for CFRD screening. Finally, CGMS revealed that pathological glucose excursions were frequent even early in life.
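For reference, the HOMA indices mentioned above are commonly computed with the standard single-point formulas below (glucose in mmol/L, insulin in mU/L); whether this study used these exact constants is an assumption.

```python
# Standard single-point HOMA formulas (assumed; not taken from this study).

def homa_ir(glucose_mmol_l: float, insulin_mu_l: float) -> float:
    """Homeostasis model assessment of insulin resistance."""
    return glucose_mmol_l * insulin_mu_l / 22.5

def homa_b(glucose_mmol_l: float, insulin_mu_l: float) -> float:
    """Homeostasis model assessment of beta-cell function (%B)."""
    return 20.0 * insulin_mu_l / (glucose_mmol_l - 3.5)

print(homa_ir(5.0, 10.0))  # ~2.2
print(homa_b(5.0, 10.0))   # ~133 (%)
```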
NASA Astrophysics Data System (ADS)
Borge, Rafael; Alexandrov, Vassil; José del Vas, Juan; Lumbreras, Julio; Rodríguez, Encarnacion
Meteorological inputs play a vital role in regional air quality modelling. An extensive sensitivity analysis of the Weather Research and Forecasting (WRF) model was performed, in the framework of the Integrated Assessment Modelling System for the Iberian Peninsula (SIMCA) project. Up to 23 alternative model configurations, including Planetary Boundary Layer schemes, Microphysics, Land-surface models, Radiation schemes, Sea Surface Temperature and Four-Dimensional Data Assimilation, were tested in a 3 km spatial resolution domain. Model results for the most significant meteorological variables were assessed through a series of common statistics. The physics options identified to produce better results (Yonsei University Planetary Boundary Layer, WRF Single-Moment 6-class microphysics, Noah Land-surface model, Eta Geophysical Fluid Dynamics Laboratory longwave radiation and MM5 shortwave radiation schemes), along with other relevant user settings (time-varying Sea Surface Temperature and combined grid-observational nudging), were included in a "best case" configuration. This setup was tested and found to produce more accurate estimation of temperature, wind and humidity fields at surface level than any other configuration for the two episodes simulated. Planetary Boundary Layer height predictions showed a reasonable agreement with estimations derived from routine atmospheric soundings. Although some seasonal and geographical differences were observed, the model showed an acceptable behaviour overall. Despite being useful to define the most appropriate setup of the WRF model for air quality modelling over the Iberian Peninsula, this study provides a general overview of WRF sensitivity and can constitute a reference for future mesoscale meteorological modelling exercises.
Khaing, Zin Z; Geissler, Sydney A; Schallert, Timothy; Schmidt, Christine E
2013-09-16
Cervical spinal cord injury (cSCI) can cause devastating neurological deficits, including impairment or loss of upper limb and hand function. A majority of the spinal cord injuries in humans occur at the cervical levels. Therefore, developing cervical injury models and developing relevant and sensitive behavioral tests is of great importance. Here we describe the use of a newly developed forelimb step-alternation test after cervical spinal cord injury in rats. In addition, we describe two behavioral tests that have not been used after spinal cord injury: a postural instability test (PIT), and a pasta-handling test. All three behavioral tests are highly sensitive to injury and are easy to use. Therefore, we feel that these behavioral tests can be instrumental in investigating therapeutic strategies after cSCI.
Boscaini, Camile; Pellanda, Lucia Campos
2015-01-01
Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, a General Linear Model, and a General Linear Model for the Gamma Distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, respectively). Large for gestational age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small for gestational age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Being large for gestational age was associated with altered high sensitivity C-reactive protein and conferred an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.
Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition
NASA Technical Reports Server (NTRS)
Ewing, Anthony; Adams, Charles
2004-01-01
Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.
Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes
2011-01-01
Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, masticatory muscle ratios and muscle subgroups present, and numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variation that arises during the model-building process. Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights on form and function. PMID:21559475
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnett, Jonathan L.; Miley, Harry S.; Milbrath, Brian D.
In 2014 the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) undertook the Integrated Field Exercise (IFE) in Jordan. The exercise consisted of a simulated 0.5-2 kT underground explosion triggering an On-site Inspection (OSI) to search for evidence of a Treaty violation. This research evaluates two of the OSI techniques, laboratory-based gamma-spectrometry of soil samples and in situ gamma-spectrometry, for 17 particulate radionuclides indicative of nuclear weapon tests. The detection sensitivity is evaluated using real IFE and model data. It indicates that higher-sensitivity laboratory measurements are the optimum technique during the IFE and OSI timeframes.
ERIC Educational Resources Information Center
Gudino, Omar G.; Nadeem, Erum; Kataoka, Sheryl H.; Lau, Anna S.
2012-01-01
Urban Latino youth are exposed to high rates of violence, which increases risk for diverse forms of psychopathology. The current study aims to increase specificity in predicting responses by testing the hypothesis that youths' reinforcement sensitivity--behavioral inhibition (BIS) and behavioral approach (BAS)--is associated with specific clinical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, Mark C.; Sham, Sam; Wang, Yanli
This report summarizes the experiments performed in FY17 on Gr. 91 steels. The testing of Gr. 91 has technical significance because, currently, it is the only approved material for Class A construction that is strongly cyclic softening. Specific FY17 testing includes the following activities for Gr. 91 steel. First, two types of key feature testing have been initiated, including two-bar thermal ratcheting and Simplified Model Testing (SMT). The goal is to qualify the Elastic-Perfectly Plastic (EPP) design methodologies and to support incorporation of these rules for Gr. 91 into the ASME Division 5 Code. The preliminary SMT test results show that Gr. 91 is most damaged when tested in compression hold mode under the SMT creep-fatigue testing condition. Two-bar thermal ratcheting test results over a temperature range of 350 to 650°C were compared with the EPP strain limits code case evaluation, and the results show that the EPP strain limits code case is conservative. The material information obtained from these key feature tests can also be used to verify its material model. Second, to provide experimental data in support of the viscoplastic material model development at Argonne National Laboratory, selective tests were performed to evaluate the effect of cyclic softening on strain rate sensitivity and creep rates. The results show that prior cyclic loading history decreases the strain rate sensitivity and increases creep rates. In addition, isothermal cyclic stress-strain curves were generated at six different temperatures, and nonisothermal thermomechanical testing was also performed to provide data to calibrate the viscoplastic material model.
Fragment-based prediction of skin sensitization using recursive partitioning
NASA Astrophysics Data System (ADS)
Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2011-09-01
Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator descriptor, fragment. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct a recursive partitioning tree (RP tree) for classification. The balanced accuracies for the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree that identifies the most important properties associated with skin sensitization. Together, they can provide guidance for the design of drugs with a lower sensitization level.
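A hedged sketch of the classification step follows: a recursive partitioning (decision) tree over a binary fragment-indicator plus physicochemical descriptors, with scikit-learn standing in for the authors' implementation. Feature names, values, and labels are synthetic, and the GASTON fragment mining is not shown.

```python
# Sketch: recursive partitioning tree on fragment + physicochemical descriptors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 357
X = np.column_stack([
    rng.integers(0, 2, n),    # fragment indicator (any of the 8 fragments present)
    rng.normal(2.5, 1.5, n),  # hypothetical physicochemical descriptor, e.g. logP
    rng.normal(250, 80, n),   # hypothetical descriptor, e.g. molecular weight
])
y = (X[:, 0] == 1) | (rng.random(n) < 0.2)   # toy sensitizer/non-sensitizer labels

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
tree.fit(X, y)
```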
Bart, Sylvain; Amossé, Joël; Lowe, Christopher N; Mougin, Christian; Péry, Alexandre R R; Pelosi, Céline
2018-06-21
Ecotoxicological tests with earthworms are widely used and are mandatory for the risk assessment of pesticides prior to registration and commercial use. The current model species for standardized tests is Eisenia fetida or Eisenia andrei. However, these species are absent from agricultural soils and often less sensitive to pesticides than other earthworm species found in mineral soils. To move towards a better assessment of pesticide effects on non-target organisms, there is a need to perform a posteriori tests using relevant species. The endogeic species Aporrectodea caliginosa (Savigny, 1826) is representative of cultivated fields in temperate regions and is suggested as a relevant model test species. After providing information on its taxonomy, biology, and ecology, we reviewed current knowledge concerning its sensitivity towards pesticides. Moreover, we highlighted research gaps and promising perspectives. Finally, advice and recommendations are given for the establishment of laboratory cultures and experiments using this soil-dwelling earthworm species.
Measuring preschool cognitive growth while it's still happening: the Learning Express.
McDermott, Paul A; Fantuzzo, John W; Waterman, Clare; Angelo, Lauren E; Warley, Heather P; Gadsden, Vivian L; Zhang, Xiuyuan
2009-10-01
Educators need accurate assessments of preschool cognitive growth to guide curriculum design, evaluation, and timely modification of their instructional programs. But available tests do not provide content breadth or growth sensitivity over brief intervals. This article details evidence for a multiform, multiscale test criterion-referenced to national standards for alphabet knowledge, vocabulary, listening comprehension, and mathematics, developed in field trials with 3433 3- to 5½-year-old Head Start children. The test enables repeated assessments (20-30 min per time point) over a school year. Each subscale is calibrated to yield scaled scores based on item response theory and Bayesian estimation of ability. Multilevel modeling shows that nearly all score variation is associated with child performance rather than examiner performance, and individual growth-curve modeling demonstrates the high sensitivity of scores to child growth, controlled for age, sex, prior schooling, and language and special needs status.
Constitutive modeling of the dynamic-tensile-extrusion test of PTFE
NASA Astrophysics Data System (ADS)
Resnyansky, A. D.; Brown, E. N.; Trujillo, C. P.; Gray, G. T.
2017-01-01
Use of polymers in defense, aerospace, and industrial applications under extreme loading conditions makes prediction of the behavior of these materials very important. Crucial to this is knowledge of the physical damage response in association with phase transformations during loading, and the ability to predict this via multi-phase simulation accounting for thermodynamic non-equilibrium and strain rate sensitivity. The current work analyzes Dynamic-Tensile-Extrusion (Dyn-Ten-Ext) experiments on polytetrafluoroethylene (PTFE). In particular, the phase transition during loading and subsequent tension is analyzed using a two-phase rate-sensitive material model implemented in the CTH hydrocode. The calculations are compared with experimental high-speed photography. Deformation patterns and their link with changing loading modes are analyzed numerically and correlated to the test observations. It is concluded that the phase transformation is not as critical to the response of PTFE under Dyn-Ten-Ext loading as it is during Taylor rod impact testing.
Ekong, Pius S; Sanderson, Michael W; Bello, Nora M; Noll, Lance W; Cernicchiaro, Natalia; Renter, David G; Bai, Jianfa; Nagaraja, T G
2017-12-01
Cattle are a reservoir for Escherichia coli O157 and they shed the pathogen in their feces. Fecal contaminants on the hides can be transferred onto carcasses during processing at slaughter plants, thereby serving as a source of foodborne infection in humans. The detection of E. coli O157 in cattle feces is based on culture, immunological, and molecular methods. We evaluated the diagnostic sensitivity and specificity of one culture- and two PCR-based tests for the detection of E. coli O157 in cattle feces, and its true prevalence, using a Bayesian implementation of latent class models. A total of 576 fecal samples were collected from the floor of pens of finishing feedlot cattle in the central United States during summer 2013. Samples were enriched and subjected to detection of E. coli O157 by culture (immunomagnetic separation, plating on a selective medium, latex agglutination, and indole testing), conventional PCR (cPCR), and multiplex quantitative PCR (mqPCR). The statistical models assumed conditional dependence of the PCR tests and high specificity for culture (mode=99%; 5th percentile=97%). Prior estimates of test parameters were elicited from three experts. Estimated posterior sensitivities (posterior median and 95% highest posterior density intervals) of culture, cPCR, and mqPCR were 49.1% (44.8-53.4%), 59.7% (55.3-63.9%), and 97.3% (95.1-99.0%), respectively. Estimated posterior specificities of culture, cPCR, and mqPCR were 98.7% (96.8-99.8%), 94.1% (87.4-99.1%), and 94.8% (84.1-99.9%), respectively. True prevalence was estimated at 91.3% (88.1-94.2%). There was evidence of a weak conditional dependence between cPCR and mqPCR amongst test-positive samples, but no evidence of conditional dependence amongst test-negative samples. Sensitivity analyses showed that overall our posterior inference was rather robust to the choice of priors, except for inference on the specificity of mqPCR, which was estimated with considerable uncertainty. Our study evaluates the performance of three diagnostic tests for detection of E. coli O157 in feces of feedlot cattle, which is important for quantifying true fecal prevalence and adjusting for test error in risk modeling. Copyright © 2017 Elsevier B.V. All rights reserved.
Zeeb, Fiona D; Li, Zhaoxia; Fisher, Daniel C; Zack, Martin H; Fletcher, Paul J
2017-11-01
An animal model of gambling disorder, previously known as pathological gambling, could advance our understanding of the disorder and help with treatment development. We hypothesized that repeated exposure to uncertainty during gambling induces behavioural and dopamine (DA) sensitization - similar to chronic exposure to drugs of abuse. Uncertainty exposure (UE) may also increase risky decision-making in an animal model of gambling disorder. Male Sprague Dawley rats received 56 UE sessions, during which animals responded for saccharin according to an unpredictable, variable ratio schedule of reinforcement (VR group). Control animals responded on a predictable, fixed ratio schedule (FR group). Rats yoked to receive unpredictable reward were also included (Y group). Animals were then tested on the Rat Gambling Task (rGT), an analogue of the Iowa Gambling Task, to measure decision-making. Compared with the FR group, the VR and Y groups experienced a greater locomotor response following administration of amphetamine. On the rGT, the FR and Y groups preferred the advantageous options over the risky, disadvantageous options throughout testing (40 sessions). However, rats in the VR group did not have a significant preference for the advantageous options during sessions 20-40. Amphetamine had a small, but significant, effect on decision-making only in the VR group. After rGT testing, only the VR group showed greater hyperactivity following administration of amphetamine compared with the FR group. Reward uncertainty was the only gambling feature modelled. Actively responding for uncertain reward likely sensitized the DA system and impaired the ability to make optimal decisions, modelling some aspects of gambling disorder.
Canopy reflectance modeling in a tropical wooded grassland
NASA Technical Reports Server (NTRS)
Simonett, David; Franklin, Janet
1986-01-01
Geometric/optical canopy reflectance modeling and spatial/spectral pattern recognition are used to study the form and structure of savanna in West Africa. An invertible plant canopy reflectance model is tested for its ability to estimate the amount of woody vegetation from remotely sensed data in areas of sparsely wooded grassland. Dry woodlands and wooded grasslands, commonly referred to as savannas, are important ecologically and economically in Africa, and cover approximately forty percent of the continent by some estimates. The Sahel and Sudan savannas make up the important and sensitive transition zone between the tropical forests and the arid Sahara region. The depletion of woody cover, used for fodder and fuel in these regions, has become a very severe problem for the people living there. LANDSAT Thematic Mapper (TM) data are used to stratify woodland and wooded grassland into areas of relatively homogeneous canopy cover, and then an invertible forest canopy reflectance model is applied to estimate directly the height and spacing of the trees in the stands. Because height and spacing are proportional to biomass in some cases, a successful application of the segmentation/modeling techniques will allow direct estimation of tree biomass, as well as cover density, over significant areas of these valuable and sensitive ecosystems. The canopy model will be tested at sites in two different bioclimatic zones in Mali, West Africa; Sudanian zone crop/woodland test sites were located in the Region of Segou, Mali.
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A; Peters, Anne
2013-01-07
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds.
Diffenbaugh, N.S.; Sloan, L.C.; Snyder, M.A.; Bell, J.L.; Kaplan, J.; Shafer, S.L.; Bartlein, P.J.
2003-01-01
Anthropogenic increases in atmospheric carbon dioxide (CO2) concentrations may affect vegetation distribution both directly through changes in photosynthesis and water-use efficiency, and indirectly through CO2-induced climate change. Using an equilibrium vegetation model (BIOME4) driven by a regional climate model (RegCM2.5), we tested the sensitivity of vegetation in the western United States, a topographically complex region, to the direct, indirect, and combined effects of doubled preindustrial atmospheric CO2 concentrations. Those sensitivities were quantified using the kappa statistic. Simulated vegetation in the western United States was sensitive to changes in atmospheric CO2 concentrations, with woody biome types replacing less woody types throughout the domain. The simulated vegetation was also sensitive to climatic effects, particularly at high elevations, due to both warming throughout the domain and decreased precipitation in key mountain regions such as the Sierra Nevada of California and the Cascade and Blue Mountains of Oregon. Significantly, when the direct effects of CO2 on vegetation were tested in combination with the indirect effects of CO2-induced climate change, new vegetation patterns were created that were not seen in either of the individual cases. This result indicates that climatic and nonclimatic effects must be considered in tandem when assessing the potential impacts of elevated CO2 levels.
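Because the kappa statistic carries the quantitative comparison of vegetation maps here, a compact implementation is worth sketching. The two biome maps below are made-up stand-ins for BIOME4 output grids, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two categorical maps flattened to 1-D arrays."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    po = np.mean(a == b)                                        # observed agreement
    cats = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (po - pe) / (1.0 - pe)

control = np.array([1, 1, 2, 2, 3, 3, 3, 1])   # hypothetical biome codes
doubled = np.array([1, 2, 2, 2, 3, 3, 1, 1])   # e.g., a doubled-CO2 simulation
print(cohens_kappa(control, doubled))          # ~0.63
```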
Tarr, Gillian A M; Eickhoff, Jens C; Koepke, Ruth; Hopfensperger, Daniel J; Davis, Jeffrey P; Conway, James H
2013-07-15
Pertussis remains difficult to control. Imperfect sensitivity of diagnostic tests and lack of specific guidance regarding interpretation of negative test results among patients with compatible symptoms may contribute to its spread. In this study, we examined whether additional pertussis cases could be identified if persons with negative pertussis test results were routinely investigated. We conducted interviews among 250 subjects aged ≤18 years with pertussis polymerase chain reaction (PCR) results reported from 2 reference laboratories in Wisconsin during July-September 2010 to determine whether their illnesses met the Centers for Disease Control and Prevention's clinical case definition (CCD) for pertussis. PCR validity measures were calculated using the CCD as the standard for pertussis disease. Two Bayesian latent class models were used to adjust the validity measures for pertussis detectable by 1) culture alone and 2) culture and/or more sensitive measures such as serology. Among 190 PCR-negative subjects, 54 (28%) had illnesses meeting the CCD. In adjusted analyses, PCR sensitivity and negative predictive value were 94% and 99%, respectively, in the first model, and 43% and 87% in the second. The models suggested that public health follow-up of reported pertussis patients with PCR-negative results leads to the detection of more true pertussis cases than follow-up of PCR-positive persons alone. The results also suggest a need for a more specific pertussis CCD.
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
Thabet, Ahmed; Zhang, Runhui; Alnassan, Alaa-Aldin; Daugschies, Arwid; Bangoura, Berit
2017-01-15
Availability of an accurate in vitro assay is a crucial demand to determine sensitivity of Eimeria spp. field strains toward anticoccidials routinely. In this study we tested in vitro models of Eimeria tenella using various polyether ionophores (monensin, salinomycin, maduramicin, and lasalocid) and toltrazuril. Minimum inhibitory concentrations (MIC95, MIC50/95) for the tested anticoccidials were defined based on a susceptible reference (Houghton strain), Ref-1. In vitro sporozoite invasion inhibition assay (SIA) and reproduction inhibition assay (RIA) were applied on sensitive laboratory (Ref-1 and Ref-2) and field (FS-1, FS-2, and FS-3) strains to calculate percent of inhibition under exposure of these strains to the various anticoccidials (%ISIA and %IRIA, respectively). The in vitro data were related to oocyst excretion, lesion scores, performance, and global resistance indices (GI) assessed in experimentally infected chickens. Polyether ionophores applied in the RIA were highly effective at MIC95 against Ref-1 and Ref-2 (%IRIA ≥95%). In contrast, all tested field strains displayed reduced to low efficacy (%IRIA <95%). %IRIA values significantly correlated with oocyst excretion determined in the animal model (p<0.01) for polyether ionophores. However, this relationship could not be demonstrated for toltrazuril due to unexpected lack of in vitro sensitivity in Ref-2 (%IRIA = 56.1%). In infected chickens, toltrazuril was generally effective (GI >89%) against all strains used in this study. However, adjusted GI (GIadj) for toltrazuril-treated groups exhibited differences between reference and field strains which might indicate varying sensitivity. RIA is a suitable in vitro tool to detect sensitivity of E. tenella towards polyether ionophores, and may thus help to reduce, replace, or refine use of animal experimentation for in vivo sensitivity assays. Copyright © 2016 Elsevier B.V. All rights reserved.
Process-based modelling of NH3 exchange with grazed grasslands
NASA Astrophysics Data System (ADS)
Móring, Andrea; Vieno, Massimo; Doherty, Ruth M.; Milford, Celia; Nemitz, Eiko; Twigg, Marsailidh M.; Horváth, László; Sutton, Mark A.
2017-09-01
In this study the GAG model, a process-based ammonia (NH3) emission model for urine patches, was extended and applied at the field scale. The new model (GAG_field) was tested over two modelling periods, for which micrometeorological NH3 flux data were available. Acknowledging uncertainties in the measurements, the model was able to simulate the main features of the observed fluxes. The temporal evolution of the simulated NH3 exchange flux was found to be dominated by NH3 emission from the urine patches, offset by simultaneous NH3 deposition to areas of the field not affected by urine. The simulations show how NH3 fluxes over a grazed field in a given day can be affected by urine patches deposited several days earlier, linked to the interaction of volatilization processes with soil pH dynamics. Sensitivity analysis showed that GAG_field was more sensitive to soil buffering capacity (β), field capacity (θfc) and permanent wilting point (θpwp) than the patch-scale model. The reason for these different sensitivities is twofold. Firstly, the difference originates from the different scales. Secondly, the difference can be explained by the different initial soil pH and physical properties, which determine the maximum volume of urine that can be stored in the NH3 source layer. It was found that in the case of urine patches with a higher initial soil pH and higher initial soil water content, the sensitivity of NH3 exchange to β was stronger. Also, in the case of a higher initial soil water content, NH3 exchange was more sensitive to changes in θfc and θpwp. The sensitivity analysis showed that the nitrogen content of urine (cN) is associated with high uncertainty in the simulated fluxes. However, model experiments based on cN values randomized from an estimated statistical distribution indicated that this uncertainty is considerably smaller in practice. Finally, GAG_field was tested with a constant soil pH of 7.5. The variation of NH3 fluxes simulated in this way showed a good agreement with those from the simulations with the original approach, accounting for a dynamically changing soil pH. These results suggest a way for model simplification when GAG_field is applied later at regional scale.
Revised treatment of N2O5 hydrolysis in CMAQ
In this presentation, the revised treatment of homogeneous and heterogeneous hydrolysis of dinitrogen pentoxide (N2O5) in the Community Multiscale Air Quality model version 4.6 is described. A series of model sensitivity tests are conducted and compared with observations of total atmosphe...
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Experimental Searches for Exotic Short-Range Forces Using Mechanical Oscillators
NASA Astrophysics Data System (ADS)
Weisman, Evan
Experimental searches for forces beyond gravity and electromagnetism at short range have attracted a great deal of attention over the last decade. In this thesis I describe the test mass development for two new experiments searching for forces below 1 mm. Both modify a previous experiment that used 1 kHz mechanical oscillators as test masses with a stiff conducting shield between them to suppress backgrounds, a promising technique for probing exceptionally small distances at the limit of instrumental thermal noise. To further reduce thermal noise, one experiment will use plated silicon test masses at cryogenic temperatures. The other experiment, which searches for spin-dependent interactions, will apply the spin-polarizable material Dy3Fe5O12 to the test mass surfaces. This material exhibits orbital compensation of the magnetism associated with its intrinsic electron spin, minimizing magnetic backgrounds. Several plated silicon test mass prototypes were fabricated using photolithography (useful in both experiments), and spin-dependent materials were synthesized with a simple chemical recipe. Both silicon and spin-dependent test masses demonstrate the mechanical and magnetic properties necessary for sensitive experiments. I also describe sensitivity calculations of another proposed spin-dependent experiment, based on a modified search for the electron electric dipole moment, which show unprecedented sensitivity to exotic monopole-dipole forces. Inspired by a finite element model, a study attempting to maximize detector quality factor versus geometry is also presented, with experimental results so far not explained by the model.
Criteria for establishing water quality standards that are protective of all native biota are generally based upon laboratory toxicity tests. These tests utilize common model organisms that have established test methods. However, only a small portion of species have established ...
VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry
Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.
2015-01-01
Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
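A volumetric index of the kind described can be approximated by interpolating the measured sensitivities onto a grid and integrating. The sketch below assumes an invented dome-shaped hill of vision sampled at an arbitrary 76-point pattern; it illustrates only the interpolate-then-integrate idea, not VFMA's actual surface-fitting algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical static perimetry data: (x, y) test locations in degrees and
# sensitivity in dB; the HOV "volume" is the integral of sensitivity over
# the tested field, approximated here on a regular grid.
rng = np.random.default_rng(0)
xy = rng.uniform(-30, 30, size=(76, 2))          # test-point pattern (hypothetical)
sens = np.clip(30.0 - 0.02 * (xy ** 2).sum(axis=1), 0.0, None)  # dome-shaped HOV

gx, gy = np.mgrid[-30:30:121j, -30:30:121j]
hov = griddata(xy, sens, (gx, gy), method="linear")

cell = 0.5 ** 2                                  # grid cell area, deg^2
volume = np.nansum(hov) * cell                   # dB * deg^2
print(f"HOV volume ~ {volume:.0f} dB*deg^2")
```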
Impact of meteorology on air quality modeling over the Po valley in northern Italy
NASA Astrophysics Data System (ADS)
Pernigotti, D.; Georgieva, E.; Thunis, P.; Bessagnet, B.
2012-05-01
A series of sensitivity tests has been performed using both a mesoscale meteorological model (MM5) and a chemical transport model (CHIMERE) to better understand the reasons why all models underestimate particulate matter concentrations in the Po valley in winter. Different options are explored to nudge meteorological observations from regulatory networks into MM5 in order to improve model performance, especially during the low wind speed regimes frequently present in this area. The sensitivity of the CHIMERE-modeled particulate matter concentrations to these different meteorological inputs is then evaluated for the January 2005 time period. A further analysis of the CHIMERE model results revealed the need to improve the parametrization of the in-cloud scavenging and vertical diffusivity schemes; such modifications are relevant especially when the model is applied under mist, fog and low stratus conditions, which frequently occur in the Po valley during winter. The sensitivity of modeled particulate matter concentrations to turbulence parameters, wind, temperature and cloud liquid water content in one of the most polluted and complex areas in Europe is finally discussed.
Torsional Vibration in the National Wind Technology Center’s 2.5-Megawatt Dynamometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Keller, Jonathan; Wallen, Robb
2016-08-31
This report documents the torsional drivetrain dynamics of the NWTC's 2.5-megawatt dynamometer as identified experimentally and as calculated using lumped parameter models with known inertia and stiffness parameters. The report is presented in two parts, beginning with the identification of the primary torsional modes followed by the investigation of approaches to damp the torsional vibrations. The key mechanical parameters for the lumped parameter models and justification for the element grouping used in the derivation of the torsional modes are presented. The sensitivities of the torsional modes to different test article properties are discussed. The oscillations observed from the low-speed and generator torque measurements were used to identify the extent of damping inherently achieved through active and passive compensation techniques. A simplified Simulink model of the dynamometer test article integrating the electro-mechanical power conversion and control features was established to emulate the torque behavior that was observed during testing. The torque responses in the high-speed, low-speed, and generator shafts were tested and validated against experimental measurements involving step changes in load with the dynamometer operating under speed-regulation mode. The Simulink model serves as a ready reference to identify the torque sensitivities to various system parameters and to explore opportunities to improve torsional damping under different conditions.
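The torsional modes of such a lumped parameter model drop out of a small eigenvalue problem. The two-inertia sketch below uses invented inertia and stiffness values, not the dynamometer's actual parameters, to show how the mode frequencies are extracted.

```python
import numpy as np

# Undamped two-inertia torsional model: J1 (motor side) -- shaft k -- J2 (test article).
# Mode frequencies come from the generalized eigenproblem K x = w^2 M x.
J1, J2 = 500.0, 2000.0        # kg*m^2, hypothetical inertias
k = 1.0e7                     # N*m/rad, hypothetical shaft stiffness

M = np.diag([J1, J2])
K = np.array([[k, -k],
              [-k, k]])

w2 = np.linalg.eigvals(np.linalg.solve(M, K))            # squared angular frequencies
freqs_hz = np.sqrt(np.clip(np.sort(w2.real), 0.0, None)) / (2.0 * np.pi)
print(freqs_hz)               # [0, ~25.2]: rigid-body mode plus one torsional mode
```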
Numerical simulation study on thermal response of PBX 9501 to low velocity impact
NASA Astrophysics Data System (ADS)
Lou, Jianfeng; Zhou, Tingting; Zhang, Yangeng; Zhang, Xiaoli
2017-01-01
Impact sensitivity of solid high explosives, an important index in evaluating the safety and performance of explosives, is a major concern in handling, storage, and shipping procedures. Low velocity impacts, such as those occurring in traffic accidents or charge drops, pose a serious threat to both bare and cased charges. The Steven test is an effective tool for studying the relative sensitivity of various explosives. In this paper, we built a numerical simulation of the Steven test that captures the mechanical, thermal, and chemical behavior through a coupled thermo-mechanical material model. In the model, the stress-strain relationship is described by a dynamic plasticity model, the impact-induced heating of the explosive is captured by an isotropic thermal material model, the chemical reaction of the explosive is described by an Arrhenius reaction rate law, and the effects of heating and melting on the mechanical and thermal properties of the materials are also taken into account. For the standard Steven test, the thermal and mechanical responses of PBX 9501 at various impact velocities were numerically analyzed, and the threshold velocity for explosive initiation was obtained, in good agreement with experimental results. In addition, the effect of the confinement condition of the test device on the threshold velocity was explored.
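The Arrhenius reaction rate law at the core of the chemical sub-model is simple to state in code. The pre-exponential factor and activation energy below are invented placeholders, not the PBX 9501 parameters used in the paper; the point is the steep growth of the rate with impact-induced temperature.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius_rate(T, A, Ea):
    """Arrhenius reaction rate law: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical single-step parameters, not the paper's PBX 9501 values.
for T in (300.0, 500.0, 700.0):
    print(f"T = {T:5.0f} K  ->  k = {arrhenius_rate(T, A=5.0e19, Ea=2.2e5):.3e} 1/s")
```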
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before incorporating fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Jesse D.; Chang, Grace; Magalen, Jason
A modified version of an industry standard wave modeling tool was evaluated, optimized, and utilized to investigate model sensitivity to input parameters and wave energy converter (WEC) array deployment scenarios. Wave propagation was investigated downstream of the WECs to evaluate overall near- and far-field effects of WEC arrays. The sensitivity study illustrated that model results were most sensitive to wave direction and WEC device type among the parameters examined in this study. Generally, changes in wave height were the primary alteration caused by the presence of a WEC array. Specifically, WEC device type, and subsequently device size, directly resulted in wave height variations; however, it is important to utilize ongoing laboratory studies and future field tests to determine the most appropriate power matrix values for a particular WEC device and configuration in order to improve modeling results.
Optimization of a fiber optic flexible disk microphone
NASA Astrophysics Data System (ADS)
Zhang, Gang; Yu, Benli; Wang, Hui; Liu, Fei; Peng, Jun; Wu, Xuqiang
2011-11-01
An optimized design of a fiber optic flexible disk microphone is presented and verified experimentally. The phase sensitivity of the optical fiber microphone (both the ideal model with a simply supported disk (SSD) and the model with a clamped disk (CLD)) is analyzed by utilizing the theory of plates and shells. The results show that the microphones have an optimum length of the sensing arm once the inner radius of the fiber coils and the radius and Poisson's ratio of the flexible disk have been determined. Under a typical condition depicted in this paper, an optimum phase sensitivity for the SSD model of 27.72 rad/Pa (-91.14 dB re 1 rad/μPa) and an optimum phase sensitivity for the CLD model of 3.18 rad/Pa (-109.95 dB re 1 rad/μPa) can be achieved in theory. Several sample microphones were fabricated and tested. The experimental results are basically consistent with the theoretical analysis.
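The two quoted sensitivities are consistent under the standard conversion from rad/Pa to dB re 1 rad/μPa, which is easy to verify:

```python
import math

def phase_sensitivity_db(s_rad_per_pa):
    """Convert a phase sensitivity in rad/Pa to dB re 1 rad/uPa."""
    return 20.0 * math.log10(s_rad_per_pa * 1e-6)  # 1 Pa = 1e6 uPa

print(phase_sensitivity_db(27.72))  # ~ -91.1 dB, SSD model
print(phase_sensitivity_db(3.18))   # ~ -110.0 dB, CLD model
```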
Goeree, Ron; Blackhouse, Gord; Bowen, James M; O'Reilly, Daria; Sutherland, Simone; Hopkins, Robert; Chow, Benjamin; Freeman, Michael; Provost, Yves; Dennie, Carole; Cohen, Eric; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan; Paul, Narinder; Parker, John D
2013-10-01
Conventional coronary angiography (CCA) is the standard diagnostic for coronary artery disease (CAD), but multi-detector computed tomography coronary angiography (CTCA) is a non-invasive alternative. A multi-center coverage with evidence development study was undertaken and combined with an economic model to estimate the cost-effectiveness of CTCA followed by CCA vs CCA alone. Alternative assumptions were tested in patient scenario and sensitivity analyses. CCA was found to dominate CTCA; however, CTCA was relatively more cost-effective in females, in advancing age, in patients with lower pre-test probabilities of CAD, the higher the sensitivity of CTCA, and the lower the probability of undergoing a confirmatory CCA following a positive CTCA. Results were very sensitive to alternative patient populations and modeling assumptions. Careful consideration of patient characteristics, procedures to improve the diagnostic yield of CTCA, and selective use of CCA following CTCA will impact whether CTCA is cost-effective or dominates CCA.
Sensitivities of Greenland ice sheet volume inferred from an ice sheet adjoint model
NASA Astrophysics Data System (ADS)
Heimbach, P.; Bugnion, V.
2009-04-01
We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. Since its introduction by MacAyeal (1992), the adjoint method has become widespread to fit ice stream models to the increasing number and diversity of satellite observations, and to estimate uncertain model parameters such as basal conditions. However, no attempt has been made to extend this method to comprehensive ice sheet models. As a first step toward the use of adjoints of comprehensive three-dimensional ice sheet models we have generated an adjoint of the ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated by means of the automatic differentiation (AD) tool TAF. The AD tool generates exact source code representing the tangent linear and adjoint model of the nonlinear parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic with respect to the controls, and can be efficiently calculated via the adjoint. By way of example, we determine the sensitivity of the total Greenland ice volume to various control variables, such as spatial fields of basal flow parameters, surface and basal forcings, and initial conditions. Reliability of the adjoint was tested through finite-difference perturbation calculations for various control variables and perturbation regions. Besides confirming qualitative aspects of ice sheet sensitivities, such as expected regional variations, we detect regions where model sensitivities are seemingly unexpected or counter-intuitive, albeit 'real' in the sense of actual model behavior. An example is inferred regions where sensitivities of ice sheet volume to basal sliding coefficient are positive, i.e. where a local increase in basal sliding parameter increases the ice sheet volume. Similarly, positive ice temperature sensitivities in certain parts of the ice sheet are found (in most regions it is negative, i.e. an increase in temperature decreases ice sheet volume), which would likely have gone undetected if only conventional perturbation experiments had been used. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. Available adjoint code generation tools now open up a variety of novel model applications, notably with regard to sensitivity and uncertainty analyses and ice sheet state estimation or data assimilation.
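The finite-difference checks mentioned above compare the adjoint gradient against explicit perturbations of the forward model. A schematic version, with a toy function standing in for a full ice sheet run, looks like this:

```python
import numpy as np

def volume(params):
    """Toy stand-in for the scalar diagnostic (total ice volume); in practice
    each evaluation is a full forward run of the ice sheet model."""
    return params[0] ** 2 + 3.0 * params[1]

def fd_gradient(f, p, eps=1e-6):
    """Central-difference gradient, to be compared against the adjoint output."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (f(p + dp) - f(p - dp)) / (2.0 * eps)
    return g

p0 = np.array([2.0, 1.0])       # e.g., basal sliding coefficients in two regions
print(fd_gradient(volume, p0))  # ~[4, 3]; a sign mismatch here would flag a bug
```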
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, identifying the subset of the numerous physical parameters in atmospheric and oceanic models that are relatively more sensitive and important, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
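In spirit, CNOP-P searches for the parameter perturbation within a prescribed bound that maximally changes a simulation metric. The sketch below substitutes a toy nonlinear function for the vegetation model and a simple box constraint for the perturbation bound, so it is only a schematic of the optimization step, not the LPJ setup.

```python
import numpy as np
from scipy.optimize import minimize

def cost(p_pert, p_ref):
    """Change in a toy 'simulation' metric caused by a parameter perturbation."""
    sim = lambda p: np.sin(p[0]) + p[0] * p[1] ** 2   # stand-in for a model run
    return abs(sim(p_ref + p_pert) - sim(p_ref))

p_ref = np.array([0.5, 1.0])                          # reference parameter values
delta = 0.2                                           # perturbation bound

res = minimize(lambda q: -cost(q, p_ref), x0=np.array([0.1, -0.1]),
               bounds=[(-delta, delta)] * 2)          # maximize by minimizing -cost
print(res.x, cost(res.x, p_ref))                      # most disruptive perturbation
```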
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotte, F.P.; Doughty, C.; Birkholzer, J.
2010-11-01
The ability to reliably predict flow and transport in fractured porous rock is an essential condition for performance evaluation of geologic (underground) nuclear waste repositories. In this report, a suite of programs (TRIPOLY code) for calculating and analyzing flow and transport in two-dimensional fracture-matrix systems is used to model single-well injection-withdrawal (SWIW) tracer tests. The SWIW test, a tracer test using one well, is proposed as a useful means of collecting data for site characterization, as well as estimating parameters relevant to tracer diffusion and sorption. After some specific code adaptations, we numerically generated a complex fracture-matrix system for computation of steady-state flow and tracer advection and dispersion in the fracture network, along with solute exchange processes between the fractures and the porous matrix. We then conducted simulations for a hypothetical but workable SWIW test design and completed parameter sensitivity studies on three physical parameters of the rock matrix - namely porosity, diffusion coefficient, and retardation coefficient - in order to investigate their impact on the fracture-matrix solute exchange process. Hydraulic fracturing, or hydrofracking, is also modeled in this study, in two different ways: (1) by increasing the hydraulic aperture for flow in existing fractures and (2) by adding a new set of fractures to the field. The results of all these different tests are analyzed by studying the population of matrix blocks, the tracer spatial distribution, and the breakthrough curves (BTCs) obtained, while performing mass-balance checks and being careful to avoid some numerical mistakes that could occur. This study clearly demonstrates the importance of matrix effects in the solute transport process, with the sensitivity studies illustrating the increased importance of the matrix in providing a retardation mechanism for radionuclides as matrix porosity, diffusion coefficient, or retardation coefficient increase. Interestingly, model results before and after hydrofracking are insensitive to adding more fractures, while slightly more sensitive to aperture increase, making SWIW tests a possible means of discriminating between these two potential hydrofracking effects. Finally, we investigate the possibility of inferring relevant information regarding the fracture-matrix system physical parameters from the BTCs obtained during SWIW testing.
Proposal of a short-form version of the Brazilian Food Insecurity Scale
dos Santos, Leonardo Pozza; Lindemann, Ivana Loraine; Motta, Janaína Vieira dos Santos; Mintem, Gicele; Bender, Eliana; Gigante, Denise Petrucci
2014-01-01
OBJECTIVE To propose a short version of the Brazilian Food Insecurity Scale. METHODS Two samples were used to test the results obtained in the analyses in two distinct scenarios. One of the studies was composed of 230 low income families from Pelotas, RS, Southern Brazil, and the other was composed of 15,575 women, whose data were obtained from the 2006 National Survey on Demography and Health. Two models were tested, the first containing seven questions, and the second, the five questions that were considered the most relevant ones in the concordance analysis. The models were compared to the Brazilian Food Insecurity Scale, and the sensitivity, specificity and accuracy parameters were calculated, as well as the kappa agreement test. RESULTS Comparing the prevalence of food insecurity between the Brazilian Food Insecurity Scale and the two models, the differences were around 2 percentage points. In the sensitivity analysis, the short version of seven questions obtained 97.8% and 99.5% in the Pelotas sample and in the National Survey on Demography and Health sample, respectively, while specificity was 100% in both studies. The five-question model showed similar results (sensitivity of 95.7% and 99.5% in the Pelotas sample and in the National Survey on Demography and Health sample, respectively). In the Pelotas sample, the kappa test of the seven-question version totaled 97.0% and that of the five-question version, 95.0%. In the National Survey on Demography and Health sample, the two models presented a 99.0% kappa. CONCLUSIONS We suggest that the model with five questions should be used as the short version of the Brazilian Food Insecurity Scale, as its results were similar to the original scale with a lower number of questions. This version needs to be administered to other populations in Brazil in order to allow for the adequate assessment of the validity parameters. PMID:25372169
Application of the pressure sensitive paint technique to steady and unsteady flow
NASA Technical Reports Server (NTRS)
Shimbo, Y.; Mehta, R.; Cantwell, B.
1996-01-01
Pressure sensitive paint is a newly developed optical measurement technique with which one can obtain a continuous pressure distribution in much less time and at lower cost than with conventional pressure tap measurements. However, most current pressure sensitive paint applications are restricted to steady pressure measurement at high speeds because of the small signal-to-noise ratio at low speed and a slow response to pressure changes. In the present study, three phases of work have been completed to extend the application of the pressure sensitive paint technique to low-speed testing and to investigate the applicability of the paint technique to unsteady flow. First, the measurement system using a commercially available PtOEP/GP-197 pressure sensitive paint was established and applied to impinging jet measurements. An in-situ calibration using only five pressure tap data points was applied, and the results showed good repeatability and good agreement with conventional pressure tap measurements over the whole painted area. The overall measurement accuracy in these experiments was found to be within 0.1 psi. The pressure sensitive paint technique was then applied to low-speed wind tunnel tests using a 60 deg delta wing model with leading edge blowing slots. The technical problems encountered in low-speed testing were resolved by using a high grade CCD camera and applying corrections to improve the measurement accuracy. Even at 35 m/s, the paint data not only agreed well with conventional pressure tap measurements but also clearly showed the suction region generated by the leading edge vortices. The vortex breakdown was also detected at alpha=30 deg. It was found that a pressure difference of 0.2 psi was required for a quantitative pressure measurement in this experiment and that temperature control or a parallel temperature measurement is necessary if thermal uniformity does not hold on the model. Finally, the pressure sensitive paint was applied to a periodically changing pressure field with a 12.8 s time period. A simple first-order pole model was applied to deal with the phase lag of the paint. The unsteady pressure estimated from the time-changing pressure sensitive paint data agreed well with the pressure transducer data in regions of higher pressure and showed the possibility of extending the technique to unsteady pressure measurements. However, the model still needs further refinement based on the physics of the oxygen diffusion into the paint layer and the oxygen quenching of the paint luminescence.
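The first-order pole model treats the paint as a low-pass filter with time constant tau, so the true pressure can be recovered from the lagged reading as P ≈ Pm + tau*dPm/dt. The sketch below uses an invented time constant with the 12.8 s forcing period from the experiment.

```python
import numpy as np

tau = 1.8                                          # paint time constant, s (hypothetical)
t = np.linspace(0.0, 25.6, 513)
p_true = 1.0 + 0.2 * np.sin(2 * np.pi * t / 12.8)  # 12.8 s period, as in the test

# Simulate the lagged paint reading: tau * dPm/dt + Pm = P.
pm = np.empty_like(p_true)
pm[0] = p_true[0]
dt = t[1] - t[0]
for i in range(1, len(t)):
    pm[i] = pm[i - 1] + dt * (p_true[i - 1] - pm[i - 1]) / tau

# Invert the first-order model to estimate the true pressure.
p_est = pm + tau * np.gradient(pm, t)
print(np.max(np.abs(p_est - p_true)))              # residual of the corrected signal
```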
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
Using the ADAP Learning Algorithm to Forecast the Onset of Diabetes Mellitus
Smith, Jack W.; Everhart, J.E.; Dickson, W.C.; Knowler, W.C.; Johannes, R.S.
1988-01-01
Neural networks or connectionist models for parallel processing are not new. However, a resurgence of interest in the past half decade has occurred. In part, this is related to a better understanding of what are now referred to as hidden nodes. These algorithms are considered to be of marked value in pattern recognition problems. Because of that, we tested the ability of an early neural network model, ADAP, to forecast the onset of diabetes mellitus in a high risk population of Pima Indians. The algorithm's performance was analyzed using standard measures for clinical tests: sensitivity, specificity, and a receiver operating characteristic curve. The crossover point for sensitivity and specificity is 0.76. We are currently further examining these methods by comparing the ADAP results with those obtained from logistic regression and linear perceptron models using precisely the same training and forecasting sets. A description of the algorithm is included.
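The crossover point quoted above is where the sensitivity and specificity curves meet as the decision threshold sweeps across the forecast scores. A small routine to locate it on any scored data set (synthetic here, not the Pima Indian data) might look like:

```python
import numpy as np

def crossover(scores, labels):
    """Threshold where |sensitivity - specificity| is smallest."""
    best = None
    for t in np.unique(scores):
        pred = scores >= t
        se = np.mean(pred[labels == 1])     # sensitivity at this threshold
        sp = np.mean(~pred[labels == 0])    # specificity at this threshold
        if best is None or abs(se - sp) < best[0]:
            best = (abs(se - sp), t, se, sp)
    return best[1:]

rng = np.random.default_rng(2)              # synthetic scores for illustration
labels = rng.integers(0, 2, 400)
scores = rng.normal(loc=labels.astype(float), scale=1.2)
print(crossover(scores, labels))            # (threshold, sensitivity, specificity)
```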
Quantification of photoacoustic microscopy images for ovarian cancer detection
NASA Astrophysics Data System (ADS)
Wang, Tianheng; Yang, Yi; Alqasemi, Umar; Kumavor, Patrick D.; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing
2014-03-01
In this paper, human ovarian tissues with malignant and benign features were imaged ex vivo using an optical-resolution photoacoustic microscopy (OR-PAM) system. Several features were quantitatively extracted from the PAM images to describe photoacoustic signal distributions and fluctuations. 106 PAM images from 18 human ovaries were classified by applying those extracted features to a logistic prediction model. 57 images from 9 ovaries were used as a training set to train the logistic model, and 49 images from another 9 ovaries were used to test the prediction model. An ovary was classified as malignant if at least one of its images was classified as malignant. For the training set, we achieved 100% sensitivity and 83.3% specificity; for the testing set, we achieved 100% sensitivity and 66.7% specificity. These preliminary results demonstrate that PAM could be extremely valuable in assisting and guiding surgeons for in vivo evaluation of ovarian tissue.
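The two-level pipeline (an image-level logistic classifier plus the any-image-malignant rule at the ovary level) can be sketched as follows; the features and labels are synthetic placeholders for the extracted PAM statistics, and the split sizes mirror the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(57, 4))        # 57 training images, 4 features (synthetic)
y_train = rng.integers(0, 2, 57)          # image-level labels (synthetic)
X_test = rng.normal(size=(49, 4))         # 49 test images
test_ovary_ids = rng.integers(0, 9, 49)   # which of 9 ovaries each image came from

clf = LogisticRegression().fit(X_train, y_train)
image_pred = clf.predict(X_test)

# Ovary-level rule: malignant if any of its images is classified malignant.
ovary_pred = {o: int(image_pred[test_ovary_ids == o].any())
              for o in np.unique(test_ovary_ids)}
print(ovary_pred)
```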
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
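The statistically optimal benchmark being tested is inverse-variance weighting: each cue's weight is proportional to its reliability, the reciprocal of its variance, which yields the minimum-variance unbiased combination. A minimal sketch with made-up slant estimates:

```python
import numpy as np

def mvue_combine(estimates, variances):
    """Inverse-variance (minimum-variance unbiased) cue combination."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return w, float(np.dot(w, estimates))

# Hypothetical slant estimates (deg) from a texture cue and a haptic cue.
weights, slant = mvue_combine([32.0, 26.0], variances=[4.0, 9.0])
print(weights, slant)   # texture gets ~0.69 of the weight; combined slant ~30.2 deg
```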
NASA Astrophysics Data System (ADS)
Torries, Brian; Shamsaei, Nima
2017-12-01
The effects of different cooling rates, as achieved by varying the interlayer time interval, on the fatigue behavior of additively manufactured Ti-6Al-4V specimens were investigated and modeled via a microstructure-sensitive fatigue model. Comparisons are made between two sets of specimens fabricated via Laser Engineered Net Shaping (LENS™), with variance in interlayer time interval accomplished by depositing either one or two specimens per print operation. Fully reversed, strain-controlled fatigue tests were conducted, with fractography following specimen failure. A microstructure-sensitive fatigue model was calibrated to model the fatigue behavior of both sets of specimens and was found to be capable of correctly predicting the longer fatigue lives of the single-built specimens and the reduced scatter of the double-built specimens; all data points fell within the predicted upper and lower bounds of fatigue life. The interlayer time interval effects, and the ability to model them, are important to consider when producing test specimens that are smaller than the production part (i.e., when establishing property-performance relationships).
Cost-effectiveness of point-of-care testing for dehydration in the pediatric ED.
Whitney, Rachel E; Santucci, Karen; Hsiao, Allen; Chen, Lei
2016-08-01
Acute gastroenteritis (AGE) and subsequent dehydration account for a large proportion of pediatric emergency department (PED) visits. Point-of-care (POC) testing has been used in conjunction with clinical assessment to determine the degree of dehydration. Despite the wide acceptance of POC testing, little formal cost-effectiveness analysis of POC testing in the PED exists. We aim to examine the cost-effectiveness of using POC electrolyte testing vs traditional serum chemistry testing in the PED for children with AGE. This was a cost-effectiveness analysis using data from a randomized control trial of children with AGE. A decision analysis model was constructed to calculate cost savings from the points of view of the payer and the provider. We used parameters obtained from the trial, including cost of testing, admission rates, cost of admission, and length of stay. Sensitivity analyses were performed to evaluate the stability of our model. Using the data set of 225 subjects, POC testing results in a cost savings of $303.30 per patient compared with traditional serum testing from the point of view of the payer. From the point of view of the provider, POC testing results in consistent mean savings of $36.32 ($8.29-$64.35) per patient. Sensitivity analyses demonstrated the stability of the model and consistent savings. This decision analysis provides evidence that POC testing in children with gastroenteritis-related moderate dehydration results in significant cost savings from the points of view of payers and providers compared to traditional serum chemistry testing. Copyright © 2016 Elsevier Inc. All rights reserved.
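The decision-analysis arithmetic reduces to comparing expected costs per patient under each testing strategy. The probabilities and costs below are invented placeholders, not the trial's parameters; they only show the shape of the calculation.

```python
def expected_cost(test_cost, p_admit, admit_cost):
    """Expected cost per patient: testing plus probability-weighted admission."""
    return test_cost + p_admit * admit_cost

# Hypothetical inputs: POC testing is cheaper and, say, admits fewer patients.
poc = expected_cost(test_cost=15.0, p_admit=0.20, admit_cost=2500.0)
lab = expected_cost(test_cost=25.0, p_admit=0.32, admit_cost=2500.0)
print(f"expected savings per patient with POC: ${lab - poc:.2f}")
```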
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification systems (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
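The classification step, dimension reduction followed by EM clustering, is straightforward to prototype. The sensitivity matrix below is random stand-in data with one row per basin, not the study's actual sensitivity scores.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
sensitivities = rng.normal(size=(431, 12))   # 431 basins x 12 hypothetical scores

# Reduce each basin's sensitivity signature, then cluster with an EM-fitted
# Gaussian mixture to form the sensitivity-based classes (S-Class).
scores = PCA(n_components=3).fit_transform(sensitivities)
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(scores)
print(np.bincount(labels))                   # basins per sensitivity class
```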
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego
2015-12-01
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, ‘HTS-DCYA assay’, is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Lee, Sangyeop; Choo, Jaebum
2016-06-01
A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL-1. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner.
Occupancy estimation and the closure assumption
Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.
2009-01-01
1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.
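The interplay of detection probability, closure, and bias can be explored with a small simulation: generate detection histories under known occupancy ψ and detection p, fit the standard closed-model likelihood, and compare with the naive at-least-one-detection estimate. This is a generic sketch of the closed model, not the authors' extended procedure for testing closure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
S, K, psi, p = 500, 3, 0.6, 0.4             # sites, surveys, occupancy, detection
z = rng.random(S) < psi                     # true occupancy state per site
y = (rng.random((S, K)) < p) & z[:, None]   # detection histories

def negloglik(theta):
    psi_, p_ = 1.0 / (1.0 + np.exp(-theta)) # parameters on the logit scale
    d = y.sum(axis=1)
    lik = psi_ * p_**d * (1 - p_)**(K - d) + (1 - psi_) * (d == 0)
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=np.zeros(2))
print(1.0 / (1.0 + np.exp(-fit.x)))         # ~[0.6, 0.4]: recovered psi and p
print((y.sum(axis=1) > 0).mean())           # naive occupancy, biased low (~0.47)
```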
Examining the intersection of sex and stress in modelling neuropsychiatric disorders.
Goel, N; Bale, T L
2009-03-01
Sex-biased neuropsychiatric disorders, including major depressive disorder and schizophrenia, are the major cause of disability in the developed world. Elevated stress sensitivity has been proposed as a key underlying factor in disease onset. Sex differences in stress sensitivity are associated with corticotrophin-releasing factor (CRF) and serotonin neurotransmission, which are important central regulators of mood and coping responses. To elucidate the underlying neurobiology of stress-related disease predisposition, it is critical to develop appropriate animal models of stress pathway dysregulation. Furthermore, the inclusion of sex difference comparisons in stress responsive behaviours, physiology and central stress pathway maturation in these models is essential. Recent studies by our laboratory and others have begun to investigate the intersection of stress and sex where the development of mouse models of stress pathway dysregulation via prenatal stress experience or early-life manipulations has provided insight into points of developmental vulnerability. In addition, examination of the maturation of these pathways, including the functional importance of the organisational and activational effects of gonadal hormones on stress responsivity, is essential for determination of when sex differences in stress sensitivity may begin. In such studies, we have detected distinct sex differences in stress coping strategies where activational effects of testosterone produced females that displayed male-like strategies in tests of passive coping, but were similar to females in tests of active coping. In a second model of elevated stress sensitivity, male mice experiencing prenatal stress early in gestation showed feminised physiological and behavioural stress responses, and were highly sensitive to a low dose of selective serotonin reuptake inhibitors. Analyses of expression and epigenetic patterns revealed changes in CRF and glucocorticoid receptor genes in these mice. Mechanistically, stress early in pregnancy produced a significant sex-dependent effect on placental gene expression that was supportive of altered foetal transport of key growth factors and nutrients. These mouse models examining alterations and hormonal effects on development of stress pathways provide necessary insight into how specific stress responses can be reprogrammed early in development resulting in sex differences in stress sensitivity and neuropsychiatric disease vulnerability.
Dusenberry, Michael W; Brown, Charles K; Brewer, Kori L
2017-02-01
To construct an artificial neural network (ANN) model that can predict the presence of acute CT findings with both high sensitivity and high specificity when applied to the population of patients ≥ 65 years of age who have incurred minor head injury after a fall. An ANN was created in the Python programming language using a population of 514 patients ≥ 65 years of age presenting to the ED with minor head injury after a fall. The patient dataset was divided into three parts: 60% for "training", 20% for "cross validation", and 20% for "testing". Sensitivity, specificity, positive and negative predictive values, and accuracy were determined by comparing the model's predictions to the actual correct answers for each patient. On the "cross validation" data, the model attained a sensitivity ("recall") of 100.00%, specificity of 78.95%, PPV ("precision") of 78.95%, NPV of 100.00%, and accuracy of 88.24% in detecting the presence of positive head CTs. On the "test" data, the model attained a sensitivity of 97.78%, specificity of 89.47%, PPV of 88.00%, NPV of 98.08%, and accuracy of 93.14% in detecting the presence of positive head CTs. ANNs show great potential for predicting CT findings in the population of patients ≥ 65 years of age presenting with minor head injury after a fall. As a good first step, the ANN showed comparable sensitivity, predictive values, and accuracy, with a much higher specificity than the existing decision rules in clinical usage for predicting head CTs with acute intracranial findings. Copyright © 2016 Elsevier Inc. All rights reserved.
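All five reported metrics derive from a single 2x2 confusion matrix of predictions against actual CT findings. A minimal sketch of the calculation follows; the counts are one hypothetical matrix consistent with the reported test-set figures (102 patients, roughly 20% of 514), since the actual table is not given in the abstract.

    # Classification metrics from a 2x2 confusion matrix.
    # Counts are hypothetical, chosen to be consistent with the reported
    # test-set figures; the study's actual table is not shown here.

    def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),           # recall
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),                   # precision
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
        }

    print(binary_metrics(tp=44, fp=6, tn=51, fn=1))
    # -> sensitivity 0.978, specificity 0.895, ppv 0.88, npv 0.981, accuracy 0.931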
NASA Technical Reports Server (NTRS)
Hyer, M. W.
1980-01-01
The determination of the stress distribution in the inner lap of double-lap, double-bolt joints using photoelastic models of the joint is discussed. The principal idea is to fabricate the inner lap of a photoelastic material and to use a photoelastically insensitive material for the two outer laps. With this setup, polarized light transmitted through the stressed model responds principally to the stressed inner lap. The model geometry, the procedures for making and testing the model, and test results are described.
Explosive response model evaluation using the explosive H6
NASA Astrophysics Data System (ADS)
Sutherland, Gerrit T.; Burns, Joseph
2000-04-01
Reactive rate model parameters for a two-term Lee-Tarver [simplified ignition and growth (SIG)] model were obtained for the explosive H6 from modified gap test data. This model was used to perform simulations of the underwater sensitivity test (UST) using the CTH hydrocode. Reaction was predicted in the simulations for the same water gaps at which reaction was observed in the UST. The expansions observed for the UST samples were not simulated correctly; this is attributed to the density equilibrium conditions imposed between unreacted and reacted components in CTH for the Lee-Tarver model.
Test of a geometric model for the modification stage of simple impact crater development
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Coderre, J. M.; Rupert, J.; Garvin, J. B.
1989-01-01
This paper presents a geometric model of the transient cavity of an impact crater and the subsequent collapse of its walls to form a crater filled by an interior breccia lens. The model is tested by comparing the volume of slump material calculated from known dimensional parameters with the volume of the breccia lens estimated on the basis of observational data. Results obtained from the model were found to be consistent with observational data, particularly in view of the model's high sensitivity to its input parameters.
Verma, Rajeshwar P; Matthews, Edwin J
2015-03-01
This is part II of an in silico investigation of chemical-induced eye injury that was conducted at FDA's CFSAN. Serious eye damage caused by chemicals (eye corrosion) is assessed using the rabbit Draize test, and this endpoint is an essential part of hazard identification and labeling of industrial and consumer products to ensure occupational and consumer safety. There is an urgent need to develop an alternative to the Draize test because the EU's 7th amendment to the Cosmetic Directive (EC, 2003; 76/768/EEC) and recast Regulation now ban animal testing on all cosmetic product ingredients, and the EU's REACH Program limits animal testing for chemicals in commerce. Although in silico methods have been reported for eye irritation (reversible damage), QSARs specific for eye corrosion (irreversible damage) have not been published. This report describes the development of 21 ANN c-QSAR models (QSAR-21) for assessing the eye corrosion potential of chemicals using a large and diverse CFSAN data set of 504 chemicals, ADMET Predictor's three sensitivity analyses, and ANNE classification functionalities with 20% test set selection from seven different methods. QSAR-21 models were internally and externally validated and exhibited high predictive performance: average statistics for the training, verification, and external test sets of these models were 96/96/94% sensitivity and 91/91/90% specificity. Copyright © 2014 Elsevier Inc. All rights reserved.
Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li
2014-01-01
Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological, and metallurgical fields owing to its high sensitivity, wide measurement range, and fast analytical speed. However, optimizing the method is difficult because many parameters affect its sensitivity and peak broadening, so optimal conditions have generally been sought through trial experiments. The present paper proposes a mathematical model relating the operating parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system; the model proved accurate when theoretical simulations were compared with experimental results for an arsanilic acid standard solution. Finally, the paper presents a relation map between the parameters and the sensitivity/broadening coefficients, and concludes that the gas-liquid separator (GLS) volume, carrier solution flow rate, and sample loop volume are the factors that most strongly affect sensitivity and broadening. Optimizing these three factors with the relation map improved the relative sensitivity by a factor of 2.9 and reduced the relative broadening to 0.76 of its original value. This model can provide theoretical guidance for the optimization of experimental conditions.
Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho
2017-08-01
The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.
Boehnke, Mitchell; Patel, Nayana; McKinney, Kristin; Clark, Toshimasa
The Society of Radiologists in Ultrasound (SRU 2005) and the American Thyroid Association (ATA 2009 and ATA 2015) have published algorithms regarding thyroid nodule management. Kwak et al. and other groups have described models that estimate thyroid nodules' malignancy risk. The aim of our study is to use Kwak's model to evaluate the tradeoffs between the sensitivity and specificity of the SRU 2005, ATA 2009, and ATA 2015 management algorithms. 1,000,000 thyroid nodules were modeled in MATLAB. Ultrasound characteristics were modeled after published data. Malignancy risk was estimated per Kwak's model and malignancy status was assigned as a binary variable. All nodules were then assessed using the published management algorithms. With the malignancy variable as condition positivity and the algorithms' recommendation for FNA as test positivity, diagnostic performance was calculated. Modeled nodule characteristics mimic those of Kwak et al. 12.8% of nodules were assigned as malignant (malignancy risk range of 2.0-98%). FNA was recommended for 41% of nodules by SRU 2005, 66% by ATA 2009, and 82% by ATA 2015. Sensitivity and specificity differed significantly (p < 0.0001): 49% and 60% for SRU 2005; 81% and 36% for ATA 2009; and 95% and 20% for ATA 2015. The SRU 2005, ATA 2009, and ATA 2015 algorithms are used routinely in clinical practice to determine whether thyroid nodule biopsy is indicated. We demonstrate significant differences in these algorithms' diagnostic performance, which result in a compromise between sensitivity and specificity. Copyright © 2017 Elsevier Inc. All rights reserved.
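A minimal sketch of this kind of Monte Carlo evaluation follows. The risk distribution and the FNA decision rule are invented stand-ins for the Kwak model and the published algorithms, which are not reproduced here.

    # Monte Carlo evaluation of a nodule-management rule.
    # The beta risk distribution and the 5% FNA threshold are hypothetical
    # placeholders for the Kwak model and the SRU/ATA algorithms.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    risk = rng.beta(0.5, 3.4, size=n)   # placeholder per-nodule malignancy risk
    malignant = rng.random(n) < risk    # malignancy assigned as a binary variable

    fna = risk > 0.05                   # placeholder management rule

    tp, fp = np.sum(fna & malignant), np.sum(fna & ~malignant)
    tn, fn = np.sum(~fna & ~malignant), np.sum(~fna & malignant)
    print(f"sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")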
Rollinson, Christine R; Liu, Yao; Raiho, Ann; Moore, David J P; McLachlan, Jason; Bishop, Daniel A; Dye, Alex; Matthes, Jaclyn H; Hessl, Amy; Hickler, Thomas; Pederson, Neil; Poulter, Benjamin; Quaife, Tristan; Schaefer, Kevin; Steinkamp, Jörg; Dietze, Michael C
2017-07-01
Ecosystem models show divergent responses of the terrestrial carbon cycle to global change over the next century. Individual model evaluation and multimodel comparisons with data have largely focused on individual processes at subannual to decadal scales. Thus far, data-based evaluations of emergent ecosystem responses to climate and CO2 at multidecadal and centennial timescales have been rare. We compared the sensitivity of net primary productivity (NPP) to temperature, precipitation, and CO2 in ten ecosystem models with the sensitivities found in tree-ring reconstructions of NPP and raw ring-width series at six temperate forest sites. These model-data comparisons were evaluated at three temporal extents to determine whether the rapid, directional changes in temperature and CO2 in the recent past skew our observed responses to multiple drivers of change. All models tested here were more sensitive to low growing-season precipitation than tree-ring NPP and ring widths in the past 30 years, although some model precipitation responses were more consistent with tree rings when evaluated over a full century. Similarly, all models had negative or no response to warm growing-season temperatures, while tree-ring data showed consistently positive effects of temperature. Although precipitation responses were least consistent among models, differences among models in their responses to CO2 drive divergence and ensemble uncertainty in relative change in NPP over the past century. Changes in forest composition within models had no effect on climate or CO2 sensitivity. Fire in model simulations reduced model sensitivity to climate and CO2, but only over the course of multiple centuries. Formal evaluation of emergent model behavior at multidecadal and multicentennial timescales is essential to reconciling model projections with observed ecosystem responses to past climate change. Future evaluation should focus on improved representation of disturbance and biomass change as well as the feedbacks with moisture balance and CO2 in individual models. © 2017 John Wiley & Sons Ltd.
Finite element study of human pelvis model in side impact for Chinese adult occupants.
Ma, Zhengwei; Lan, Fengchong; Chen, Jiqing; Liu, Weiguo
2015-01-01
The occupant's pelvis is very vulnerable to side collision in road accidents. Finite element (FE) studies on pelvic injury help to design occupant protection devices to improve vehicle safety. This study aimed to develop a highly biofidelic pelvis model of Chinese adults and assess its sensitivity to variations in pelvis cortical bone thickness, bone material properties, and loading conditions. In this study, 4 different FE models of the pelvis were developed from the computed tomography (CT) data of a volunteer representing the 50th percentile Chinese male. Two of them were meshed using entirely hexahedral elements with variable and constant cortical thickness distribution (the V-Hex and C-Hex models), and the others were modeled with hexahedral elements for cancellous bone and variable or constant thickness shell elements for cortical bone (the V-HS and C-HS models). In model development, the semi-automatic multiblock meshing approach was employed to maintain the pelvis geometric curvature and generate a high-quality hexahedral mesh. Then, several simulations of postmortem human subject (PMHS) tests were performed to obtain the most accurate model for predicting pelvic injury. Based on the most accurate model, sensitivity studies were conducted to analyze the effects of the cortex thickness, Young's modulus of the cortical and cancellous bone, impactor velocity, and impactor with or without padding on the biomechanical responses and injuries of the pelvis. The results indicate that the models with variable cortical bone thickness give more accurate predictions than those with constant cortical thickness. Both the V-Hex and V-HS models are favorable for simulating pelvic response and injury, but the simulation results of the V-Hex model agree better with the tests. The sensitivity study shows that pelvic response is more sensitive to alterations in the Young's modulus of cortical bone than of cancellous bone. Compared to failure displacement, peak force is more sensitive to the cortical bone thickness; however, displacement is more sensitive than peak force to the Young's modulus of cancellous bone. The padding attached to the impactor plays a significant role in absorbing the impact energy and alleviating pelvic injury. The all-hex meshing method with variable cortical bone thickness has the highest accuracy but is time-consuming. The cortical bone plays a determining role in resisting pelvic fracture. Peak impact force appears to be a reasonable injury predictor for pelvic injury assessment. Appropriate energy absorbers installed in the car door can significantly reduce pelvic injury and will be beneficial for occupant protection.
NASA Technical Reports Server (NTRS)
Lim, J. T.; Wilkerson, G. G.; Raper, C. D. Jr; Gold, H. J.
1990-01-01
A differential equation model of vegetative growth of the soya bean plant (Glycine max (L.) Merrill cv. 'Ransom') was developed to account for plant growth in a phytotron system under variation of root temperature and nitrogen concentration in nutrient solution. The model was tested by comparing model outputs with data from four different experiments. Model predictions agreed fairly well with measured plant performance over a wide range of root temperatures and over a range of nitrogen concentrations in nutrient solution between 0.5 and 10.0 mmol NO3- in the phytotron environment. Sensitivity analyses revealed that the model was most sensitive to changes in parameters relating to carbohydrate concentration in the plant and nitrogen uptake rate.
Bayesian Latent Class Models in Malaria Diagnosis
Gonçalves, Luzia; Subtil, Ana; de Oliveira, M. Rosário; do Rosário, Virgílio; Lee, Pei-Wen; Shaio, Men-Fang
2012-01-01
Aims: The main focus of this study is to illustrate the importance of the statistical analysis in the evaluation of the accuracy of malaria diagnostic tests, without admitting a reference test, exploring a dataset (n = 3317) collected in São Tomé and Príncipe. Methods: Bayesian Latent Class Models (without and with constraints) are used to estimate the malaria infection prevalence, together with the sensitivities, specificities, and predictive values of three diagnostic tests (RDT, microscopy, and PCR), in four subpopulations simultaneously, based on an analysis stratified by age group (under and at least 5 years old) and fever status (febrile, afebrile). Results: In afebrile individuals at least five years old, the posterior mean of the malaria infection prevalence is 3.2% with a highest posterior density interval of [2.3–4.1]. The other three subpopulations (febrile individuals at least 5 years old, and afebrile or febrile children under 5 years) present a higher prevalence of around 10.3% [8.8–11.7]. In afebrile children under five years old, the sensitivity of microscopy is 50.5% [37.7–63.2]. In children under five, the estimated sensitivities/specificities of the RDT are 95.4% [90.3–99.5]/93.8% [91.6–96.0] (afebrile) and 94.1% [87.5–99.4]/97.5% [95.5–99.3] (febrile); in individuals at least five years old, they are 96.0% [91.5–99.7]/98.7% [98.1–99.2] (afebrile) and 97.9% [95.3–99.8]/97.7% [96.6–98.6] (febrile). The PCR yields the most reliable results in all four subpopulations. Conclusions: The utility of this RDT in the field seems to be relevant. However, in all subpopulations, the data provide enough evidence to suggest caution with the positive predictive values of the RDT. Microscopy has poor sensitivity compared to the other tests, particularly in afebrile children under 5 years. This type of finding reveals the danger of statistical analysis based on microscopy as a reference test. Bayesian Latent Class Models provide a powerful tool to evaluate malaria diagnostic tests, taking into account different groups of interest. PMID:22844405
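Under the conditional-independence assumption used in such latent class models, the probability of any pattern of test results is a two-component mixture over the unobserved infection status. A minimal sketch follows; the parameter values are illustrative only, not the paper's estimates.

    # Probability of a test-result pattern under a latent class model
    # with conditionally independent tests. Parameter values below are
    # illustrative, not the estimates reported in the study.

    def pattern_prob(results, prev, se, sp):
        """results: 0/1 outcomes for each test (e.g. RDT, microscopy, PCR)."""
        p_inf, p_not = prev, 1.0 - prev
        for r, se_k, sp_k in zip(results, se, sp):
            p_inf *= se_k if r else (1.0 - se_k)
            p_not *= (1.0 - sp_k) if r else sp_k
        return p_inf + p_not

    # e.g. all three tests positive at 10% prevalence
    print(pattern_prob((1, 1, 1), prev=0.10,
                       se=(0.95, 0.50, 0.99), sp=(0.95, 0.99, 0.99)))

Bayesian estimation then places priors on the prevalence and test parameters and updates them against the observed counts of each result pattern in each subpopulation.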
Mitchell, Dominic; Guertin, Jason R; Dubois, Anick; Dubé, Marie-Pierre; Tardif, Jean-Claude; Iliza, Ange Christelle; Fanton-Aita, Fiorella; Matteau, Alexis; LeLorier, Jacques
2018-04-01
Statin (HMG-CoA reductase inhibitor) therapy is the mainstay dyslipidemia treatment and reduces the risk of a cardiovascular (CV) event (CVE) by up to 35%. However, adherence to statin therapy is poor. One reason patients discontinue statin therapy is musculoskeletal pain and the associated risk of rhabdomyolysis. Research is ongoing to develop a pharmacogenomics (PGx) test for statin-induced myopathy as an alternative to the current diagnosis method, which relies on creatine kinase levels. The potential economic value of a PGx test for statin-induced myopathy is unknown. We developed a lifetime discrete event simulation (DES) model for patients 65 years of age initiating a statin after a first CVE consisting of either an acute myocardial infarction (AMI) or a stroke. The model evaluates the potential economic value of a hypothetical PGx test for diagnosing statin-induced myopathy. We have assessed the model over the spectrum of test sensitivity and specificity parameters. Our model showed that a strategy with a perfect PGx test had an incremental cost-utility ratio of 4273 Canadian dollars ($Can) per quality-adjusted life year (QALY). The probabilistic sensitivity analysis shows that when the payer willingness-to-pay per QALY reaches $Can12,000, the PGx strategy is favored in 90% of the model simulations. We found that a strategy favoring patients staying on statin therapy is cost effective even if patients maintained on statin are at risk of rhabdomyolysis. Our results are explained by the fact that statins are highly effective in reducing the CV risk in patients at high CV risk, and this benefit largely outweighs the risk of rhabdomyolysis.
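The incremental cost-utility ratio reported above is simply the cost difference between strategies divided by the QALY difference. The lifetime totals below are hypothetical, chosen only to reproduce a ratio close to the reported $Can 4273 per QALY; they are not outputs of the published model.

    # Incremental cost-utility ratio (ICUR): extra cost per extra QALY.
    # Lifetime totals are hypothetical illustrations, not model outputs.
    cost_pgx, cost_usual = 61_000.0, 57_155.0   # $Can
    qaly_pgx, qaly_usual = 9.10, 8.20           # QALYs

    icur = (cost_pgx - cost_usual) / (qaly_pgx - qaly_usual)
    print(f"ICUR = {icur:,.0f} $Can per QALY")  # ~4,272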
The cost-effectiveness of screening for colorectal cancer.
Telford, Jennifer J; Levy, Adrian R; Sambrook, Jennifer C; Zou, Denise; Enns, Robert A
2010-09-07
Published decision analyses show that screening for colorectal cancer is cost-effective. However, because of the number of tests available, the optimal screening strategy in Canada is unknown. We estimated the incremental cost-effectiveness of 10 strategies for colorectal cancer screening, as well as no screening, incorporating quality of life, noncompliance and data on the costs and benefits of chemotherapy. We used a probabilistic Markov model to estimate the costs and quality-adjusted life expectancy of 50-year-old average-risk Canadians without screening and with screening by each test. We populated the model with data from the published literature. We calculated costs from the perspective of a third-party payer, with inflation to 2007 Canadian dollars. Of the 10 strategies considered, we focused on three tests currently being used for population screening in some Canadian provinces: low-sensitivity guaiac fecal occult blood test, performed annually; fecal immunochemical test, performed annually; and colonoscopy, performed every 10 years. These strategies reduced the incidence of colorectal cancer by 44%, 65% and 81%, and mortality by 55%, 74% and 83%, respectively, compared with no screening. These strategies generated incremental cost-effectiveness ratios of $9159, $611 and $6133 per quality-adjusted life year, respectively. The findings were robust to probabilistic sensitivity analysis. Colonoscopy every 10 years yielded the greatest net health benefit. Screening for colorectal cancer is cost-effective over conventional levels of willingness to pay. Annual high-sensitivity fecal occult blood testing, such as a fecal immunochemical test, or colonoscopy every 10 years offer the best value for the money in Canada.
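A probabilistic Markov model of this kind advances a cohort through health states with a transition matrix each cycle, accumulating discounted costs and QALYs per strategy; the ICERs above compare those totals across strategies. A minimal three-state sketch follows, with all transition probabilities, costs, utilities, and the discount rate invented for illustration rather than taken from the published model.

    # Minimal Markov cohort sketch: Well -> CRC -> Dead, annual cycles.
    # All numbers are invented placeholders, including the 3% discount rate.
    import numpy as np

    P = np.array([[0.994, 0.005, 0.001],    # from Well
                  [0.000, 0.850, 0.150],    # from CRC
                  [0.000, 0.000, 1.000]])   # Dead is absorbing
    cost = np.array([50.0, 25_000.0, 0.0])  # annual cost per state
    util = np.array([1.00, 0.70, 0.00])     # utility per state
    disc = 0.03                             # assumed annual discount rate

    state = np.array([1.0, 0.0, 0.0])       # cohort starts in Well at age 50
    total_cost = total_qaly = 0.0
    for year in range(50):
        df = 1.0 / (1.0 + disc) ** year
        total_cost += df * (state @ cost)
        total_qaly += df * (state @ util)
        state = state @ P

    print(f"lifetime cost = {total_cost:,.0f}, QALYs = {total_qaly:.2f}")

Running the same machinery once per screening strategy, with strategy-specific transitions and costs, and differencing the totals yields incremental cost-effectiveness ratios like those quoted above.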
STREAM TEMPERATURE SIMULATION OF FORESTED RIPARIAN AREAS: II. MODEL APPLICATION
The SHADE-HSPF modeling system described in a companion paper has been tested and applied to the Upper Grande Ronde (UGR) watershed in northeast Oregon. Sensitivities of stream temperature to the heat balance parameters in Hydrologic Simulation Program-FORTRAN (HSPF) and the ripa...
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off, and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large curvature models, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
Diagnostics of boundary layer transition by shear stress sensitive liquid crystals
NASA Astrophysics Data System (ADS)
Shapoval, E. S.
2016-10-01
Previous research indicates that the visualization of boundary layer transition on metal models in wind tunnels (WT), a fundamental problem in experimental aerodynamics, has not yet been solved. At TsAGI, together with the Khristianovich Institute of Theoretical and Applied Mechanics (ITAM), a flow visualization method based on shear stress sensitive liquid crystals (LC) was proposed. This method allows several flow conditions to be tested in one wind tunnel run and does not require covering the investigated model with a special heat-insulating coating that would spoil the model geometry. The LC coating is easily applied to the model surface by spray or even by brush; its thickness is about 40 micrometers and it does not spoil the surface quality. Initially the coating has a definite color; under shear stress the LC coating changes color, and this change is proportional to the shear stress. The whole process can be observed visually and is recorded by camera during the tests. The findings of the research showed that it is possible to visualize boundary layer transition, flow separation, shock waves, and the flow pattern as a whole. The proposed method of shear stress sensitive liquid crystals therefore shows promise for future research.
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into three subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even though such networks are often considered black-box models.
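The validation-set rule described above is the classic early-stopping criterion. A minimal framework-agnostic sketch follows; the two callables are hypothetical placeholders for a real network's training and validation steps, not the study's code.

    # Early stopping: halt training once validation loss stops improving,
    # then evaluate on the untouched test set. The callables are
    # placeholders for an actual ANN's epoch update and validation loss.

    def train_with_early_stopping(train_one_epoch, val_loss, patience=10,
                                  max_epochs=500):
        best, best_epoch, epoch = float("inf"), 0, 0
        while epoch < max_epochs and epoch - best_epoch < patience:
            train_one_epoch()        # one pass of weight updates
            loss = val_loss()        # generalization check
            if loss < best:
                best, best_epoch = loss, epoch
            epoch += 1
        return best_epoch            # a real loop would restore weights saved here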
Analyzing the sensitivity of a flood risk assessment model towards its input data
NASA Astrophysics Data System (ADS)
Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene
2016-11-01
The Small Island Developing States are characterized by unstable economies and low-lying, densely populated cities, resulting in high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable, yet satisfactory input data for such assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damage categories were taken into account: building, road, and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter for determining building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.
Weight-elimination neural networks applied to coronary surgery mortality prediction.
Ennett, Colleen M; Frize, Monique
2003-06-01
The objective was to assess the effectiveness of the weight-elimination cost function in improving the classification performance of artificial neural networks (ANNs) and to observe how changing the a priori distribution of the training set affects network performance. Backpropagation feedforward ANNs with and without weight-elimination estimated mortality for coronary artery surgery patients. The ANNs were trained and tested on cases with 32 input variables describing the patient's medical history; the output variable was in-hospital mortality (mortality rates: training 3.7%, test 3.8%). Artificial training sets with mortality rates of 20%, 50%, and 80% were created to observe the impact of training with a higher-than-normal prevalence. When the results were averaged, weight-elimination networks achieved higher sensitivity rates than those without weight-elimination. Networks trained on higher-than-normal prevalence achieved higher sensitivity rates at the cost of lower specificity and correct classification. The weight-elimination cost function can improve classification performance when the network is trained with a higher-than-normal prevalence. A network trained with a moderately high artificial mortality rate (20%) can improve the sensitivity of the model without significantly affecting other aspects of the model's performance. The ANN mortality model achieved performance comparable to that of additive and statistical models for coronary surgery mortality estimation in the literature.
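The abstract does not spell out the penalty's form; the sketch below uses the standard weight-elimination cost of Weigend et al., in which each weight's contribution saturates, so small weights are pruned toward zero while large, useful weights incur a roughly constant penalty. The paper's implementation may differ in detail.

    # Weight-elimination cost (Weigend et al. form): error plus
    # lam * sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2). Small weights are pushed
    # toward zero; large weights contribute a near-constant penalty.
    import numpy as np

    def weight_elimination_cost(error, weights, lam=1e-4, w0=1.0):
        r2 = (np.asarray(weights) / w0) ** 2
        return error + lam * np.sum(r2 / (1.0 + r2))

    print(weight_elimination_cost(0.25, [0.01, -0.02, 1.5, -2.3]))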
Dowling, N Maritza; Bolt, Daniel M; Deng, Sien
2016-12-01
When assessments are primarily used to measure change over time, it is important to evaluate items according to their sensitivity to change, specifically. Items that demonstrate good sensitivity to between-person differences at baseline may not show good sensitivity to change over time, and vice versa. In this study, we applied a longitudinal factor model of change to a widely used cognitive test designed to assess global cognitive status in dementia, and contrasted the relative sensitivity of items to change. Statistically nested models were estimated introducing distinct latent factors related to initial status differences between test-takers and within-person latent change across successive time points of measurement. Models were estimated using all available longitudinal item-level data from the Alzheimer's Disease Assessment Scale-Cognitive subscale, including participants representing the full-spectrum of disease status who were enrolled in the multisite Alzheimer's Disease Neuroimaging Initiative. Five of the 13 Alzheimer's Disease Assessment Scale-Cognitive items demonstrated noticeably higher loadings with respect to sensitivity to change. Attending to performance change on only these 5 items yielded a clearer picture of cognitive decline more consistent with theoretical expectations in comparison to the full 13-item scale. Items that show good psychometric properties in cross-sectional studies are not necessarily the best items at measuring change over time, such as cognitive decline. Applications of the methodological approach described and illustrated in this study can advance our understanding regarding the types of items that best detect fine-grained early pathological changes in cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Laboratory and field testing of commercial rotational seismometers
Nigbor, R.L.; Evans, J.R.; Hutt, C.R.
2009-01-01
There are a small number of commercially available sensors to measure rotational motion in the frequency and amplitude ranges appropriate for earthquake motions on the ground and in structures. However, the performance of these rotational seismometers has not been rigorously and independently tested and characterized for earthquake monitoring purposes as is done for translational strong- and weak-motion seismometers. Quantities such as sensitivity, frequency response, resolution, and linearity are needed for the understanding of recorded rotational data. To address this need, we, with assistance from colleagues in the United States and Taiwan, have been developing performance test methodologies and equipment for rotational seismometers. In this article the performance testing methodologies are applied to samples of a commonly used commercial rotational seismometer, the eentec model R-1; results were obtained for several such sensors across test sequences in 2006, 2007, and 2008. Performance testing of these sensors consisted of measuring (1) sensitivity and frequency response; (2) clip level; (3) self noise and resolution; and (4) cross-axis sensitivity, both rotational and translational. These sensor-specific results will assist in understanding the performance envelope of the R-1 rotational seismometer, and the test methodologies can be applied to other rotational seismometers.
O'Connell, Thomas F; Carpenter, Patrick S; Caballero, Nadia; Putnam, Andrew J; Steere, Joshua T; Matz, Gregory J; Foecking, Eileen M
2014-01-01
Vicodin, the combination drug of acetaminophen and the opioid hydrocodone, is one of the most prescribed drugs on the market today. Opioids have demonstrated the ability to paradoxically cause increased pain sensitivity in users, a phenomenon called opioid-induced hyperalgesia (OIH). While selected opioids have been shown to produce OIH symptoms in an animal model, hydrocodone and the combination drug Vicodin have yet to be studied. The purpose of this study was to explore the effect of chronic exposure to high-dose Vicodin or its components on sensitivity to both thermal and mechanical pain. Animals were randomly divided into 4 groups (Vicodin, acetaminophen, hydrocodone, or vehicle control) and administered the drug daily for 120 days. Rats were subsequently tested for thermal and mechanical sensitivity. The rats in the Vicodin group displayed a significant decrease in withdrawal time to thermal pain. The rats receiving acetaminophen, hydrocodone, and vehicle showed no statistically significant hypersensitivity in thermal testing. None of the groups demonstrated statistically significant hypersensitivity in mechanical testing. The data suggest that Vicodin produces signs of OIH in a rodent model. However, increased pain sensitivity was noted only in the thermal pathway, and the hypersensitivity was seen only with the opioid combination drug, not the opioid alone. The results of this study support the results of previous rodent opioid studies while generating further questions about the specific properties of Vicodin that contribute to pain hypersensitivity. The growing use of Vicodin to treat chronic pain necessitates further research into this paradoxical pain response.
Martín-Sánchez, Ana; Valera-Marín, Guillermo; Hernández-Martínez, Adoración; Lanuza, Enrique; Martínez-García, Fernando; Agustín-Pavón, Carmen
2015-01-01
Virgin adult female mice display nearly spontaneous maternal care towards foster pups after a short period of sensitization. This indicates that maternal care is triggered by sensory stimulation provided by the pups and that its onset is largely independent of the physiological events related to gestation, parturition, and lactation. Conversely, the factors influencing maternal aggression are poorly understood. In this study, we sought to characterize two models of maternal sensitization in the outbred CD1 strain. To do so, a group of virgin females (godmothers) was exposed to continuous cohabitation with a lactating dam and her pups from the moment of parturition, whereas a second group (pup-sensitized females) was exposed 2 h daily to foster pups. Both groups were tested for maternal behavior on postnatal days 2-4. Godmothers expressed full maternal care from the first test. Also, they expressed higher levels of crouching than dams. Pup-sensitized females differed from dams in all measures of pup-directed behavior in the first test, and expressed full maternal care after two sessions of contact with pups. However, both protocols failed to induce maternal aggression toward a male intruder after the full onset of pup-directed maternal behavior, even in the presence of pups. Our study confirms that adult female mice need a short sensitization period before the onset of maternal care. Further, it shows that pup-oriented and non-pup-oriented components of maternal behavior are under different physiological control. We conclude that the godmother model might be useful to study the physiological and neural bases of the maternal behavior repertoire.
2011-01-01
Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines, and Random Forests, can improve the accuracy, sensitivity, and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve, and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90); however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73), and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66), and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most the sensitivity was around or even lower than a median value of 0.5. Conclusions: When sensitivity, specificity, and overall classification accuracy are all taken into account, Random Forests and Linear Discriminant Analysis rank first among the classifiers tested for prediction of dementia from several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity, and specificity of dementia predictions from neuropsychological testing. PMID:21849043
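A minimal sketch of this 5-fold comparison protocol follows, using scikit-learn classifiers and synthetic data as stand-ins for the study's tools and its 10 neuropsychological predictors.

    # 5-fold cross-validated sensitivity/specificity for several classifiers.
    # Synthetic data stands in for the neuropsychological test scores.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import StratifiedKFold
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    models = {"SVM": SVC(),
              "Random Forest": RandomForestClassifier(random_state=0),
              "LDA": LinearDiscriminantAnalysis()}

    for name, clf in models.items():
        sens, spec = [], []
        for tr, te in StratifiedKFold(n_splits=5).split(X, y):
            pred = clf.fit(X[tr], y[tr]).predict(X[te])
            sens.append(recall_score(y[te], pred))               # sensitivity
            spec.append(recall_score(y[te], pred, pos_label=0))  # specificity
        print(name, round(np.mean(sens), 2), round(np.mean(spec), 2))

Per-fold metric distributions collected this way are what a Friedman test would then compare across classifiers.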
Detecting potential impacts of deep subsurface CO2 injection on shallow drinking water
NASA Astrophysics Data System (ADS)
Smyth, R. C.; Yang, C.; Romanak, K.; Mickler, P. J.; Lu, J.; Hovorka, S. D.
2012-12-01
Presented here are results from one aspect of collective research conducted at the Gulf Coast Carbon Center, BEG, Jackson School at UT Austin. The biggest hurdle to public acceptance of CCS is showing that drinking water resources will not be impacted. Since the late 1990s our group has been supported by US DOE NETL and private industry to research how best to detect potential impacts to shallow (0 to ~0.25 km) subsurface drinking water from deep (~1 to 3.5 km) injection of CO2. Work has included, and continues to include, (1) field sampling and testing, (2) laboratory batch experiments, and (3) geochemical modeling. The objective has been to identify the most sensitive geochemical indicators using data from research-level investigations, which can be economically applied on an industrial scale. The worst-case scenario would be introduction of CO2 directly into drinking water from a leaking wellbore at a brownfield site. This is unlikely for a properly screened and/or maintained site, but needs to be considered. Our results show the aquifer matrix (carbonate vs. clastic) to be critical to the interpretation of pH and carbonate (DIC, alkalinity, and δ13C of DIC) parameters because of the influence of water-rock reaction (buffering vs. non-buffering) on aqueous geochemistry. Field groundwater sampling sites to date are the Cranfield, MS and SACROC, TX CO2-EOR oilfields. Two major aquifer types are represented, one dominated by silicate (Cranfield) and the other by carbonate (SACROC) water-rock reactions. We tested the sensitivity of geochemical indicators (pH, DIC, alkalinity, and δ13C of DIC) by modeling the effects of increasing pCO2 on aqueous geochemistry and by laboratory batch experiments, both with a partial pressure of CO2 gas (pCO2) of 1×10^5 Pa (1 atm). Aquifer matrix and groundwater data provided constraints for the geochemical models. We used results from the modeling and batch experiments to rank geochemical parameter sensitivity to increased pCO2 into weakly, mildly, and strongly sensitive categories for both aquifer systems. DIC concentration is strongly sensitive to increased pCO2 for both aquifers; however, CO2 outgassing during sampling complicates direct field measurement of DIC. Interpretation of data from in-situ push-pull aquifer tests is ongoing and will be used to augment the results summarized here. We are currently designing groundwater monitoring plans for two additional industrial-scale sites where we will further test the sensitivity and utility of our sampling approach.
Karolemeas, Katerina; de la Rua-Domenech, Ricardo; Cooper, Roderick; Goodchild, Anthony V; Clifton-Hadley, Richard S; Conlan, Andrew J K; Mitchell, Andrew P; Hewinson, R Glyn; Donnelly, Christl A; Wood, James L N; McKinley, Trevelyan J
2012-01-01
Bovine tuberculosis (bTB) is one of the most serious economic animal health problems affecting the cattle industry in Great Britain (GB), with incidence in cattle herds increasing since the mid-1980s. The single intradermal comparative cervical tuberculin (SICCT) test is the primary screening test in the bTB surveillance and control programme in GB and Ireland. The sensitivity (ability to detect infected cattle) of this test is central to the efficacy of the current testing regime, but most previous studies that have estimated test sensitivity (relative to the number of slaughtered cattle with visible lesions [VL] and/or positive culture results) lacked post-mortem data for SICCT test-negative cattle. The slaughter of entire herds ("whole herd slaughters" or "depopulations") that are infected by bTB is occasionally conducted in GB as a last-resort control measure to resolve intractable bTB herd breakdowns. These slaughters provide additional post-mortem data for SICCT test-negative cattle, allowing a rare opportunity to calculate the animal-level sensitivity of the test relative to the total number of SICCT test-positive and negative VL animals identified post-mortem (rSe). In this study, data were analysed from 16 whole herd slaughters (748 SICCT test-positive and 1031 SICCT test-negative cattle) conducted in GB between 1988 and 2010, using a Bayesian hierarchical model. The overall rSe estimate of the SICCT test at the severe interpretation was 85% (95% credible interval [CI]: 78-91%), and at the standard interpretation was 81% (95% CI: 70-89%). These estimates are more robust than those previously reported in GB due to the inclusion of post-mortem data from SICCT test-negative cattle.
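At its simplest, the animal-level sensitivity estimate conditions on the visible-lesion animals found at slaughter: of n lesioned animals, x were SICCT-positive. A minimal conjugate beta-binomial sketch of that core calculation follows; the counts and the flat prior are hypothetical (chosen near the reported 85% point estimate), and the published analysis is a hierarchical model across herds rather than this single pooled estimate.

    # Beta-binomial sketch of test sensitivity from post-mortem data:
    # x test-positive animals out of n with visible lesions.
    # Counts and the Beta(1, 1) prior are illustrative only.
    from scipy import stats

    x, n = 170, 200                          # hypothetical VL animals
    posterior = stats.beta(1 + x, 1 + n - x)
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"sensitivity ~ {posterior.mean():.2f} (95% CrI {lo:.2f}-{hi:.2f})")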
Dowdy, David W; Steingart, Karen R; Pai, Madhukar
2011-08-01
Undiagnosed and misdiagnosed tuberculosis (TB) drives the epidemic in India. Serological (antibody detection) TB tests are not recommended by any agency, but widely used in many countries, including the Indian private sector. The cost and impact of using serology compared with other diagnostic techniques is unknown. Taking a patient cohort conservatively equal to the annual number of serological tests done in India (1.5 million adults suspected of having active TB), we used decision analysis to estimate costs and effectiveness of sputum smear microscopy (US$3.62 for two smears), microscopy plus automated liquid culture (mycobacterium growth indicator tube [MGIT], US$20/test), and serological testing (anda-tb ELISA, US$20/test). Data on test accuracy and costs were obtained from published literature. We adopted the perspective of the Indian TB control sector and an analysis frame of 1 year. Our primary outcome was the incremental cost per disability-adjusted life year (DALY) averted. We performed one-way sensitivity analysis on all model parameters, with multiway sensitivity analysis on variables to which the model was most sensitive. If used instead of sputum microscopy, serology generated an estimated 14,000 more TB diagnoses, but also 121,000 more false-positive diagnoses, 102,000 fewer DALYs averted, and 32,000 more secondary TB cases than microscopy, at approximately four times the incremental cost (US$47.5 million versus US$11.9 million). When added to high-quality sputum smears, MGIT culture was estimated to avert 130,000 incremental DALYs at an incremental cost of US$213 per DALY averted. Serology was dominated by (i.e., more costly and less effective than) MGIT culture and remained less economically favorable than sputum smear or TB culture in one-way and multiway sensitivity analyses. In India, sputum smear microscopy remains the most cost-effective diagnostic test available for active TB; efforts to increase access to quality-assured microscopy should take priority. In areas where high-quality microscopy exists and resources are sufficient, MGIT culture is more cost-effective than serology as an additional diagnostic test for TB. These data informed a recently published World Health Organization policy statement against serological tests.
NASA Astrophysics Data System (ADS)
Kao, S. C.; Naz, B. S.; Gangrade, S.; Ashfaq, M.; Rastogi, D.
2016-12-01
The magnitude and frequency of hydroclimate extremes are projected to increase in the conterminous United States (CONUS), with significant implications for future water resource planning and flood risk management. Nevertheless, apart from changes in the natural environment, the choice of model spatial resolution can also artificially influence the features of simulated extremes. To better understand how the spatial resolution of meteorological forcings may affect hydroclimate projections, we test the runoff sensitivity using the Variable Infiltration Capacity (VIC) model calibrated for each CONUS 8-digit hydrologic unit (HUC8) at 1/24° (~4 km) grid resolution. The 1980-2012 gridded Daymet and PRISM meteorological observations are used to conduct the 1/24° resolution control simulation. Comparative simulations are achieved by smoothing the 1/24° forcing to 1/12° and 1/8° resolutions, which are then used to drive the VIC model for the CONUS. In addition, we also test how the simulated high and low runoff conditions would react to changes in precipitation (±10%) and temperature (+1°C). The results are further analyzed for various types of hydroclimate extremes across different watersheds in the CONUS. This work helps us understand the sensitivity of simulated runoff to different spatial resolutions of climate forcings, as well as its sensitivity to different watershed sizes and characteristics of extreme events under future climate conditions.
Stenehjem, David D; Bellows, Brandon K; Yager, Kraig M; Jones, Joshua; Kaldate, Rajesh; Siebert, Uwe; Brixner, Diana I
2016-02-01
A prognostic test was developed to guide adjuvant chemotherapy (ACT) decisions in early-stage non-small cell lung cancer (NSCLC) adenocarcinomas. The objective of this study was to compare the cost-utility of the prognostic test to the current standard of care (SoC) in patients with early-stage NSCLC. Lifetime costs (2014 U.S. dollars) and effectiveness (quality-adjusted life-years [QALYs]) of ACT treatment decisions were examined using a Markov microsimulation model from a U.S. third-party payer perspective. Cancer stage distribution and the probability of receiving ACT with the SoC were based on data from an academic cancer center. The probability of receiving ACT with the prognostic test was estimated from a physician survey. Risk classification was based on the 5-year predicted NSCLC-related mortality. Treatment benefit with ACT was based on the prognostic score. Discounting at a 3% annual rate was applied to costs and QALYs. Deterministic one-way and probabilistic sensitivity analyses examined parameter uncertainty. Lifetime costs and effectiveness were $137,403 and 5.45 QALYs with the prognostic test and $127,359 and 5.17 QALYs with the SoC. The resulting incremental cost-effectiveness ratio for the prognostic test versus the SoC was $35,867/QALY gained. One-way sensitivity analyses indicated the model was most sensitive to the utility of patients without recurrence after ACT and the ACT treatment benefit. Probabilistic sensitivity analysis indicated the prognostic test was cost-effective in 65.5% of simulations at a willingness to pay of $50,000/QALY. The study suggests that using a prognostic test to guide ACT decisions in early-stage NSCLC is potentially cost-effective compared with the SoC based on globally accepted willingness-to-pay thresholds. Providing prognostic information to decision makers may help some patients with high-risk early-stage NSCLC receive appropriate adjuvant chemotherapy while avoiding the associated toxicities and costs in patients with low-risk disease. This study used an economic model to assess the effectiveness and costs of using a prognostic test to guide adjuvant chemotherapy decisions compared with the current standard of care in patients with NSCLC. When compared with the current standard of care, the prognostic test was potentially cost-effective at commonly accepted thresholds in the U.S. This study can be used to help inform decision makers who are considering using prognostic tests. ©AlphaMed Press.
Chatziprodromidou, I P; Apostolou, T
2018-04-01
The aim of the study was to estimate the sensitivity and specificity of an enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies to Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with the STRADAS-paratuberculosis guidelines for reporting test accuracy. We initially tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that assumes conditional independence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on the posterior estimates. We concluded that ELISA can be used to screen bulk milk, with IB used as a confirmatory test whenever needed.
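The study's full Bayesian model is not reproduced here, but the role of informative priors can be illustrated with a simpler Monte Carlo analogue: propagating Beta priors on test sensitivity and specificity through the Rogan-Gladen prevalence correction. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 100_000

# Informative Beta priors on IB sensitivity/specificity (hypothetical
# hyperparameters chosen to center near the reported estimates).
se = rng.beta(90, 3, n_draws)    # centered near 0.97
sp = rng.beta(50, 5, n_draws)    # centered near 0.91

apparent_prev = 0.30             # hypothetical apparent (test-positive) rate

# Rogan-Gladen correction: true prevalence from apparent prevalence.
true_prev = np.clip((apparent_prev + sp - 1.0) / (se + sp - 1.0), 0.0, 1.0)

lo, med, hi = np.percentile(true_prev, [2.5, 50.0, 97.5])
print(f"adjusted prevalence: {med:.3f} (95% interval {lo:.3f}-{hi:.3f})")
```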
Tsamados, Michel; Feltham, Daniel; Petty, Alek; Schroeder, David; Flocco, Daniela
2015-10-13
We present a modelling study of processes controlling the summer melt of the Arctic sea ice cover. We perform a sensitivity study and focus our interest on the thermodynamics at the ice-atmosphere and ice-ocean interfaces. We use the Los Alamos community sea ice model CICE, and additionally implement and test three new parametrization schemes: (i) a prognostic mixed layer; (ii) a three equation boundary condition for the salt and heat flux at the ice-ocean interface; and (iii) a new lateral melt parametrization. Recent additions to the CICE model are also tested, including explicit melt ponds, a form drag parametrization and a halodynamic brine drainage scheme. The various sea ice parametrizations tested in this sensitivity study introduce a wide spread in the simulated sea ice characteristics. For each simulation, the total melt is decomposed into its surface, bottom and lateral melt components to assess the processes driving melt and how this varies regionally and temporally. Because this study quantifies the relative importance of several processes in driving the summer melt of sea ice, this work can serve as a guide for future research priorities. © 2015 The Author(s).
Hwang, Eunjoo; Hu, Jingwen; Chen, Cong; Klein, Katelyn F; Miller, Carl S; Reed, Matthew P; Rupp, Jonathan D; Hallman, Jason J
2016-11-01
Occupant stature and body shape may have significant effects on injury risks in motor vehicle crashes, but current finite element (FE) human body models (HBMs) represent occupants with only a few sizes and shapes. Our recent studies have demonstrated that, by using a mesh morphing method, parametric FE HBMs can be rapidly developed to represent a diverse population. However, the biofidelity of those models across a wide range of human attributes has not been established. Therefore, the objectives of this study are 1) to evaluate the accuracy of HBMs considering subject-specific geometry information, and 2) to apply the parametric HBMs in a sensitivity analysis to identify the specific parameters affecting body responses in side impact conditions. Four side-impact tests with two male post-mortem human subjects (PMHSs) were selected to evaluate the accuracy of the geometry and impact responses of the morphed HBMs. For each PMHS test, three HBMs were simulated for comparison with the test results: the original Total Human Model for Safety (THUMS) v4.01 (O-THUMS), a parametric THUMS (P-THUMS), and a subject-specific THUMS (S-THUMS). The P-THUMS geometry was predicted from only age, sex, stature, and BMI using our statistical geometry models of skeleton and body shape, while the S-THUMS geometry was based on each PMHS's CT data. The simulation results showed a preliminary trend that the correlations between the P-THUMS-predicted impact responses and the four PMHS tests (mean CORA: 0.84, 0.78, 0.69, 0.70) were better than those between the O-THUMS and the normalized PMHS responses (mean CORA: 0.74, 0.72, 0.55, 0.63), and similar to the correlations between the S-THUMS and the PMHS tests (mean CORA: 0.85, 0.85, 0.67, 0.72). The sensitivity analysis using the P-THUMS showed that, in side impact conditions, the HBM skeleton and body shape geometries, as well as the body posture, were more important in modeling the occupant impact responses than the bone and soft tissue material properties and the padding stiffness within the given parameter ranges. More investigations are needed to further support these findings.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the quick contrast sensitivity function method (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
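The entropy-based stimulus selection underlying such adaptive procedures can be sketched for a one-parameter toy problem: maintain a grid posterior over a psychometric threshold and, on each trial, present the stimulus that minimizes the expected posterior entropy. This is a simplified stand-in for the multi-parameter quick CSF machinery; HADO's contribution would be to replace the diffuse prior below with an informative, hierarchically constructed one.

```python
import numpy as np

theta = np.linspace(-3.0, 0.0, 121)   # hypothesis grid for log-contrast threshold
stims = np.linspace(-3.0, 0.0, 61)    # candidate stimulus intensities
prior = np.full(theta.size, 1.0 / theta.size)  # diffuse prior

def p_correct(stim, th, slope=4.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function: P(correct | stimulus, threshold)."""
    return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-slope * (stim - th)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_stimulus(prior):
    """Choose the stimulus minimizing expected posterior entropy."""
    best_s, best_h = stims[0], np.inf
    for s in stims:
        lik_c = p_correct(s, theta)
        exp_h = 0.0
        for lik in (lik_c, 1.0 - lik_c):       # correct / incorrect outcomes
            joint = prior * lik
            p_resp = joint.sum()
            exp_h += p_resp * entropy(joint / p_resp)
        if exp_h < best_h:
            best_s, best_h = s, exp_h
    return best_s

rng = np.random.default_rng(0)
true_threshold = -1.5                          # simulated observer
for _ in range(40):                            # simulated adaptive experiment
    s = next_stimulus(prior)
    correct = rng.random() < p_correct(s, true_threshold)
    lik = p_correct(s, theta) if correct else 1.0 - p_correct(s, theta)
    prior = prior * lik
    prior /= prior.sum()

print("posterior mean threshold:", round(float(np.sum(theta * prior)), 3))
```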
NASA Astrophysics Data System (ADS)
Piruzyan, L. A.; Mikhailovskiy, Ye. M.; Piruzyan, A. L.
1999-12-01
The directions of laboratory and clinical studies aimed at experimental confirmation of the priority concept of 'laser histochemical surgery' are presented. The goal of the studies is to reproduce, in experimental models of a number of pathologies (in vivo and in vitro), the effect of 'sensitization to laser radiation by staining'. Testing of histochemical stains as sensitizers to laser irradiation of their 'address substrates', i.e., vitally stained intracellular structures that participate in the evolution of pathologic processes, is being planned. The processes include: (a) metabolic disorders in the brain cells, i.e., disseminated sclerosis; (b) generalized metabolic disorders--mucopolysaccharidosis and collagenoses (periarteritis nodosa, rheumatism, rheumatoid arthritis, sclerodermia); (c) metabolic disorders in individual organs--vessel atherosclerosis, hypercholesterolemia, myocardial infarction, cardiosclerosis, caries and parodontosis. The conditions of the studies are detailed in recommendations organized under the following headings: (1) disease name; (2) disease characteristics: (a) pathomorphologic, (b) biochemical; (3) stains revealing the disease signs and recommended for testing; (4) 'address substrates' of the stains that are targets for laser radiation; (5) lasers recommended for testing after cell staining in vivo in the corresponding pathology; (6) experimental models of the pathologies suggested for the testing; (7) criteria of stain efficiency as a target sensitizer to laser light (criteria of the efficiency of 'laser sensitization by staining'). Possible prospects for experimental and clinical medicine in the use of common histochemical stains and lasers, and for the introduction of 'laser histochemical surgery' into practice, are indicated, should the described concept be confirmed experimentally and clinically.
A Galilean Invariant Explicit Algebraic Reynolds Stress Model for Curved Flows
NASA Technical Reports Server (NTRS)
Girimaji, Sharath
1996-01-01
A Galilean invariant weak-equilibrium hypothesis that is sensitive to streamline curvature is proposed. The hypothesis leads to an algebraic Reynolds stress model for curved flows that is fully explicit and self-consistent. The model is tested in curved homogeneous shear flow: agreement with the Reynolds stress closure model is excellent, and agreement with available experimental data is adequate.
Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong
2016-02-01
In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data can negatively affect the classification performance of machine learning algorithms. Solutions for handling imbalanced datasets have been proposed, but their application to ADME modeling tasks is underexplored. In this paper, various strategies, including cost-sensitive learning and resampling methods, were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampled data, where misclassification rates for the minority class were 0.11 and 0.14 for the training and test sets, respectively. A consensus model with an enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and demonstrates the effectiveness of oversampling methods in dealing with imbalanced permeability data problems.
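The two families of strategies compared in the paper can be sketched with scikit-learn: cost-sensitive learning via class weights versus random oversampling of the minority class before fitting. The data below are a synthetic stand-in for the permeability descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for an imbalanced two-class permeability dataset
# (the study used physicochemical descriptors of Caco-2 permeability).
rng = np.random.default_rng(1)
n_major, n_minor = 900, 150
X = np.vstack([rng.normal(0.0, 1.0, (n_major, 5)),
               rng.normal(1.2, 1.0, (n_minor, 5))])
y = np.array([0] * n_major + [1] * n_minor)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Strategy 1: cost-sensitive learning via class weights.
cost_sensitive = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)

# Strategy 2: random oversampling of the minority class before fitting.
minority = np.flatnonzero(y_tr == 1)
n_extra = (y_tr == 0).sum() - minority.size
extra = rng.choice(minority, size=n_extra, replace=True)
oversampled = SVC(kernel="rbf").fit(np.vstack([X_tr, X_tr[extra]]),
                                    np.concatenate([y_tr, y_tr[extra]]))

for name, clf in [("cost-sensitive", cost_sensitive), ("oversampled", oversampled)]:
    print(name)
    print(classification_report(y_te, clf.predict(X_te)))
```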
Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane
NASA Technical Reports Server (NTRS)
Syberg, J.; Koncsek, J. L.
1977-01-01
Test data were obtained with a 1/4-scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. The results indicate that a fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation.
Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao
2016-12-01
To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. © The Author(s) 2014.
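Partial verification bias is easy to demonstrate by simulation: if index-test positives are preferentially verified by the reference standard, naive accuracy estimates computed from verified subjects alone are distorted. A toy sketch (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n, prev, se, sp = 20_000, 0.2, 0.85, 0.90   # hypothetical true values

disease = rng.random(n) < prev
index_pos = np.where(disease, rng.random(n) < se, rng.random(n) < (1.0 - sp))

# Partial verification: every index-test positive gets the reference test,
# but only 10% of index-test negatives do.
verified = index_pos | (rng.random(n) < 0.10)

# Naive estimates restricted to verified subjects are biased:
naive_se = index_pos[verified & disease].mean()
naive_sp = (~index_pos)[verified & ~disease].mean()
full_se = index_pos[disease].mean()
full_sp = (~index_pos)[~disease].mean()
print(f"sensitivity: naive {naive_se:.3f} vs full {full_se:.3f}")  # inflated
print(f"specificity: naive {naive_sp:.3f} vs full {full_sp:.3f}")  # deflated
```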
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, conservative geochemical behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and GOF values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
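At its core, a multivariate mixing model of this kind solves for non-negative source proportions that sum to one and reproduce the mixture's tracer signature. Below is a minimal sketch using non-negative least squares with a weighted sum-to-one constraint; the data are synthetic, and the authors' actual model and goodness-of-fit criterion may differ.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic source geochemistry: rows = tracer properties, cols = sources
# (a stand-in for XRF concentrations of the five source soils).
rng = np.random.default_rng(3)
A = rng.uniform(10.0, 100.0, (9, 5))

true_p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # known mixing proportions
b = A @ true_p                                      # mixture tracer signature

# Non-negativity via NNLS; sum-to-one enforced by a heavily weighted
# extra equation appended to the system.
w = 1e3
p_hat, _ = nnls(np.vstack([A, w * np.ones(5)]), np.append(b, w))

print("estimated proportions:", np.round(p_hat, 3))  # ~true_p (noise-free case)
```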
Using Data Mining for Wine Quality Assessment
NASA Astrophysics Data System (ADS)
Cortez, Paulo; Teixeira, Juliana; Cerdeira, António; Almeida, Fernando; Matos, Telmo; Reis, José
Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g., alcohol levels) and sensory (e.g., human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset is considered, with white vinho verde samples from the Minho region of Portugal. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how the physicochemical tests affect sensory preferences. Moreover, it can support the wine expert's evaluations and ultimately improve the production.
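The pairing of a fitted regression model with a one-dimensional sensitivity sweep, varying each input through its domain while holding the others fixed, can be sketched as follows. Synthetic data stand in for the wine-quality inputs; the real dataset could be substituted where indicated.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the physicochemical inputs (the study used the
# vinho verde wine-quality data); replace X, y with the real data if available.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                 # e.g. alcohol, pH, sulphates, acidity
y = 3 + 4 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.2, 500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)

def sensitivity(model, X, j, n=50):
    """Sweep input j through its domain with the others at their median,
    and return the range of the predicted response."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), n)
    probe = np.tile(np.median(X, axis=0), (n, 1))
    probe[:, j] = grid
    pred = model.predict(probe)
    return pred.max() - pred.min()

for j in range(X.shape[1]):
    print(f"input {j}: response range {sensitivity(model, X, j):.2f}")
```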
Characteristics of Pressure Sensitive Paint Intrusiveness Effects on Aerodynamic Data
NASA Technical Reports Server (NTRS)
Amer, Tahani R.; Liu, Tianshu; Oglesby, Donald M.
2001-01-01
One effect of using pressure sensitive paint (PSP) is its potential intrusiveness to the aerodynamic characteristics of the model. The paint thickness and roughness may affect the pressure distribution and, therefore, the forces and moments on the wind tunnel model. A study of these potential intrusive effects was carried out at NASA Langley Research Center, where a series of wind tunnel tests were conducted using the Modern Design of Experiments (MDOE) test approach. The PSP effects on the integrated forces were measured on two different models at different test conditions in both the Low Turbulence Pressure Tunnel (LTPT) and the Unitary Plan Wind Tunnel (UPWT) at Langley. The paint effect was found to be very small over a range of Reynolds numbers, Mach numbers and angles of attack. This is due to the very low surface roughness of the painted surface. The surface roughness after applying the NASA Langley developed PSP was lower than that of the clean wing. However, the PSP coating had a localized effect on the pressure taps, which led to an appreciable decrease in the pressure tap readings.
Suh, Mina; Troese, Matthew J; Hall, Debra A; Yasso, Blair; Yzenas, John J; Proctor, Debora M
2014-12-01
Electric arc furnace (EAF) steel slag is alkaline (pH of ~11-12) and contains metals, most notably chromium and nickel, and thus has the potential to cause dermal irritation and sensitization at sufficient dose. Dermal contact with EAF slag occurs in many occupational and environmental settings because it is used widely in construction and other industrial sectors for various applications, including asphaltic paving, road bases, construction fill, and feed for cement kilns. However, no published study has characterized the potential for dermal effects associated with EAF slag. To assess the dermal irritation, corrosion and sensitizing potential of EAF slag, in vitro and in vivo dermal toxicity assays were conducted based on the Organisation for Economic Co-operation and Development (OECD) guidelines. In vitro dermal corrosion and irritation testing (OECD 431 and 439) of EAF slag was conducted using the reconstructed human epidermal (RHE) tissue model. In vivo dermal toxicity and delayed contact sensitization testing (OECD 404 and 406) were conducted in rabbits and guinea pigs, respectively. EAF slag was not corrosive and not irritating in any tests. The results of the delayed contact dermal sensitization test indicate that EAF slag is not a dermal sensitizer. These findings are supported by the observation that metals in EAF slag occur as oxides of low solubility, with leachates that are well below toxicity characteristic leaching procedure (TCLP) limits. Based on these results and in accordance with the OECD guidelines, EAF slag is not considered a dermal sensitizer, corrosive or irritant. Copyright © 2014 John Wiley & Sons, Ltd.
Evaluation of electrolytic tilt sensors for measuring model angle of attack in wind tunnel tests
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
1992-01-01
The results of a laboratory evaluation of electrolytic tilt sensors as potential candidates for measuring model attitude or angle of attack in wind tunnel tests are presented. The performance of eight electrolytic tilt sensors was compared with that of typical servo accelerometers used for angle-of-attack measurements. The areas evaluated included linearity, hysteresis, repeatability, temperature characteristics, roll-on-pitch interaction, sensitivity to lead-wire resistance, step response time, and rectification. Among the sensors evaluated, the Spectron model RG-37 electrolytic tilt sensors had the highest overall accuracy in terms of linearity, hysteresis, repeatability, temperature sensitivity, and roll sensitivity. A comparison of the sensors with the servo accelerometers revealed that the accuracy of the RG-37 sensors was on average about one order of magnitude worse. Even though a comparison indicates that the cost of each tilt sensor is about one-third the cost of each servo accelerometer, the sensors are considered unsuitable for angle-of-attack measurements. However, the potential exists for other applications, such as wind tunnel wall-attitude measurements, where the errors resulting from roll interaction, vibration, and response time are smaller and the sensor temperature can be controlled.
Nguyen, L T H; Janssen, C R
2002-02-01
Embryo-larval toxicity tests with the African catfish (Clarias gariepinus) were performed to assess the comparative sensitivity of different endpoints. Measured test responses included embryo and larval survival, hatching, morphological development, and larval growth. Chromium, cadmium, copper, sodium pentachlorophenol (NaPCP), and malathion were used as model toxicants. Hatching was not affected by any of the chemicals tested, and embryo survival was only affected by chromium at > or = 36 mg/L. The growth of larvae was significantly reduced at > or = 11 mg/L Cr, > or = 0.63 mg/L Cu, > or = 0.03 mg/L NaPCP, and > or = 1.25 mg/L malathion. Morphological development of C. gariepinus was affected by all of the toxicants tested. Different types of morphological aberrations were observed, i.e., reduction of pigmentation in fish exposed to cadmium and copper, yolk sac edema in fish exposed to NaPCP and malathion, and deformation of the notochord in fish exposed to chromium and malathion. The sensitivity of the endpoints measured can be summarized as follows: growth > abnormality > larval survival > embryo survival > hatching.
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
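The eigenvalue part of such sensitivity computations typically rests on the standard first-order result dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ for mass-normalized modes. Below is a minimal sketch on a toy two-degree-of-freedom system, checked against finite differences; the paper's algorithm additionally updates eigenvector derivatives iteratively, which is not shown here.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF system; the parameter p scales the first spring's stiffness.
def K(p):
    return np.array([[p + 2.0, -2.0],
                     [-2.0,     2.0]])

M = np.diag([1.0, 2.0])
p0, dp = 5.0, 1e-6

# Generalized eigenproblem; eigh returns M-normalized mode shapes.
lam, phi = eigh(K(p0), M)

dK = (K(p0 + dp) - K(p0)) / dp   # exact here, since K is linear in p
# First-order eigenvalue sensitivity (dM/dp = 0 in this toy problem):
analytic = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(2)])

lam_pert, _ = eigh(K(p0 + dp), M)
finite_diff = (lam_pert - lam) / dp
print(analytic)       # should closely match...
print(finite_diff)    # ...the finite-difference estimate
```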
Testing the Model of Stigma Communication with a Factorial Experiment in an Interpersonal Context
Smith, Rachel A.
2014-01-01
Stigmas may regulate intergroup relationships; they may also influence interpersonal actions. This study extends the previous test of the model of stigma communication (MSC; Smith, 2012) with a factorial experiment in which the outcomes refer to a hypothetical acquaintance. New affective reactions, sympathy and frustration, and a new personality trait, disgust sensitivity, were explored. In addition, perceived severity and susceptibility of the infection were included as alternative mechanisms explaining the effects. The results (n = 318) showed that message content, message reactions (emotional and cognitive), and disgust sensitivity predicted intentions to regulate the infected acquaintance's interactions and lifestyle (R2 = .79) and participants' likelihood of telling others about the acquaintance's infection (R2 = .35). The findings generally provided support for the MSC and directions for improvement. PMID:25425853
Constitutive Modeling of the Dynamic-Tensile-Extrusion Test of PTFE
NASA Astrophysics Data System (ADS)
Resnyansky, Anatoly; Brown, Eric; Trujillo, Carl; Gray, George
2015-06-01
The use of polymers in defence, aerospace and industrial applications at extreme conditions makes prediction of the behaviour of these materials very important. Crucial to this is knowledge of the physical damage response in association with phase transformations during loading, and the ability to predict this via multi-phase simulation taking thermodynamic non-equilibrium and strain rate sensitivity into account. The current work analyses Dynamic-Tensile-Extrusion (DTE) experiments on polytetrafluoroethylene (PTFE). In particular, the phase transition during loading with subsequent tension is analysed using a two-phase rate-sensitive material model implemented in the CTH hydrocode, and the calculations are compared with experimental high-speed photography. The damage patterns and their link with the change of loading modes are analysed numerically and are correlated with the test observations.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. The fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. The results demonstrate that, with the optimal sensors, the performance of the PEM fuel cell can be predicted well.
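Brute-force subset selection over a sensitivity matrix can be sketched in a few lines. The scoring criterion below (smallest singular value of the selected rows, favoring well-conditioned parameter estimation) is an assumption for illustration, not necessarily the paper's metric.

```python
import numpy as np
from itertools import combinations

# Sensitivity matrix S: rows = candidate sensors, cols = health parameters
# (synthetic stand-in; in practice S comes from the fuel cell model).
rng = np.random.default_rng(5)
n_sensors, n_params = 8, 3
S = rng.normal(size=(n_sensors, n_params))

def score(rows):
    """Smallest singular value of the selected rows: larger values mean
    all health parameters remain observable and well conditioned."""
    return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

# Exhaustive brute-force search over all subsets of a fixed size.
best = max(combinations(range(n_sensors), n_params), key=score)
print("best sensor subset:", best, "score:", round(float(score(best)), 3))
```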
Lotz, Thomas F; Chase, J Geoffrey; McAuley, Kirsten A; Shaw, Geoffrey M; Docherty, Paul D; Berkeley, Juliet E; Williams, Sheila M; Hann, Christopher E; Mann, Jim I
2010-11-01
Insulin resistance is a significant risk factor in the pathogenesis of type 2 diabetes. This article presents pilot study results of the dynamic insulin sensitivity and secretion test (DISST), a high-resolution, low-intensity test to diagnose insulin sensitivity (IS) and characterize pancreatic insulin secretion in response to a (small) glucose challenge. This pilot study examines the effect of glucose and insulin dose on the DISST, and tests its repeatability. DISST tests were performed on 16 subjects randomly allocated to low (5 g glucose, 0.5 U insulin), medium (10 g glucose, 1 U insulin) and high dose (20 g glucose, 2 U insulin) protocols. Two or three tests were performed on each subject a few days apart. Average variability in IS between low and medium dose was 10.3% (p=.50) and between medium and high dose 6.0% (p=.87). Geometric mean variability between tests was 6.0% (multiplicative standard deviation (MSD) 4.9%). Geometric mean variability in first phase endogenous insulin response was 6.8% (MSD 2.2%). Results were most consistent in subjects with low IS. These findings suggest that DISST may be an easily performed dynamic test to quantify IS with high resolution, especially among those with reduced IS. © 2010 Diabetes Technology Society.
Tong, Xiuli; He, Xinjie; Deacon, S Hélène
2017-02-01
Languages differ considerably in how they use prosodic features, or variations in pitch, duration, and intensity, to distinguish one word from another. Prosodic features include lexical tone in Chinese and lexical stress in English. Recent cross-sectional studies show a surprising result that Mandarin Chinese tone sensitivity is related to Mandarin-English bilingual children's English word reading. This study explores the mechanism underlying this relation by testing two explanations of these effects: the prosodic hypothesis and segmental phonological awareness transfer. We administered multiple measures of Cantonese tone sensitivity, English stress sensitivity, segmental phonological awareness in Cantonese and English, nonverbal ability, and English word reading to 123 Cantonese-English bilingual children ages 7 and 8 years. Structural equation modeling revealed a longitudinal prediction of Cantonese tone sensitivity to English word reading between 8 and 9 years of age. This relation was realized through two parallel routes. In one, Cantonese tone sensitivity predicted English stress sensitivity, and English stress sensitivity, in turn, significantly predicted English word reading, as postulated by the prosodic hypothesis. In the second, Cantonese tone sensitivity predicted English word reading through the transfer of segmental phonological awareness between Cantonese and English, as predicted by segmental phonological transfer. These results support a unified model of phonological transfer, emphasizing the role of tone in English word reading for Cantonese-English bilingual children.
Jang, Won-hee; Jung, Kyoung-mi; Yang, Hye-ri; Lee, Miri; Jung, Haeng-Sun; Lee, Su-Hyon; Park, Miyoung; Lim, Kyung-Min
2015-01-01
The eye irritation potential of drug candidates or pharmaceutical ingredients should be evaluated if there is a possibility of ocular exposure. Traditionally, ocular irritation has been evaluated by the rabbit Draize test. However, rabbit eyes are more sensitive to irritants than human eyes; therefore, a substantial level of false positives is unavoidable. To resolve this species difference, several three-dimensional human corneal epithelial (HCE) models have been developed as alternative eye irritation test methods. Recently, we introduced a new HCE model, MCTT HCE™, which is reconstructed with non-transformed human corneal cells from limbal tissues. Here, we examined whether MCTT HCE™ can be employed to evaluate the eye irritation potential of solid substances. Through optimization of the washing method and exposure time, the treatment time was established as 10 min, and the washing procedure was set as 4 washes with 10 mL of PBS followed by shaking in 30 mL of PBS in a beaker. With the established eye irritation test protocol, 11 solid substances (5 non-irritants, 6 irritants) were evaluated, which demonstrated an excellent predictive capacity (100% accuracy, 100% specificity and 100% sensitivity). We also compared the performance of our test method with rabbit Draize test results and with an in vitro cytotoxicity test using 2D human corneal epithelial cell lines. PMID:26157556
George, Steven Z; Wittmer, Virgil T; Fillingim, Roger B; Robinson, Michael E
2006-03-01
Quantitative sensory testing has demonstrated a promising link between experimentally determined pain sensitivity and clinical pain. However, previous studies of quantitative sensory testing have not routinely considered the important influence of psychological factors on clinical pain. This study investigated whether measures of thermal pain sensitivity (temporal summation, first pulse response, and tolerance) contributed to clinical pain reports for patients with chronic low back pain, after controlling for depression or fear-avoidance beliefs about work. Consecutive patients (n=27) with chronic low back pain were recruited from an interdisciplinary pain rehabilitation program in Jacksonville, FL. Patients completed validated self-report questionnaires for depression, fear-avoidance beliefs, clinical pain intensity, and clinical pain-related disability. Patients also underwent quantitative sensory testing from previously described protocols to determine thermal pain sensitivity (temporal summation, first pulse response, and tolerance). Hierarchical regression models investigated the contribution of depression and thermal pain sensitivity to clinical pain intensity, and of fear-avoidance beliefs and thermal pain sensitivity to clinical pain-related disability. None of the measures of thermal pain sensitivity contributed to clinical pain intensity after controlling for depression. Temporal summation of evoked thermal pain significantly contributed to clinical pain disability after controlling for fear-avoidance beliefs about work. Fear-avoidance beliefs about work and temporal summation of evoked thermal pain significantly influenced pain-related disability, and these factors should be considered as potential outcome predictors for patients with work-related low back pain. This study supported the neuromatrix theory of pain for patients with chronic low back pain (CLBP), as a cognitive-evaluative factor contributed to pain perception, and cognitive-evaluative and sensory-discriminative factors uniquely contributed to an action program in response to chronic pain. Future research will determine whether a predictive model consisting of fear-avoidance beliefs and temporal summation of evoked thermal pain has predictive validity for determining clinical outcome in rehabilitation or vocational settings.
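The hierarchical regression logic here, entering the psychological covariate first and then asking what the quantitative sensory measure adds, can be sketched with statsmodels on synthetic data (n matches the study's sample size; variable names and effect sizes are hypothetical).

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data for the disability model.
rng = np.random.default_rng(9)
n = 27
fear_avoidance = rng.normal(size=n)
temporal_summation = 0.3 * fear_avoidance + rng.normal(size=n)
disability = 0.5 * fear_avoidance + 0.4 * temporal_summation + rng.normal(size=n)

# Step 1: psychological factor alone.
m1 = sm.OLS(disability, sm.add_constant(fear_avoidance)).fit()
# Step 2: add the QST measure; delta R^2 is its unique contribution.
X2 = sm.add_constant(np.column_stack([fear_avoidance, temporal_summation]))
m2 = sm.OLS(disability, X2).fit()

print(f"R2 step 1 = {m1.rsquared:.3f}, step 2 = {m2.rsquared:.3f}, "
      f"delta R2 = {m2.rsquared - m1.rsquared:.3f}")
```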
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
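A truncated Karhunen-Loève expansion of a spatially correlated parameter field can be sketched by eigendecomposing a covariance kernel on a grid. The exponential kernel and the Young's-modulus example below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Discretize the domain and build an exponential covariance kernel.
x = np.linspace(0.0, 1.0, 200)
sigma, ell = 0.10, 0.30                       # field std dev, correlation length
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Discrete KL modes = eigenpairs of the covariance matrix (uniform-grid
# approximation), sorted by decreasing eigenvalue.
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncated KL realization of, e.g., a spatially varying Young's modulus:
# E(x) = E0 * (1 + sum_k sqrt(lambda_k) * xi_k * f_k(x)), xi_k ~ N(0, 1).
n_terms = 10
rng = np.random.default_rng(2)
xi = rng.standard_normal(n_terms)
E0 = 210e9
field = E0 * (1.0 + vecs[:, :n_terms] @ (np.sqrt(vals[:n_terms]) * xi))

print("variance captured by 10 terms:", round(vals[:n_terms].sum() / vals.sum(), 3))
```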
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
Stress in adolescence and drugs of abuse in rodent models: Role of dopamine, CRF, and HPA axis
Burke, Andrew R.; Miczek, Klaus A.
2014-01-01
Rationale: Research on adolescence and drug abuse increased substantially in the past decade. However, drug-addiction related behaviors following stressful experiences during adolescence are less studied. We focus on rodent models of adolescent stress cross-sensitization to drugs of abuse. Objectives: Review the ontogeny of behavior, dopamine, corticotropin-releasing factor (CRF), and the hypothalamic pituitary adrenal (HPA) axis in adolescent rodents. We evaluate evidence that stressful experiences during adolescence engender hypersensitivity to drugs of abuse and offer potential neural mechanisms. Results and Conclusions: Much evidence suggests that final maturation of behavior, dopamine systems, and HPA axis occurs during adolescence. Stress during adolescence increases amphetamine- and ethanol-stimulated locomotion, preference, and self-administration under many conditions. The influence of adolescent stress on subsequent cocaine- and nicotine-stimulated locomotion and preference is less clear. The type of adolescent stress, temporal interval between stress and testing, species, sex, and the drug tested are key methodological determinants for successful cross-sensitization procedures. The sensitization of the mesolimbic dopamine system is proposed to underlie stress cross-sensitization to drugs of abuse in both adolescents and adults through modulation by CRF. Reduced levels of mesocortical dopamine appear to be a unique consequence of social stress during adolescence. Adolescent stress may reduce the final maturation of cortical dopamine through D2 dopamine receptor regulation of dopamine synthesis or glucocorticoid-facilitated pruning of cortical dopamine fibers. Certain rodent models of adolescent adversity are useful for determining neural mechanisms underlying the cross-sensitization to drugs of abuse. PMID:24370534
Avonto, Cristina; Chittiboyina, Amar G; Rua, Diego; Khan, Ikhlas A
2015-12-01
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, 'HTS-DCYA assay', is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. Copyright © 2015 Elsevier Inc. All rights reserved.
The development of a 3D immunocompetent model of human skin.
Chau, David Y S; Johnson, Claire; MacNeil, Sheila; Haycock, John W; Ghaemmaghami, Amir M
2013-09-01
As the first line of defence, skin is regularly exposed to a variety of biological, physical and chemical insults. Therefore, determining the skin sensitization potential of new chemicals is of paramount importance from the safety assessment and regulatory point of view. Given the questionable biological relevance of animal models to humans, as well as ethical and regulatory pressure to limit or stop the use of animal models for safety testing, there is a need for simple yet physiologically relevant models of human skin. Herein, we describe the construction of a novel immunocompetent 3D human skin model comprising dendritic cells co-cultured with keratinocytes and fibroblasts. This model culture system is simple to assemble with readily available components and, importantly, can be separated into its constitutive individual layers to allow further insight into cell-cell interactions and detailed studies of the mechanisms of skin sensitization. In this study, using non-degradable microfibre scaffolds and a cell-laden gel, we have engineered a multilayer 3D immunocompetent model comprised of keratinocytes and fibroblasts that are interspersed with dendritic cells. We have characterized this model using a combination of confocal microscopy, immunohistochemistry and scanning electron microscopy, and have shown differentiation of the epidermal layer and formation of an epidermal barrier. Crucially, the immune cells in the model are able to migrate and remain responsive to stimulation with skin sensitizers even at low concentrations. We therefore suggest this new biologically relevant skin model will prove valuable in investigating the mechanisms of allergic contact dermatitis and other skin pathologies in humans. Once fully optimized, this model can also be used as a platform for testing the allergenic potential of new chemicals and drug leads.
Crack propagation and arrest in CFRP materials with strain softening regions
NASA Astrophysics Data System (ADS)
Dilligan, Matthew Anthony
Understanding the growth and arrest of cracks in composite materials is critical for their effective utilization in fatigue-sensitive and damage susceptible applications such as primary aircraft structures. Local tailoring of the laminate stack to provide crack arrest capacity intermediate to major structural components has been investigated and demonstrated since some of the earliest efforts in composite aerostructural design, but to date no rigorous model of the crack arrest mechanism has been developed to allow effective sizing of these features. To address this shortcoming, the previous work in the field is reviewed, with particular attention to the analysis methodologies proposed for similar arrest features. The damage and arrest processes active in such features are investigated, and various models of these processes are discussed and evaluated. Governing equations are derived based on a proposed mechanistic model of the crack arrest process. The derived governing equations are implemented in a numerical model, and a series of simulations are performed to ascertain the general characteristics of the proposed model and allow qualitative comparison to existing experimental results. The sensitivity of the model and the arrest process to various parameters is investigated, and preliminary conclusions regarding the optimal feature configuration are developed. To address deficiencies in the available material and experimental data, a series of coupon tests are developed and conducted covering a range of arrest zone configurations. Test results are discussed and analyzed, with a particular focus on identification of the proposed failure and arrest mechanisms. Utilizing the experimentally derived material properties, the tests are reproduced with both the developed numerical tool as well as a FEA-based implementation of the arrest model. Correlation between the simulated and experimental results is analyzed, and future avenues of investigation are identified. Utilizing the developed model, a sensitivity study is conducted to assess the current proposed arrest configuration. Optimum distribution and sizing of the arrest zones is investigated, and general design guidelines are developed.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
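As an illustration of the qualitative screening family, here is a minimal radial one-at-a-time variant of Morris elementary-effects screening (a simplification of the trajectory designs used by packages such as PSUADE); the test function is a hypothetical stand-in for an actual model run.

```python
import numpy as np

def model(x):
    """Hypothetical stand-in for one model run (e.g., a SAC-SMA error
    metric as a function of its tunable parameters)."""
    return x[0]**2 + 2.0 * x[1] + 0.5 * x[2] * x[3] + 0.01 * x[4]

n_params, n_repeats, delta = 5, 40, 0.2
rng = np.random.default_rng(11)
effects = [[] for _ in range(n_params)]

# Radial OAT design: from each random base point, perturb one parameter
# at a time and record the elementary effect.
for _ in range(n_repeats):
    x = rng.uniform(0.0, 1.0 - delta, n_params)
    y0 = model(x)
    for j in range(n_params):
        xj = x.copy()
        xj[j] += delta
        effects[j].append((model(xj) - y0) / delta)

# mu* (mean absolute elementary effect) ranks parameter importance.
for j, ee in enumerate(effects):
    print(f"param {j}: mu* = {np.mean(np.abs(ee)):.3f}")
```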
Tsyrlin, Vitaly A.; Galagudza, Michael M.; Kuzmenko, Nataly V.; Pliss, Michael G.; Rubanova, Nataly S.; Shcherbin, Yury I.
2013-01-01
Introduction: The present study tested the hypothesis that long-term effects of baroreceptor activation might contribute to the prevention of persistent arterial blood pressure (BP) increase in the rat model of renovascular hypertension (HTN). Methods: Repetitive arterial baroreflex (BR) testing was performed in normo- and hypertensive rats. The relationship between initial arterial BR sensitivity and severity of subsequently induced two-kidney one-clip (2K1C) renovascular HTN was studied in Wistar rats. Additionally, the time course of changes in systolic BP (SBP) and cardiac beat-to-beat (RR) interval was studied for 8 weeks after the induction of 2K1C renovascular HTN in the rats with and without sinoaortic denervation (SAD). In a separate experimental series, cervical sympathetic nerve activity (cSNA) was assessed in controls, 2K1C rats, WKY rats, and SHR. Results: The inverse correlation between arterial BR sensitivity and BP was observed in the hypertensive rats during repetitive arterial BR testing. The animals with greater initial arterial BR sensitivity developed lower BP values after renal artery clipping than those with lower initial arterial BR sensitivity. BP elevation during the first 8 weeks of renal artery clipping in 2K1C rats was associated with decreased sensitivity of arterial BR. Although SAD itself resulted only in greater BP variability but not in persistent BP rise, the subsequent renal artery clipping invariably resulted in the development of sustained HTN. The time to onset of HTN was found to be shorter in the rats with SAD than in those with intact baroreceptors. cSNA was significantly greater in the 2K1C rats than in controls. Conclusions: Arterial BR appears to be an important mechanism of long-term regulation of BP, and is believed to be involved in the prevention of BP rise in the rat model of renovascular HTN. PMID:23762254
Farris, Samantha G.; Leventhal, Adam M.; Schmidt, Norman B.; Zvolensky, Michael J.
2015-01-01
Objective: Anxiety sensitivity appears to be relevant in understanding the nature of emotional symptoms and disorders associated with smoking. Negative-reinforcement smoking expectancies and motives are implicated as core regulatory processes that may explain, in part, the anxiety sensitivity–smoking interrelations; however, these pathways have received little empirical attention. Method: Participants (N = 471) were adult treatment-seeking daily smokers assessed for a smoking-cessation trial who provided baseline data; 157 participants provided within-treatment (pre-cessation) data. Anxiety sensitivity was examined as a cross-sectional predictor of several baseline smoking processes (nicotine dependence, perceived barriers to cessation, severity of prior withdrawal-related quit problems) and pre-cessation processes including nicotine withdrawal and smoking urges (assessed during 3 weeks before the quit day). Baseline negative-reinforcement smoking expectancies and motives were tested as simultaneous mediators via parallel multiple mediator models. Results: Higher levels of anxiety sensitivity were related to higher levels of nicotine dependence, greater perceived barriers to smoking cessation, more severe withdrawal-related problems during prior quit attempts, and greater average withdrawal before the quit day; effects were indirectly explained by the combination of both mediators. Higher levels of anxiety sensitivity were not directly related to pre-cessation smoking urges but were indirectly related through the independent and combined effects of the mediators. Conclusions: These empirical findings bolster theoretical models of anxiety sensitivity and smoking and identify targets for nicotine dependence etiology research and cessation interventions. PMID:25785807
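As an illustration of the parallel multiple mediator logic used here (two mediators carrying the indirect effect of anxiety sensitivity on a smoking outcome), below is a minimal product-of-coefficients sketch with a percentile bootstrap; the data, variable names and effect sizes are simulated for illustration and do not reproduce the study's models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 471
as_score = rng.normal(size=n)                                  # predictor X (anxiety sensitivity)
m1 = 0.4 * as_score + rng.normal(size=n)                       # mediator 1 (e.g., expectancies)
m2 = 0.3 * as_score + rng.normal(size=n)                       # mediator 2 (e.g., motives)
y = 0.5 * m1 + 0.4 * m2 + 0.1 * as_score + rng.normal(size=n)  # outcome

def indirect_effects(x, m1, m2, y):
    """Product-of-coefficients indirect effects a1*b1 and a2*b2 via OLS."""
    X1 = np.column_stack([np.ones_like(x), x])
    a1 = np.linalg.lstsq(X1, m1, rcond=None)[0][1]             # X -> M1
    a2 = np.linalg.lstsq(X1, m2, rcond=None)[0][1]             # X -> M2
    Xy = np.column_stack([np.ones_like(x), x, m1, m2])
    _, _, b1, b2 = np.linalg.lstsq(Xy, y, rcond=None)[0]       # M1, M2 -> Y given X
    return a1 * b1, a2 * b2

boot = np.array([indirect_effects(as_score[idx], m1[idx], m2[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("bootstrap 95% CIs for the two indirect effects:", list(zip(lo, hi)))
```

An indirect effect is supported when its bootstrap interval excludes zero, which is the criterion typically used with parallel mediator models of this kind.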
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.
Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
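For readers unfamiliar with the overlap assumption being tested, here is a minimal sketch of total projected cloud cover under maximum-random overlap, using the standard Geleyn-Hollingsworth recursion; the layer cloud fractions are illustrative.

```python
import numpy as np

def total_cloud_cover_max_random(cf, eps=1e-9):
    """Total projected cloud cover for a column of layer cloud fractions (top to bottom).

    Adjacent cloudy layers overlap maximally; cloud blocks separated by
    clear air combine randomly.
    """
    c_prev, clear = 0.0, 1.0
    for c in cf:
        clear *= (1.0 - max(c, c_prev)) / (1.0 - min(c_prev, 1.0 - eps))
        c_prev = c
    return 1.0 - clear

layers = np.array([0.2, 0.3, 0.0, 0.4])        # cloud fraction per model layer
print(total_cloud_cover_max_random(layers))    # max-random overlap
print(1.0 - np.prod(1.0 - layers))             # purely random, for comparison
```

Applying this fraction-based recursion while treating the condensate inside each cloudy fraction as horizontally homogeneous is precisely the simplification the study identifies as a large error source for the simulated MISR, ISCCP and CloudSat diagnostics.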
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, and use global sensitivity analysis of the models with the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and into the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
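The skeleton of such a rate-constant model is a linear first-order system; a minimal single-organism sketch follows (numpy only; the rate constants and concentrations are placeholders, not the calibrated Frierfjorden values).

```python
import numpy as np

def simulate(c_water, c_prey, k_uw=50.0, k_ud=0.02, k_e=0.01, days=1000, dt=0.1):
    """Integrate dC/dt = k_uw*Cw + k_ud*Cprey - k_e*C by explicit Euler.

    k_uw: uptake from (truly dissolved) water, k_ud: dietary uptake,
    k_e: total first-order elimination. Units are arbitrary but consistent.
    """
    c = 0.0
    for _ in range(int(days / dt)):
        c += dt * (k_uw * c_water + k_ud * c_prey - k_e * c)
    return c

print("body burden:  ", simulate(c_water=1e-6, c_prey=1e-3))
print("steady state: ", (50.0 * 1e-6 + 0.02 * 1e-3) / 0.01)   # analytic check
```

The analytic steady state, (k_uw·Cw + k_ud·Cprey)/k_e, makes the sensitivity finding transparent: predicted body burdens scale directly with the truly dissolved water concentration, so errors there propagate one-for-one into the predictions.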
NASA Astrophysics Data System (ADS)
Shellito, Cindy J.; Sloan, Lisa C.
2006-02-01
Models that allow vegetation to respond to and interact with climate provide a unique method for addressing questions regarding feedbacks between the ecosystem and climate in pre-Quaternary time periods. In this paper, we consider how Dynamic Global Vegetation Models (DGVMs), which have been developed for simulations with present day climate, can be used for paleoclimate studies. We begin with a series of tests in the NCAR Land Surface Model (LSM)-DGVM with Eocene geography to examine (1) the effect of removing C4 grasses from the available plant functional types in the model; (2) model sensitivity to a change in soil texture; and (3) model sensitivity to a change in the value of pCO2 used in the photosynthetic rate equations. The tests were designed to highlight some of the challenges of using these models and prompt discussion of possible improvements. We discuss how lack of detail in model boundary conditions, uncertainties in the application of modern plant functional types to paleo-flora simulations, and inaccuracies in the model climatology used to drive the DGVM can affect interpretation of model results. However, we also review a number of DGVM features that can facilitate understanding of past climates and offer suggestions for improving paleo-DGVM studies.
Tosi, L L; Detsky, A S; Roye, D P; Morden, M L
1987-01-01
Using a decision analysis model, we estimated the savings that might be derived from a mass prenatal screening program aimed at detecting open neural tube defects (NTDs) in low-risk pregnancies. Our baseline analysis showed that screening v. no screening could be expected to save approximately $8 per pregnancy given a cost of $7.50 for the maternal serum alpha-fetoprotein (MSAFP) test and a cost of $42,507 for hospital and rehabilitation services for the first 10 years of life for a child with spina bifida. When a more liberal estimate of the costs of caring for such a child was used, the savings with the screening program were more substantial. We performed extensive sensitivity analyses, which showed that the savings were somewhat sensitive to the cost of the MSAFP test and highly sensitive to the specificity (but not the sensitivity) of the test. A screening program for NTDs in low-risk pregnancies may result in substantial savings in direct health care costs if the screening protocol is followed rigorously and efficiently. PMID:2433011
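The structure of such a screening decision analysis fits in a few lines of Python; apart from the $7.50 test cost and $42,507 care cost quoted above, every number below is an invented placeholder, and the function is a simplified stand-in for the paper's decision tree.

```python
def expected_cost_per_pregnancy(screen, prevalence, sensitivity, specificity,
                                test_cost, care_cost, followup_cost, averted_fraction):
    """Expected direct cost per pregnancy with or without MSAFP screening."""
    if not screen:
        return prevalence * care_cost
    tp = prevalence * sensitivity                 # detected affected pregnancies
    fp = (1 - prevalence) * (1 - specificity)     # false positives
    fn = prevalence * (1 - sensitivity)           # missed cases
    return (test_cost
            + (tp + fp) * followup_cost           # confirmatory work-up
            + fn * care_cost                      # missed cases incur full care cost
            + tp * care_cost * (1 - averted_fraction))

base = dict(prevalence=0.001, sensitivity=0.85, specificity=0.97,
            test_cost=7.50, care_cost=42507.0, followup_cost=200.0,
            averted_fraction=0.9)
no_screen = expected_cost_per_pregnancy(False, **base)
print(f"saving per pregnancy: ${no_screen - expected_cost_per_pregnancy(True, **base):.2f}")

# one-way sensitivity sweep on specificity, the driver the study flagged
for spec in (0.95, 0.97, 0.99):
    cost = expected_cost_per_pregnancy(True, **{**base, "specificity": spec})
    print(f"specificity {spec}: saving ${no_screen - cost:.2f}")
```

Sweeping specificity while holding everything else fixed reproduces the qualitative finding: with a rare condition, the false-positive work-up burden (hence specificity) dominates the savings, while sensitivity barely moves them.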
NASA Astrophysics Data System (ADS)
Muldoon, Gail; Jackson, Charles S.; Young, Duncan A.; Quartini, Enrica; Cavitte, Marie G. P.; Blankenship, Donald D.
2017-04-01
Information about the extent and dynamics of the West Antarctic Ice Sheet during past glaciations is preserved inside ice sheets themselves. Ice cores are capable of retrieving information about glacial history, but they are spatially sparse. Ice-penetrating radar, on the other hand, has been used to map large areas of the West Antarctic Ice Sheet and can be correlated to ice core chronologies. Englacial isochronous layers observed in ice-penetrating radar are the result of variations in ice composition, fabric, temperature and other factors. The shape of these isochronous surfaces is expected to encode information about past and present boundary conditions and ice dynamics. Dipping of englacial layers, for example, may reveal the presence of rapid ice flow through paleo ice streams or high geothermal heat flux. These layers therefore present a useful testbed for hypotheses about paleo ice sheet conditions. However, hypothesis testing requires careful consideration of the sensitivity of layer shape to the competing forces of ice sheet boundary conditions and ice dynamics over time. Controlled sensitivity tests are best completed using models; however, ice sheet models generally do not have the capability of simulating layers in the presence of realistic boundary conditions. As such, modeling 3D englacial layers for comparison to observations is difficult and requires determination of a 3D ice velocity field. We present a method of post-processing simulated 3D ice sheet velocities into englacial isochronous layers using an advection scheme. We then test the sensitivity of layer geometry to uncertain boundary conditions, including heterogeneous subglacial geothermal flux and bedrock topography. By identifying areas of the ice sheet strongly influenced by boundary conditions, it may be possible to isolate the signature of paleo ice dynamics in the West Antarctic ice sheet.
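As a toy version of the layer post-processing described above, the steady one-dimensional Nye column gives isochrone depths in closed form; a sketch follows (numpy; thickness and accumulation values are illustrative, and the real workflow advects tracers through simulated 3D velocity fields rather than using an analytic profile).

```python
import numpy as np

H = 3000.0    # ice thickness (m), illustrative
acc = 0.1     # accumulation rate (m ice / yr), illustrative

def nye_age(z):
    """Age at height z above the bed for the Nye profile w(z) = -acc * z / H."""
    return (H / acc) * np.log(H / z)

# isochrone depths for given ages, from the analytic inverse z = H * exp(-acc*age/H)
ages = np.array([5e3, 10e3, 20e3, 50e3])       # years
z = H * np.exp(-acc * ages / H)
print("isochrone depths (m below surface):", np.round(H - z, 1))
```

In this idealized column, layer depth depends only on accumulation and thickness; the study's point is that in 3D, basal melt from geothermal flux, bed topography and flow history all perturb these depths, which is what makes layer geometry a usable fingerprint.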
Mwanza, Jean-Claude; Warren, Joshua L; Hochberg, Jessica T; Budenz, Donald L; Chang, Robert T; Ramulu, Pradeep Y
2015-01-01
To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. One hundred ten normal and 114 glaucomatous subjects were tested with FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, number of missed points of FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike's information criterion (AIC), and prediction confidence interval lengths. For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT×NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single-variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and interaction GDx-TSNIT×NAP-FDT consistently provided better discriminating abilities for detecting early, moderate, and severe glaucoma than the best single-variable models. The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT×NAP-FDT provides the best glaucoma prediction compared with all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared with using GDx or FDT alone.
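A minimal sketch of the model-combination idea (logistic regression with an interaction term, compared by AIC and AUC) on simulated data follows; it assumes statsmodels and scikit-learn are available and does not reproduce the study's fitted models or data.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = np.r_[np.zeros(110), np.ones(114)]           # 110 normal, 114 glaucoma (as above)
n = y.size
tsnit = 55 - 10 * y + rng.normal(0, 6, n)        # thinner average RNFL in glaucoma (invented)
nap_fdt = rng.poisson(3, n) + 4 * y              # more abnormal FDT points in glaucoma (invented)

def fit(X):
    """Return (AIC, AUC) for a logistic model with the given predictors."""
    m = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    return m.aic, roc_auc_score(y, m.predict(sm.add_constant(X)))

print("GDx-TSNIT alone:          AIC=%.1f AUC=%.3f" % fit(tsnit))
print("TSNIT + NAP-FDT + inter.: AIC=%.1f AUC=%.3f"
      % fit(np.column_stack([tsnit, nap_fdt, tsnit * nap_fdt])))
```

A lower AIC with a higher AUC for the three-term model is the pattern the study reports when the two instruments are combined.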
Quantifying the effect of side branches in endothelial shear stress estimates
Giannopoulos, Andreas A.; Chatzizisis, Yiannis S.; Maurovich-Horvat, Pal; Antoniadis, Antonios P.; Hoffmann, Udo; Steigner, Michael L.; Rybicki, Frank J.; Mitsouras, Dimitrios
2016-01-01
Background and aims Low and high endothelial shear stress (ESS) is associated with coronary atherosclerosis progression and high-risk plaque features. Coronary ESS is currently assessed via computational fluid dynamic (CFD) simulation in the lumen geometry determined from invasive imaging such as intravascular ultrasound and optical coherence tomography. This process typically omits side branches of the target vessel in the CFD model as invasive imaging of those vessels is not clinically-indicated. The purpose of this study was to determine the extent to which this simplification affects the determination of those regions of the coronary endothelium subjected to pathologic ESS. Methods We determined the diagnostic accuracy of ESS profiling without side branches to detect pathologic ESS in the major coronary arteries of 5 hearts imaged ex vivo with CT angiography. ESS of the three major coronary arteries was calculated both without (test model), and with (reference model) inclusion of all side branches >1.5 mm in diameter, using previously-validated CFD approaches. Diagnostic test characteristics (accuracy, sensitivity, specificity and negative and positive predictive value [NPV/PPV]) with respect to the reference model were assessed for both the entire length as well as only the proximal portion of each major coronary artery, where the majority of high-risk plaques occur. Results Using the model without side branches overall accuracy, sensitivity, specificity, NPV and PPV were 83.4%, 54.0%, 96%, 95.9% and 55.1%, respectively to detect low ESS, and 87.0%, 67.7%, 90.7%, 93.7% and 57.5%, respectively to detect high ESS. When considering only the proximal arteries, test characteristics differed for low and high ESS, with low sensitivity (67.7%) and high specificity (90.7%) to detect low ESS, and low sensitivity (44.7%) and high specificity (95.5%) to detect high ESS. Conclusions The exclusion of side branches in ESS vascular profiling studies greatly reduces the ability to detect regions of the major coronary arteries subjected to pathologic ESS. Single-conduit models can in general only be used to rule out pathologic ESS. PMID:27372207
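The reported test characteristics all derive from a 2x2 table of test-model vs reference-model classifications per endothelial region; a minimal helper follows (the counts are invented, chosen only so sensitivity, specificity and PPV roughly echo the low-ESS figures above).

```python
def diagnostics(tp, fp, fn, tn):
    """Standard diagnostic test characteristics from a 2x2 confusion table."""
    return {"accuracy": (tp + tn) / (tp + fp + fn + tn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

print(diagnostics(tp=54, fp=44, fn=46, tn=856))   # illustrative counts only
```

The asymmetry visible here (high NPV, modest PPV) is what underlies the conclusion that single-conduit models can rule out, but not reliably rule in, pathologic ESS.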
Henrich, Florian; Magerl, Walter; May, Arne
2014-01-01
This study tested a modified experimental model of heat-induced hyperalgesia, which improves the efficacy to induce primary and secondary hyperalgesia and the efficacy-to-safety ratio, reducing the risk of tissue damage seen in other heat pain models. Quantitative sensory testing was done in eighteen healthy volunteers before and after repetitive heat pain stimuli (60 stimuli of 48°C for 6 s) to assess the impact of repetitive heat on somatosensory function in conditioned skin (primary hyperalgesia area) and in adjacent skin (secondary hyperalgesia area) as compared to an unconditioned mirror image control site. Additionally, areas of flare and secondary hyperalgesia were mapped, and the time course of hyperalgesia was determined. After repetitive heat pain conditioning we found significant primary hyperalgesia to heat, and primary and secondary hyperalgesia to pinprick and to light touch (dynamic mechanical allodynia). Acetaminophen (800 mg) reduced pain to heat or pinpricks only marginally by 11% and 8%, respectively (n.s.), and had no effect on heat hyperalgesia. In contrast, the areas of flare (−31%) and in particular of secondary hyperalgesia (−59%) as well as the magnitude of hyperalgesia (−59%) were significantly reduced (all p<0.001). Thus, repetitive heat pain induces significant peripheral sensitization (primary hyperalgesia to heat) and central sensitization (punctate hyperalgesia and dynamic mechanical allodynia). These findings are relevant to further studies using this model of experimental heat pain, as it combines pronounced peripheral and central sensitization, making it a convenient model for combined pharmacological testing of analgesia and anti-hyperalgesia mechanisms related to thermal and mechanical input. PMID:24911787
Comparative effects of pH and Vision herbicide on two life stages of four anuran amphibian species.
Edginton, Andrea N; Sheridan, Patrick M; Stephenson, Gerald R; Thompson, Dean G; Boermans, Herman J
2004-04-01
Vision, a glyphosate-based herbicide containing a 15% (weight:weight) polyethoxylated tallow amine surfactant blend, and the concurrent factor of pH were tested to determine their interactive effects on early life-stage anurans. Ninety-six-hour laboratory static renewal studies, using the embryonic and larval life stages (Gosner 25) of Rana clamitans, R. pipiens, Bufo americanus, and Xenopus laevis, were performed under a central composite rotatable design. Mortality and the prevalence of malformations were modeled using generalized linear models with a profile deviance approach for obtaining confidence intervals. There was a significant (p < 0.05) interaction of pH with Vision concentration in all eight models, such that the toxicity of Vision was amplified by elevated pH. The surfactant is the major toxic component of Vision and is hypothesized, in this study, to be the source of the pH interaction. Larvae of B. americanus and R. clamitans were 1.5 to 3.8 times more sensitive than their corresponding embryos, whereas X. laevis and R. pipiens larvae were 6.8 to 8.9 times more sensitive. At pH values above 7.5, the Vision concentrations expected to kill 50% of the test larvae in 96-h (96-h lethal concentration [LC50]) were predicted to be below the expected environmental concentration (EEC) as calculated by Canadian regulatory authorities. The EEC value represents a worst-case scenario for aerial Vision application and is calculated assuming an application of the maximum label rate (2.1 kg acid equivalents [a.e.]/ha) into a pond 15 cm in depth. The EEC of 1.4 mg a.e./L (4.5 mg/L Vision) was not exceeded by 96-h LC50 values for the embryo test. The larvae of the four species were comparable in sensitivity. Field studies should be completed using the more sensitive larval life stage to test for Vision toxicity at actual environmental concentrations.
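A minimal sketch of the dose-response side of such an analysis (a binomial GLM on the logit scale and the LC50 it implies) is below, assuming statsmodels is available; the concentrations and mortality counts are invented, not the study's data, and the full analysis additionally models the pH-by-concentration interaction.

```python
import numpy as np
import statsmodels.api as sm

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # mg a.e./L (illustrative)
n = np.full(5, 30)                               # larvae per treatment (illustrative)
dead = np.array([2, 6, 14, 24, 29])              # 96-h mortality (illustrative)

X = sm.add_constant(np.log10(conc))
fit = sm.GLM(np.column_stack([dead, n - dead]), X,
             family=sm.families.Binomial()).fit()  # logit link by default
b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)                          # concentration where predicted mortality = 50%
print(f"96-h LC50 ~ {lc50:.2f} mg a.e./L")
```

Comparing such an LC50 against the 1.4 mg a.e./L expected environmental concentration is the kind of check that drives the paper's conclusion about high-pH conditions.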
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
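A minimal sketch of the propensity-score scheme on simulated data follows (assuming numpy and scikit-learn): estimate the verification propensity separately by test result, group verified subjects into propensity quintiles, and rebuild sensitivity from stratum-level disease rates applied to everyone. It illustrates the idea rather than reproducing the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)                                  # covariate
d = rng.binomial(1, 1 / (1 + np.exp(-(x - 1))))         # true disease status
t = rng.binomial(1, np.where(d == 1, 0.85, 0.10))       # test result (true sens 0.85)
v = rng.binomial(1, 1 / (1 + np.exp(-(2 * t + 0.5 * x - 1))))  # verification depends on t, x

corrected = {}
for tval in (0, 1):
    idx = t == tval
    ps = LogisticRegression().fit(x[idx, None], v[idx]).predict_proba(x[idx, None])[:, 1]
    strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
    # stratum-specific disease rate among the verified, weighted by stratum size
    corrected[tval] = sum(((strata == s).sum() / idx.sum()) *
                          d[idx][(strata == s) & (v[idx] == 1)].mean()
                          for s in range(5))

p_t1 = (t == 1).mean()
prev = corrected[1] * p_t1 + corrected[0] * (1 - p_t1)
sens = corrected[1] * p_t1 / prev                        # P(T=1 | D=1) by Bayes
naive = d[(t == 1) & (v == 1)].sum() / d[v == 1].sum()   # verified-only estimate
print(f"true 0.85 | corrected {sens:.3f} | naive {naive:.3f}")
```

Because verification is more likely after a positive test, the naive verified-only estimate overstates sensitivity, while the propensity-stratified estimate recovers a value near the truth.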
Ochodo, Eleanor A; Gopalakrishna, Gowri; Spek, Bea; Reitsma, Johannes B; van Lieshout, Lisette; Polman, Katja; Lamberton, Poppy; Bossuyt, Patrick Mm; Leeflang, Mariska Mg
2015-01-01
Background Point-of-care (POC) tests for diagnosing schistosomiasis include tests based on circulating antigen detection and urine reagent strip tests. If they had sufficient diagnostic accuracy they could replace conventional microscopy as they provide a quicker answer and are easier to use. Objectives To summarise the diagnostic accuracy of: a) urine reagent strip tests in detecting active Schistosoma haematobium infection, with microscopy as the reference standard; and b) circulating antigen tests for detecting active Schistosoma infection in geographical regions endemic for Schistosoma mansoni or S. haematobium or both, with microscopy as the reference standard. Search methods We searched the electronic databases MEDLINE, EMBASE, BIOSIS, MEDION, and Health Technology Assessment (HTA) without language restriction up to 30 June 2014. Selection criteria We included studies that used microscopy as the reference standard: for S. haematobium, microscopy of urine prepared by filtration, centrifugation, or sedimentation methods; and for S. mansoni, microscopy of stool by Kato-Katz thick smear. We included studies on participants residing in endemic areas only. Data collection and analysis Two review authors independently extracted data, assessed quality of the data using QUADAS-2, and performed meta-analysis where appropriate. Using the variability of test thresholds, we used the hierarchical summary receiver operating characteristic (HSROC) model for all eligible tests (except the circulating cathodic antigen (CCA) POC for S. mansoni, where the bivariate random-effects model was more appropriate). We investigated heterogeneity, and carried out indirect comparisons where data were sufficient. Results for sensitivity and specificity are presented as percentages with 95% confidence intervals (CI). Main results We included 90 studies; 88 from field settings in Africa. The median S. haematobium infection prevalence was 41% (range 1% to 89%) and the median S. mansoni prevalence was 36% (range 8% to 95%). Study design and conduct were poorly reported against current standards. Tests for S. haematobium Urine reagent test strips versus microscopy Compared to microscopy, the detection of microhaematuria on test strips had the highest sensitivity and specificity (sensitivity 75%, 95% CI 71% to 79%; specificity 87%, 95% CI 84% to 90%; 74 studies, 102,447 participants). For proteinuria, sensitivity was 61% and specificity was 82% (82,113 participants); and for leukocyturia, sensitivity was 58% and specificity 61% (1532 participants). However, the overall test accuracy of the urine reagent strips for microhaematuria and proteinuria did not differ when we compared separate populations (P = 0.25), or when direct comparisons within the same individuals were performed (paired studies; P = 0.21). When tests were evaluated against the higher quality reference standard (when multiple samples were analysed), sensitivity was marginally lower for microhaematuria (71% vs 75%) and for proteinuria (49% vs 61%). The specificity of these tests was comparable. Antigen assay Compared to microscopy, the CCA test showed considerable heterogeneity; the meta-analytic sensitivity estimate was 39%, 95% CI 6% to 73%; specificity 78%, 95% CI 55% to 100% (four studies, 901 participants). Tests for S. mansoni Compared to microscopy, the CCA test meta-analytic estimates for detecting S. mansoni at a single threshold of trace positive were: sensitivity 89% (95% CI 86% to 92%); and specificity 55% (95% CI 46% to 65%; 15 studies, 6091 participants). Against a higher quality reference standard, the sensitivity results were comparable (89% vs 88%) but specificity was higher (66% vs 55%). For the CAA test, sensitivity ranged from 47% to 94%, and specificity from 8% to 100% (4 studies, 1583 participants). Authors' conclusions Among the evaluated tests for S. haematobium infection, microhaematuria correctly detected the largest proportions of infections and non-infections identified by microscopy. The CCA POC test for S. mansoni detects a very large proportion of infections identified by microscopy, but it misclassifies a large proportion of microscopy negatives as positives in endemic areas with a moderate to high prevalence of infection, possibly because the test is potentially more sensitive than microscopy. Plain Language Summary How well do point-of-care tests detect Schistosoma infections in people living in endemic areas? Schistosomiasis, also known as bilharzia, is a parasitic disease common in the tropics and subtropics. Point-of-care tests and urine reagent strip tests are quicker and easier to use than microscopy. We estimated how well these point-of-care tests detect schistosomiasis infections compared with microscopy. We searched for studies published in any language up to 30 June 2014, and we considered each study's risk of providing biased results. What do the results say? We included 90 studies involving almost 200,000 people, with 88 of these studies carried out in Africa in field settings. Study design and conduct were poorly reported against current expectations. Based on our statistical model, we found: • Among the urine strips for detecting urinary schistosomiasis, the strips for detecting blood were better than those detecting protein or white cells (sensitivity and specificity for blood 75% and 87%; for protein 61% and 82%; and for white cells 58% and 61%, respectively). • For urinary schistosomiasis, the parasite antigen test performed worse (sensitivity 39% and specificity 78%) than urine strips for detecting blood. • For intestinal schistosomiasis, the parasite antigen urine test detected many infections identified by microscopy but wrongly labelled many uninfected people as sick (sensitivity 89% and specificity 55%). What are the consequences of using these tests? If we take 1000 people, of whom 410 have urinary schistosomiasis on microscopy testing, then using the strip detecting blood in the urine would misclassify 77 uninfected people as infected, who thus may receive unnecessary treatment; and it would wrongly classify 102 infected people as uninfected, who thus may not receive treatment. If we take 1000 people, of whom 360 have intestinal schistosomiasis on microscopy testing, then the antigen test would misclassify 288 uninfected people as infected. These people may be given unnecessary treatment. This test would also wrongly classify 40 infected people as uninfected, who thus may not receive treatment. Conclusion of review For urinary schistosomiasis, the urine strip for detecting blood misses some infected people and labels some non-infected people as having the condition, but it performs better than the protein or white cell tests. The parasite antigen test is not accurate. For intestinal schistosomiasis, the parasite antigen urine test can wrongly classify many uninfected people as infected. PMID:25758180
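The plain-language counts follow directly from prevalence, sensitivity and specificity; a quick arithmetic check of the urinary example:

```python
# microhaematuria strip: 1000 people, 410 infected, sensitivity 75%, specificity 87%
infected, uninfected = 410, 590
sens, spec = 0.75, 0.87
print("missed infections:", round(infected * (1 - sens)))    # ~102, as stated above
print("false positives:  ", round(uninfected * (1 - spec)))  # ~77, as stated above
```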
Differences in sensitivity to parenting depending on child temperament: A meta-analysis.
Slagt, Meike; Dubas, Judith Semon; Deković, Maja; van Aken, Marcel A G
2016-10-01
Several models of individual differences in environmental sensitivity postulate increased sensitivity of some individuals to either stressful (diathesis-stress), supportive (vantage sensitivity), or both environments (differential susceptibility). In this meta-analysis we examine whether children vary in sensitivity to parenting depending on their temperament, and if so, which model can best be used to describe this sensitivity pattern. We tested whether associations between negative parenting and negative or positive child adjustment as well as between positive parenting and positive or negative child adjustment would be stronger among children higher on putative sensitivity markers (difficult temperament, negative emotionality, surgency, and effortful control). Longitudinal studies with children up to 18 years (k = 105 samples from 84 studies, Nmean = 6,153) that reported on a parenting-by-temperament interaction predicting child adjustment were included. We found 235 independent effect sizes for associations between parenting and child adjustment. Results showed that children with a more difficult temperament (compared with those with a more easy temperament) were more vulnerable to negative parenting, but also profited more from positive parenting, supporting the differential susceptibility model. Differences in susceptibility were expressed in externalizing and internalizing problems and in social and cognitive competence. Support for differential susceptibility for negative emotionality was, however, only present when this trait was assessed during infancy. Surgency and effortful control did not consistently moderate associations between parenting and child adjustment, providing little support for differential susceptibility, diathesis-stress, or vantage sensitivity models. Finally, parenting-by-temperament interactions were more pronounced when parenting was assessed using observations compared to questionnaires. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Yao, Yanbo; Duan, Xiaoshuang; Luo, Jiangjiang; Liu, Tao
2017-11-01
The use of the van der Pauw (VDP) method for characterizing and evaluating the piezoresistive behavior of carbon nanomaterial-enabled piezoresistive sensors has not been systematically studied. By using single-wall carbon nanotube (SWCNT) thin films as a model system, herein we report a coupled electrical-mechanical experimental study in conjunction with a multiphysics finite element simulation as well as an analytic analysis to compare the two-probe and VDP testing configurations in evaluating the piezoresistive behavior of carbon nanomaterial-enabled piezoresistive sensors. The key features regarding the sample aspect ratio dependent piezoresistive sensitivity or gauge factor were identified for the VDP testing configuration. It was found that the VDP test configuration offers consistently higher piezoresistive sensitivity than the two-probe testing method.
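For context, here is a sketch of how sheet resistance is extracted from VDP measurements and converted to a gauge factor, assuming numpy and scipy are available; the resistance and strain values are invented, not the SWCNT film data from the study.

```python
import numpy as np
from scipy.optimize import brentq

def vdp_sheet_resistance(r_a, r_b):
    """Solve the van der Pauw relation exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for Rs."""
    f = lambda rs: np.exp(-np.pi * r_a / rs) + np.exp(-np.pi * r_b / rs) - 1.0
    return brentq(f, 1e-6 * max(r_a, r_b), 1e6 * max(r_a, r_b))

# hypothetical transresistances (V/I across two edge configurations) vs. strain
strain = np.array([0.000, 0.005, 0.010])
r_a = np.array([100.0, 101.5, 103.0])
r_b = np.array([120.0, 121.8, 123.6])
rs = np.array([vdp_sheet_resistance(a, b) for a, b in zip(r_a, r_b)])

gf = ((rs[1:] - rs[0]) / rs[0]) / strain[1:]   # gauge factor = (dR/R0) / strain
print("sheet resistances:", rs.round(2))
print("gauge factors:    ", gf.round(2))
```

A two-probe gauge factor is computed the same way from a single resistance reading; the study's point is that the VDP configuration yields a consistently larger (and aspect-ratio-dependent) value.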
Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance
NASA Astrophysics Data System (ADS)
Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola
2013-04-01
Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominating processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters indicate the controlling processes in the catchment during a given period. The second step was the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was identified as the model part with the highest potential for model improvement. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
Laboratory test of a novel structural model of anxiety sensitivity and panic vulnerability.
Bernstein, Amit; Zvolensky, Michael J; Schmidt, Norman B
2009-06-01
The current study evaluated a novel latent structural model of anxiety sensitivity (AS) in relation to panic vulnerability among a sample of young adults (N=216). AS was measured using the 16-item Anxiety Sensitivity Index (ASI; Reiss, Peterson, Gursky, & McNally, 1986), and panic vulnerability was indexed by panic attack responding to a single administration of a 4-minute, 10% CO2 challenge. As predicted, vulnerability for panic attack responding to biological challenge was associated with dichotomous individual differences between taxonic AS classes and continuous within-taxon class individual differences in AS physical concerns. Findings supported the taxonic-dimensional hypothesis of AS latent structure and panic vulnerability. These findings are discussed in terms of their theoretical and clinical implications.
Gustafsson, Hanna C; Cox, Martha J; Blair, Clancy
2012-02-01
The current study examined the relationship between intimate partner violence (IPV), maternal parenting behaviors, and child effortful control in a diverse sample of 705 families living in predominantly low-income, rural communities. Using structural equation modeling, the authors simultaneously tested whether observed sensitive parenting and/or harsh-intrusive parenting over the toddler years mediated the relationship between early IPV and later effortful control. Results suggest that parenting behaviors fully mediate this relationship. Although higher levels of IPV were associated with both higher levels of harsh-intrusive parenting and lower levels of sensitive supportive parenting, only sensitive supportive parenting was associated with later effortful control when both parenting indices were considered in the same model.
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Chao, Yi
1996-01-01
It has been demonstrated that current-generation global ocean general circulation models (OGCM) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The result of the study has demonstrated the sensitivity of the OGCM to the quality of wind forcing, as well as the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Zongrui; Stocks, George Malcolm
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
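The sensitivity at issue is visible already in the textbook Peierls-Nabarro estimate for an edge dislocation (a classical result, not the paper's new formulation):

```latex
% classic Peierls-Nabarro stress estimate for an edge dislocation
\sigma_p = \frac{2G}{1-\nu}\,\exp\!\left(-\frac{2\pi d}{(1-\nu)\,b}\right)
```

Here G is the shear modulus, ν the Poisson ratio, d the interplanar spacing of the glide plane and b the Burgers vector magnitude. Because the ratio d/b enters through an exponential, modest changes in the inputs translate into order-of-magnitude changes in the predicted Peierls stress, which is the amplification the new formulation aims to tame.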
Norris Reinero, Carol R; Decile, Kendra C; Berghaus, Roy D; Williams, Kurt J; Leutenegger, Christian M; Walby, William F; Schelegle, Edward S; Hyde, Dallas M; Gershwin, Laurel J
2004-10-01
Animal models are used to mimic human asthma; however, not all models replicate the major characteristics of the human disease. Spontaneous development of asthma with hallmark features similar to humans has been documented to occur with relative frequency in only one animal species, the cat. We hypothesized that we could develop an experimental model of feline asthma using clinically relevant aeroallergens identified from cases of naturally developing feline asthma, and characterize immunologic, physiologic, and pathologic changes over 1 year. House dust mite (HDMA) and Bermuda grass (BGA) allergen were selected by screening 10 privately owned pet cats with spontaneous asthma using a serum allergen-specific IgE ELISA. Parenteral sensitization and aerosol challenges were used to replicate the naturally developing disease in research cats. The asthmatic phenotype was characterized using intradermal skin testing, serum allergen-specific IgE ELISA, serum and bronchoalveolar lavage fluid (BALF) IgG and IgA ELISAs, airway hyperresponsiveness testing, BALF cytology, cytokine profiles using TaqMan PCR, and histopathologic evaluation. Sensitization with HDMA or BGA in cats led to allergen-specific IgE production, allergen-specific serum and BALF IgG and IgA production, airway hyperreactivity, airway eosinophilia, an acute T helper 2 cytokine profile in peripheral blood mononuclear cells and BALF cells, and histologic evidence of airway remodeling. Using clinically relevant aeroallergens to sensitize and challenge the cat provides an additional animal model to study the immunopathophysiologic mechanisms of allergic asthma. Chronic exposure to allergen in the cat leads to a variety of immunologic, physiologic, and pathologic changes that mimic the features seen in human asthma.
NASA Technical Reports Server (NTRS)
Shirai, T.; Ishizawa, M.; Zhuravlev, R.; Ganshin, A.; Belikov, D.; Saito, M.; Oda, T.; Valsala, V.; Gomez-Pelaez, A. J.; Langenfelds, R.;
2017-01-01
We present an assimilation system for atmospheric carbon dioxide (CO2) using a Global Eulerian-Lagrangian Coupled Atmospheric model (GELCA), and demonstrate its capability to capture the observed atmospheric CO2 mixing ratios and to estimate CO2 fluxes. With the efficient data handling scheme in GELCA, our system assimilates non-smoothed CO2 data from observational data products such as the Observation Package (ObsPack) data products as constraints on surface fluxes. We conducted sensitivity tests to examine the impact of the site selections and the prior uncertainty settings of observations on the inversion results. For these sensitivity tests, we made five different site-data selections from the ObsPack product. In all cases, the time series of the global net CO2 flux to the atmosphere stayed close to values calculated from the growth rate of the observed global mean atmospheric CO2 mixing ratio. At regional scales, estimated seasonal CO2 fluxes were altered, depending on the CO2 data selected for assimilation. Uncertainty reductions (URs) were determined at the regional scale and compared among cases. As measures of the model-data mismatch, we used the model-data bias, root-mean-square error, and the linear correlation. For most observation sites, the model-data mismatch was reasonably small. Regarding regional flux estimates, tropical Asia was one of the regions that showed a significant impact from the observation network settings. We found that the surface fluxes in tropical Asia were the most sensitive to the use of aircraft measurements over the Pacific, and the seasonal cycle agreed better with the results of bottom-up studies when the aircraft measurements were assimilated. These results confirm the importance of these aircraft observations, especially for constraining surface fluxes in the tropics.
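The update at the core of such an inversion, and the uncertainty reduction (UR) metric reported above, can be sketched in a few lines of linear algebra (numpy; all matrices below are random stand-ins for real transport operators and covariance inputs, not GELCA quantities).

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_flux = 40, 10
H = rng.normal(size=(n_obs, n_flux))       # Jacobian mapping fluxes to mixing ratios
S_prior = np.eye(n_flux) * 0.5             # prior flux error covariance
R = np.eye(n_obs) * 0.2                    # model-data mismatch covariance
s_prior = np.zeros(n_flux)                 # prior regional fluxes
y = rng.normal(size=n_obs)                 # observations (stand-in)

K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + R)   # gain matrix
s_post = s_prior + K @ (y - H @ s_prior)                   # posterior fluxes
S_post = (np.eye(n_flux) - K @ H) @ S_prior                # posterior covariance
ur = 1.0 - np.sqrt(np.diag(S_post)) / np.sqrt(np.diag(S_prior))
print("uncertainty reduction per region:", np.round(ur, 2))
```

Regions whose fluxes are well constrained by the selected observations show URs near one, which is how the impact of adding, say, Pacific aircraft profiles on tropical Asian fluxes is quantified.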
Husain, Shahid; Kwak, Eun Jeong; Obman, Asia; Wagener, Marilyn M; Kusne, Shimon; Stout, Janet E; McCurry, Kenneth R; Singh, Nina
2004-05-01
The clinical utility of the Platelia™ Aspergillus galactomannan antigen test for the early diagnosis of invasive aspergillosis was prospectively assessed in 70 consecutive lung transplant recipients. Sera were collected twice weekly and tested for galactomannan. Invasive aspergillosis was documented in 17.1% (12/70) of the patients. Using the generalized estimating equation model, at the cutoff value of ≥ 0.5, the sensitivity of the test was 30%, specificity 93%, with positive and negative likelihood ratios of 4.2 and 0.75, respectively. Increasing the cutoff value to ≥ 0.66 yielded a sensitivity of 30%, specificity of 95%, and positive and negative likelihood ratios of 5.5 and 0.74. A total of 14 patients had false-positive tests, including nine who had cystic fibrosis or chronic obstructive pulmonary disease. False-positive tests occurred within 3 days of transplantation in 43% (6/14) of the patients, and within 7 days in 64% (9/14). Thus, the test demonstrated excellent specificity, but a low sensitivity for the diagnosis of aspergillosis in this patient population. Patients with cystic fibrosis or chronic obstructive pulmonary disease may transiently have a positive test in the early post-transplant period.
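The reported likelihood ratios follow from sensitivity and specificity; a quick check (small differences from the published 4.2 and 5.5 reflect rounding of the underlying estimates):

```python
def likelihood_ratios(sens, spec):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sens / (1 - spec), (1 - sens) / spec

print(likelihood_ratios(0.30, 0.93))  # cutoff >= 0.5  -> (~4.3, ~0.75)
print(likelihood_ratios(0.30, 0.95))  # cutoff >= 0.66 -> (~6.0, ~0.74)
```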
Acquisition of Automatic Imitation Is Sensitive to Sensorimotor Contingency
ERIC Educational Resources Information Center
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-01-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror…
EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS
The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) data base in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A.; Peters, Anne
2013-01-01
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds. PMID:23118438
Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M
2010-04-30
Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.
Sensitivity analysis of urban flood flows to hydraulic controls
NASA Astrophysics Data System (ADS)
Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah
2017-04-01
Flooding represents one of the most significant natural hazards on each continent, particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of flood flow spatiotemporal dynamics, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at the scale 1/200, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge), and spatially uniform or distributed friction coefficients and porosity, respectively, tested in various ranges centered on their nominal values, which were calibrated thanks to accurate experimental data and related uncertainties. For various experimental configurations a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer to the downstream boundary condition on water height, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study, along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. Concerning simulations with distributed friction coefficients, crossroad friction is shown to have a much higher influence on the upstream water depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration in order to better understand flow dynamics and repartition, but also to guide model calibration in the light of flow controls.
Cost-Effectiveness of Opt-Out Chlamydia Testing for High-Risk Young Women in the U.S.
Owusu-Edusei, Kwame; Hoover, Karen W; Gift, Thomas L
2016-08-01
In spite of chlamydia screening recommendations, U.S. testing coverage continues to be low. This study explored the cost-effectiveness of a patient-directed, universal, opportunistic Opt-Out Testing strategy (based on insurance coverage, healthcare utilization, and test acceptance probabilities) for all women aged 15-24 years compared with current Risk-Based Screening (30% coverage) from a societal perspective. Based on insurance coverage (80%); healthcare utilization (83%); and test acceptance (75%), the proposed Opt-Out Testing strategy would have an expected annual testing coverage of approximately 50% for sexually active women aged 15-24 years. A basic compartmental heterosexual transmission model was developed to account for population-level transmission dynamics. Two groups were assumed based on self-reported sexual activity. All model parameters were obtained from the literature. Costs and benefits were tracked over a 50-year period. The relative sensitivity of the estimated incremental cost-effectiveness ratios to the variables/parameters was determined. This study was conducted in 2014-2015. Based on the model, the Opt-Out Testing strategy decreased the overall chlamydia prevalence by >55% (2.7% to 1.2%). The Opt-Out Testing strategy was cost saving compared with the current Risk-Based Screening strategy. The estimated incremental cost-effectiveness ratio was most sensitive to the female pre-opt out prevalence, followed by the probability of female sequelae and discount rate. The proposed Opt-Out Testing strategy was cost saving, improving health outcomes at a lower net cost than current testing. However, testing gaps would remain because many women might not have health insurance coverage, or not utilize health care. Published by Elsevier Inc.
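A minimal sketch of the kind of two-activity-group SIS transmission model described above follows (numpy; every rate and fraction below is an illustrative placeholder, not a calibrated input); it shows the mechanism by which raising testing coverage lowers equilibrium prevalence.

```python
import numpy as np

def prevalence(screen_cov, beta=0.45, c=np.array([1.0, 5.0]),
               frac=np.array([0.85, 0.15]), nat_clear=0.7,
               screen_freq=1.0, years=50, dt=0.01):
    """Overall prevalence after `years` (rates per year, SIS dynamics, two groups)."""
    I = np.array([0.02, 0.10])                           # infected fraction per activity group
    for _ in range(int(years / dt)):
        p_inf = (c * frac * I).sum() / (c * frac).sum()  # prob. a partner is infected (prop. mixing)
        foi = beta * c * p_inf                           # force of infection per group
        removal = nat_clear + screen_cov * screen_freq   # natural clearance + treatment via testing
        I = I + dt * (foi * (1 - I) - removal * I)
    return (frac * I).sum()

print(f"30% coverage (risk-based): {prevalence(0.30):.4f}")
print(f"50% coverage (opt-out):    {prevalence(0.50):.4f}")
```

Testing enters the model simply as an extra removal rate on the infected compartments, which is why coverage gains propagate into lower population prevalence and averted sequelae costs.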
Hadorn, Daniela C; Racloz, Vanessa; Schwermer, Heinzpeter; Stärk, Katharina D C
2009-01-01
Vector-borne diseases pose a special challenge to veterinary authorities due to complex and time-consuming surveillance programs taking into account vector habitat. Using stochastic scenario tree modelling, each possible surveillance activity of a future surveillance system can be evaluated with regard to its sensitivity and the expected cost. The overall sensitivity of various potential surveillance systems, composed of different combinations of surveillance activities, is calculated and the proposed surveillance system is optimized with respect to the considered surveillance activities, the sensitivity and the cost. The objective of this project was to use stochastic scenario tree modelling in combination with a simple cost analysis in order to develop the national surveillance system for Bluetongue in Switzerland. This surveillance system was established due to the emerging outbreak of Bluetongue virus serotype 8 (BTV-8) in Northern Europe in 2006. Based on the modelling results, it was decided to implement an improved passive clinical surveillance in cattle and sheep through campaigns in order to increase disease awareness alongside a targeted bulk milk testing strategy in 200 dairy cattle herds located in high-risk areas. The estimated median probability of detection of cases (i.e. sensitivity) of the surveillance system in this combined approach was 96.4%. The evaluation of the prospective national surveillance system predicted that passive clinical surveillance in cattle would provide the highest probability to detect BTV-8 infected animals, followed by passive clinical surveillance in sheep and bulk milk testing of 200 dairy cattle farms in high-risk areas. This approach is also applicable in other countries and to other epidemic diseases.
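In scenario-tree analyses, component sensitivities combine like independent detection probabilities; a minimal sketch follows (the component values are invented, chosen only so the combination lands near the reported 96.4%, and the real model derives each component sensitivity from its own stochastic tree).

```python
import numpy as np

# probability each surveillance component detects at least one case (illustrative)
cse = {"passive clinical surveillance, cattle": 0.85,
       "passive clinical surveillance, sheep": 0.55,
       "bulk milk testing, 200 high-risk herds": 0.45}

# assuming independent components, system sensitivity is 1 - prod(1 - CSe_i)
system_sensitivity = 1 - np.prod([1 - v for v in cse.values()])
print(f"combined surveillance sensitivity: {system_sensitivity:.3f}")
```

This combination rule is also what makes the cost trade-off explicit: a cheap component with modest sensitivity can still lift the system sensitivity appreciably.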
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
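A minimal sketch of the linear-sensitivity update step described above, assuming a sensitivity matrix S that relates parameter changes to changes in measured system quantities (e.g., modal frequencies); the truncated-SVD pseudoinverse stands in for the report's constrained optimization with singular value decomposition.

```python
import numpy as np

def update_parameters(S, residual, tol=1e-8):
    """One Gauss-Newton-style update: solve S @ dtheta = residual for the
    physical-parameter changes dtheta, using a truncated-SVD pseudoinverse
    to handle the ill conditioning typical of large finite element models."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    keep = s > tol * s[0]                       # drop near-zero singular values
    dtheta = Vt[keep].T @ ((U[:, keep].T @ residual) / s[keep])
    return dtheta
```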
Kyongho Son; Christina Tague; Carolyn Hunsaker
2016-01-01
The effect of fine-scale topographic variability on model estimates of ecohydrologic responses to climate variability in California's Sierra Nevada watersheds has not been adequately quantified and may be important for supporting reliable climate-impact assessments. This study tested the effect of digital elevation model (DEM) resolution on model accuracy and estimates...
CLIMACS: a computer model of forest stand development for western Oregon and Washington.
Virginia H. Dale; Miles Hemstrom
1984-01-01
A simulation model for the development of timber stands in the Pacific Northwest is described. The model grows individual trees of 21 species in a 0.20-hectare (0.08-acre) forest gap. The model provides a means of assimilating existing information, indicates where knowledge is deficient, suggests where the forest system is most sensitive, and provides a first testing...
2016-04-01
environment. Modeling is suitable for well-characterized parts, and stochastic modeling techniques can be used for sensitivity analysis and generating a...large cohort of trials to spot unusual cases. However, deployment repeatability is inherently a nonlinear phenomenon, which makes modeling difficult...recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force. 1. Test the flight model
Minimum cost to control bovine tuberculosis in cow-calf herds
Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.
2014-01-01
Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601
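A hedged sketch of the optimization layer described above: a brute-force search over protocol parameters wrapped around a stochastic outbreak simulator. simulate_outbreak_cost is a hypothetical callable standing in for the paper's herd model, and the demo lambda only shows the calling convention.

```python
import numpy as np

def optimize_protocol(simulate_outbreak_cost, rng, n_reps=500):
    """Grid-search the control protocol that minimizes mean simulated cost.
    simulate_outbreak_cost(tests_required, interval_months, rng) is assumed
    to run one stochastic outbreak and return its total control cost."""
    best = None
    for tests_required in (2, 3, 4, 5):          # negative whole-herd tests
        for interval_months in (2, 3, 4, 6):     # gap between tests
            costs = [simulate_outbreak_cost(tests_required, interval_months, rng)
                     for _ in range(n_reps)]
            mean_cost = float(np.mean(costs))
            if best is None or mean_cost < best[0]:
                best = (mean_cost, tests_required, interval_months)
    return best

# Dummy simulator just to demonstrate the interface; not a herd model.
demo = lambda k, dt, rng: rng.gamma(2.0, 1000.0) + 500.0 * k + 200.0 * dt
print(optimize_protocol(demo, np.random.default_rng(0)))
```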
Using Temperature Sensitive Paint Technology
NASA Technical Reports Server (NTRS)
Hamner, M. P.; Popernack, T. G., Jr.; Owens, L. R.; Wahls, R. A.
2002-01-01
New facilities and test techniques afford research aerodynamicists many opportunities to investigate complex aerodynamic phenomena. For example, NASA Langley Research Center's National Transonic Facility (NTF) can hold Mach number, Reynolds number, dynamic pressure, stagnation temperature and stagnation pressure constant during testing. This is important because the wing twist associated with model construction may mask important Reynolds number effects associated with the flight vehicle. Beyond this, the NTF's ability to vary Reynolds number allows for important research into the study of boundary layer transition. The capabilities of facilities such as the NTF, coupled with test techniques such as temperature sensitive paint, yield data that can be applied not only to vehicle design but also to validation of computational methods. Development of luminescent paint technology for acquiring pressure and temperature measurements began in the mid-1980s. While pressure sensitive luminescent paints (PSP) were being developed to acquire data for aerodynamic performance and loads, temperature sensitive luminescent paints (TSP) have been used for a much broader range of applications. For example, TSP has been used to acquire surface temperature data to determine the heating due to rotating parts in various types of mechanical systems. It has been used to determine the heating pattern(s) on circuit boards. And it has been used in boundary layer analysis and applied to the validation of full-scale flight performance predictions. That is, data acquired on the same model can be used to develop trends from off-design to full-scale flight Reynolds number, e.g. to show the progression of boundary layer transition. A discussion of issues related to successfully setting up TSP tests and using TSP systems for boundary layer studies is included in this paper, as well as results from a variety of TSP tests. TSP images included in this paper are all grey-scale so that, as in pictures from sublimating chemical tests, areas of laminar flow appear "lighter," or white, and areas of turbulent flow appear "darker."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valencia, Antoni; Prous, Josep; Mora, Oscar
As indicated in ICH M7 draft guidance, in silico predictive tools, including statistically-based QSARs and expert analysis, may be used as a computational assessment of bacterial mutagenicity for the qualification of impurities in pharmaceuticals. To address this need, we developed and validated a QSAR model to predict Salmonella typhimurium mutagenicity (Ames assay outcome) of pharmaceutical impurities using Prous Institute's Symmetry℠, a new in silico solution for drug discovery and toxicity screening, and the Mold2 molecular descriptor package (FDA/NCTR). Data were sourced from public benchmark databases with known Ames assay mutagenicity outcomes for 7300 chemicals (57% mutagens). Of these data, 90% were used to train the model and the remaining 10% were set aside as a holdout set for validation. The model's applicability to drug impurities was tested using an FDA/CDER database of 951 structures, of which 94% were found within the model's applicability domain. The predictive performance of the model is acceptable for supporting regulatory decision-making, with 84 ± 1% sensitivity, 81 ± 1% specificity, 83 ± 1% concordance and 79 ± 1% negative predictivity based on internal cross-validation, while the holdout dataset yielded 83% sensitivity, 77% specificity, 80% concordance and 78% negative predictivity. Given the importance of having confidence in negative predictions, an additional external validation of the model was carried out using marketed drugs known to be Ames-negative, obtaining 98% coverage and 81% specificity. Additionally, Ames mutagenicity data from FDA/CFSAN were used to create another data set of 1535 chemicals for external validation of the model, yielding 98% coverage, 73% sensitivity, 86% specificity, 81% concordance and 84% negative predictivity. Highlights: • A new in silico QSAR model to predict Ames mutagenicity is described. • The model is extensively validated with chemicals from the FDA and the public domain. • Validation tests show desirable high sensitivity and high negative predictivity. • The model predicted 14 reportedly difficult-to-predict drug impurities with accuracy. • The model is suitable to support risk evaluation of potentially mutagenic compounds.
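For reference, the four performance measures quoted above all come straight from the confusion matrix; a small sketch (the counts in the demo call are made up, not the paper's):

```python
def classification_metrics(tp, fp, tn, fn):
    """The four measures quoted above, from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # mutagens caught
    specificity = tn / (tn + fp)                    # non-mutagens cleared
    concordance = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
    neg_predictivity = tn / (tn + fn)               # trust in negatives
    return sensitivity, specificity, concordance, neg_predictivity

print(classification_metrics(tp=300, fp=60, tn=290, fn=60))  # made-up counts
```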
Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho
2015-04-01
Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods may consume a large amount of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in a low-cost setting. In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital from 2008.1.1 to 2008.12.31. Based on the dataset, we first apply sampling techniques and a dimension reduction method to preprocess the testing data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with the random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
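A runnable sketch of the cost-sensitive idea under stated assumptions: synthetic data stands in for the hospital records, and the class weight and decision threshold are illustrative knobs rather than the paper's tuned values. The point is that weighting the rare positive class and lowering the threshold trades precision for recall.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the personal-health-information features.
X, y = make_classification(n_samples=4000, n_features=12, weights=[0.97],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive learning: heavier weight on the rare positive class.
clf = RandomForestClassifier(n_estimators=300, class_weight={0: 1, 1: 20},
                             random_state=0).fit(X_tr, y_tr)

# Lower the decision threshold to push recall toward 100%.
proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.05).astype(int)
print(recall_score(y_te, pred), precision_score(y_te, pred))
```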
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Lianhong; Pallardy, Stephen G.; Yang, Bai
2016-07-14
Testing complex land surface models has often proceeded by asking the question: does the model prediction agree with the observation? This approach has yet to lead to high-performance terrestrial models that meet the challenges of climate and ecological studies. Here we test the Community Land Model (CLM) by asking the question: does the model behave like an ecosystem? We pursue its answer by testing CLM in the ecosystem functional space (EFS) at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the Central U.S., focusing on carbon and water flux responses to precipitation regimes and associated stresses. In the observed EFS, precipitation regimes and associated water and heat stresses controlled seasonal and interannual variations of net ecosystem exchange (NEE) of CO2 and evapotranspiration in this deciduous forest ecosystem. Such controls were exerted more strongly by precipitation variability than by the total precipitation amount per se. A few simply constructed climate variability indices captured these controls, suggesting a high degree of potential predictability. While the interannual fluctuation in NEE was large, a net carbon sink was maintained even during an extreme drought year. Although CLM predicted seasonal and interannual variations in evapotranspiration reasonably well, its predictions of net carbon uptake were too small across the observed range of climate variability. Also, the model systematically underestimated the sensitivities of NEE and evapotranspiration to climate variability and overestimated the coupling strength between carbon and water fluxes. It is suspected that the modeled and observed trajectories of ecosystem fluxes did not overlap in the EFS and that the model did not behave like the ecosystem it attempted to simulate. A definitive conclusion will require comprehensive parameter and structural sensitivity tests in a rigorous mathematical framework. We also suggest that future model improvements should focus on better representation and parameterization of process responses to environmental stresses and on more complete and robust representations of carbon-specific processes so that adequate responses to climate variability and a proper degree of coupling between carbon and water exchanges are captured.
Family Environments, Adrenarche, and Sexual Maturation: A Longitudinal Test of a Life History Model
ERIC Educational Resources Information Center
Ellis, Bruce J.; Essex, Marilyn J.
2007-01-01
Life history theorists have proposed that humans have evolved to be sensitive to specific features of early childhood environments and that exposure to different environments biases children toward development of different reproductive strategies, including differential pubertal timing. The current research provides a longitudinal test of this…
Equilibrium and Effective Climate Sensitivity
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Bloch-Johnson, J.
2016-12-01
Atmosphere-ocean general circulation models, as well as the real world, take thousands of years to equilibrate to CO2-induced radiative perturbations. Equilibrium climate sensitivity - the response to a fully equilibrated 2xCO2 perturbation - has been used for decades as a benchmark in model intercomparisons, as a test of our understanding of the climate system and paleo proxies, and to predict or project future climate change. Computational costs and limited time lead to the widespread practice of extrapolating equilibrium conditions from just a few decades of coupled simulations. The most common workaround is the "effective climate sensitivity" - defined through an extrapolation of a 150-year abrupt 2xCO2 simulation, including the assumption of linear climate feedbacks. The definitions of effective and equilibrium climate sensitivity are often mixed up and used equivalently, and it is argued that "transient climate sensitivity" is the more relevant measure for predicting the next decades. We present an ongoing model intercomparison, the "LongRunMIP", to study century and millennial time scales of AOGCM equilibration and the linearity assumptions underlying feedback analysis. As a true ensemble of opportunity, there is no protocol; the only condition to participate is a coupled model simulation of any stabilizing scenario simulating more than 1000 years. Many of the submitted simulations took several years to conduct. As of July 2016 the contribution comprises 27 scenario simulations of 13 different models originating from 7 modeling centers, each between 1000 and 6000 years. To contribute, please contact the authors as soon as possible. We present preliminary results, discussing differences between effective and equilibrium climate sensitivity, the usefulness of transient climate sensitivity, extrapolation methods, and the state of the coupled climate system close to equilibrium. Figure caption: Evolution of temperature anomaly and radiative imbalance of 22 simulations with 12 models (color indicates the model); 20-year moving average.
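A sketch of the standard extrapolation behind "effective climate sensitivity" (the regression-to-zero-imbalance method the abstract alludes to), run on synthetic data; the forcing and equilibrium values below are invented for illustration, and real abrupt-2xCO2 runs deviate from this straight line, which is the linearity assumption the intercomparison probes.

```python
import numpy as np

def effective_climate_sensitivity(dT, dN):
    """Regress top-of-atmosphere imbalance N (W m^-2) on warming T (K);
    assuming linear feedbacks, the fitted line N = F + lam*T crosses N = 0
    at the extrapolated equilibrium warming."""
    lam, forcing = np.polyfit(dT, dN, 1)   # slope (feedback), intercept
    return -forcing / lam                  # x-intercept of the fit

# Synthetic 150-year run: true equilibrium 3.5 K, forcing 3.7 W m^-2.
t = np.arange(150)
dT = 3.5 * (1 - np.exp(-t / 30.0))
dN = 3.7 * (1 - dT / 3.5) + np.random.default_rng(0).normal(0, 0.2, 150)
print(effective_climate_sensitivity(dT, dN))  # ~3.5 K
```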
Liles, Elizabeth G; Perrin, Nancy; Rosales, Ana G; Smith, David H; Feldstein, Adrianne C; Mosen, David M; Levin, Theodore R
2018-05-02
The fecal immunochemical test (FIT) is easier to use and more sensitive than the guaiac fecal occult blood test, but it is unclear how to optimize FIT performance. We compared the sensitivity and specificity for detecting advanced colorectal neoplasia between single-sample (1-FIT) and two-sample (2-FIT) FIT protocols at a range of hemoglobin concentration cutoffs for a positive test. We recruited 2,761 average-risk men and women ages 49-75 referred for colonoscopy within a large nonprofit, group-model health maintenance organization (HMO), and asked them to complete two separate single-sample FITs. We generated receiver-operating characteristic (ROC) curves to compare sensitivity and specificity estimates for 1-FIT and 2-FIT protocols among those who completed both FIT kits and colonoscopy. We similarly compared sensitivity and specificity between hemoglobin concentration cutoffs for a single-sample FIT. Differences in sensitivity and specificity between the 1-FIT and 2-FIT protocols were not statistically significant at any of the pre-specified hemoglobin concentration cutoffs (10, 15, 20, 25, and 30 μg/g). There was a significant difference in test performance of the one-sample FIT between 50 ng/ml (10 μg/g) and each of the higher pre-specified cutoffs. Disease prevalence was low. A two-sample FIT is not superior to a one-sample FIT in detection of advanced adenomas; the one-sample FIT at a hemoglobin concentration cutoff of 50 ng/ml (10 μg/g) is significantly more sensitive for advanced adenomas than at higher cutoffs. These findings apply to a population of younger, average-risk patients in a U.S. integrated care system with high rates of prior screening.
Evaluation of tartar control dentifrices in in vitro models of dentin sensitivity.
Mason, S; Levan, A; Crawford, R; Fisher, S; Gaffar, A
1991-01-01
The effects of anticalculus dentifrices were compared with other commercially available dentifrices in in vitro models of dentin sensitivity. Changes in the hydraulic conductance of dentin discs were measured with and without a smear layer before and after treatment and also after a post-treatment acid etch. The capacity of dentifrices to occlude open dentinal tubules in vitro was also assessed by scanning electron microscopy (SEM). There was good correlation (R = 0.98) between our test and values reported in the literature. Tartar control dentifrices gave reductions in fluid flow rates through the dentin discs comparable to those obtained with Promise, Sensodyne, Thermodent and Denquel. Additionally, tartar control dentifrices did not remove microcrystalline debris (smear layers) from the surfaces of dentin in vitro. These results were confirmed by SEM. Thus, according to the hydrodynamic theory of dentin sensitivity, these in vitro results suggest that pyrophosphate-containing dentifrices should reduce dentinal sensitivity.
Does equity sensitivity moderate the relationship between effort-reward imbalance and burnout?
Oren, Lior; Littman-Ovadia, Hadassah
2013-01-01
The model of effort-reward imbalance (ERI) has received considerable research attention in the job stress literature. However, little research has investigated individual differences as moderators between ERI and stress. The present study examines the combined effects of ERI, overcommitment (OVC), and the interaction between ERI and overcommitment on burnout (i.e., emotional exhaustion, cynicism, and inefficacy), and the moderating role of equity sensitivity. A questionnaire measuring ERI, burnout, and equity sensitivity was administered to 159 employees. Regression analyses were conducted to test the proposed relations and moderating hypotheses. ERI was negatively related to inefficacy, and overcommitment was positively related to emotional exhaustion and cynicism. In addition, equity sensitivity was found to moderate the effect of overcommitment on emotional exhaustion and inefficacy. The findings emphasize the detrimental effect overcommitment may have on employees' mental health and suggest that the ERI model components may be closely related to perceptions of organizational justice.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
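A hedged sketch of the age-weighting idea credited above with improving robustness: discount older evidence in the accumulated likelihood-ratio decision function so that persistent small modeling errors cannot pile up into false alarms. The recursion and forgetting factor are illustrative, not the report's exact formulation.

```python
def age_weighted_decision_function(llr_increments, forgetting=0.98):
    """Accumulate a log-likelihood-ratio statistic with exponential
    age-weighting: past evidence decays geometrically, so slowly drifting
    modeling errors saturate instead of growing without bound, while a
    genuine failure still drives the statistic past its threshold."""
    stat, history = 0.0, []
    for inc in llr_increments:
        stat = forgetting * stat + inc   # discount old evidence, add new
        history.append(stat)
    return history

# Small bias from modeling error stays bounded: 0.01 / (1 - 0.98) = 0.5.
print(age_weighted_decision_function([0.01] * 400)[-1])
```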
Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold
NASA Astrophysics Data System (ADS)
Anosov, O. L.; Butkovskii, O. Ya.; Kadtke, J.; Kravtsov, Yu. A.; Protopopescu, V.
1997-05-01
We report preliminary results on the dependence of heart rhythm variability on stress level, using qualitative, low-dimensional models. The reconstruction of macroscopic heart models yielding the duration of cardiac cycles (RR intervals) was based on actual clinical data. Our results show that the coefficients of the low-dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy, indirect method for determining the anaerobic threshold. This method could replace costly and invasive traditional methods such as gas analysis and blood tests.
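A minimal sketch of one way to realize such a low-dimensional reconstruction, assuming a linear autoregressive model of the RR-interval series; the model order, the least-squares fit, and the synthetic series are illustrative choices, not the authors' model.

```python
import numpy as np

def fit_ar_coefficients(rr, order=3):
    """Least-squares autoregressive fit: predict each RR interval from the
    previous `order` intervals. Tracking how the fitted coefficients shift
    with workload is the kind of marker the abstract proposes."""
    X = np.column_stack([rr[order - 1 - k : len(rr) - 1 - k]
                         for k in range(order)])
    y = rr[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10.0)  # synthetic RR series (s)
print(fit_ar_coefficients(rr))
```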
True versus Apparent Malaria Infection Prevalence: The Contribution of a Bayesian Approach
Claes, Filip; Van Hong, Nguyen; Torres, Kathy; Mao, Sokny; Van den Eede, Peter; Thi Thinh, Ta; Gamboa, Dioni; Sochantha, Tho; Thang, Ngo Duc; Coosemans, Marc; Büscher, Philippe; D'Alessandro, Umberto; Berkvens, Dirk; Erhart, Annette
2011-01-01
Aims: To present a new approach for estimating the "true prevalence" of malaria and apply it to datasets from Peru, Vietnam, and Cambodia. Methods: Bayesian models were developed for estimating both the malaria prevalence using different diagnostic tests (microscopy, PCR and ELISA), without the need of a gold standard, and the tests' characteristics. Several sources of information, i.e. data, expert opinions and other sources of knowledge, can be integrated into the model. This approach, which results in an optimal and harmonized estimate of malaria infection prevalence with no conflict between the different sources of information, was tested on data from Peru, Vietnam and Cambodia. Results: Malaria sero-prevalence was relatively low in all sites, with ELISA showing the highest estimates. The sensitivities of microscopy and ELISA were statistically lower in Vietnam than in the other sites. Similarly, the specificities of microscopy, ELISA and PCR were significantly lower in Vietnam than in the other sites. In Vietnam and Peru, microscopy was closer to the "true" estimate than the other two tests, while, as expected, ELISA, with its lower specificity, usually overestimated the prevalence. Conclusions: Bayesian methods are useful for analyzing prevalence results when no gold standard diagnostic test is available. Though some results are expected, e.g. PCR being more sensitive than microscopy, a standardized and context-independent quantification of the diagnostic tests' characteristics (sensitivity and specificity) and the underlying malaria prevalence may be useful for comparing different sites. Indeed, the use of a single diagnostic technique could strongly bias the prevalence estimation. This limitation can be circumvented by using a Bayesian framework taking into account the imperfect characteristics of the currently available diagnostic tests. As discussed in the paper, this approach may further support global malaria burden estimation initiatives. PMID:21364745
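The simplest non-Bayesian analogue of this correction is the classical Rogan-Gladen estimator, which inverts the relation apparent prevalence = Se*p + (1 - Sp)*(1 - p) for a single imperfect test; the Bayesian model in the paper generalizes this to several tests with priors on Se and Sp. A sketch (the example numbers are invented):

```python
def rogan_gladen(apparent_prev, se, sp):
    """Point estimate of true prevalence p from the apparent (test-positive)
    prevalence and a single test's sensitivity/specificity, clipped to [0, 1]."""
    p = (apparent_prev + sp - 1) / (se + sp - 1)
    return min(1.0, max(0.0, p))

# e.g., 8% test-positive by microscopy with se = 0.85, sp = 0.98:
print(rogan_gladen(0.08, 0.85, 0.98))  # ~0.072
```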
Visualization of boundary-layer development on turbomachine blades with liquid crystals
NASA Technical Reports Server (NTRS)
Vanzante, Dale E.; Okiishi, Theodore H.
1991-01-01
This report documents a study of the use of liquid crystals to visualize boundary layer development on a turbomachine blade. A turbine blade model in a linear cascade of blades was used for the tests involved. Details of the boundary layer development on the suction surface of the turbine blade model were known from previous research. Temperature sensitive and shear sensitive liquid crystals were tried as visual agents. The temperature sensitive crystals were very effective in their ability to display the location of boundary layer flow separation and reattachment. Visualization of natural transition from laminar to turbulent boundary layer flow with the temperature sensitive crystals was possible but subtle. The visualization of separated flow reattachment with the shear sensitive crystals was easily accomplished when the crystals were allowed to make a transition from the focal-conic to a Grandjean texture. Visualization of flow reattachment based on the selective reflection properties of shear sensitive crystals was achieved only marginally because of the larger surface shear stress and shear stress gradient levels required for more dramatic color differences.
FEAST: sensitive local alignment with multiple rates of evolution.
Hudek, Alexander K; Brown, Daniel G
2011-01-01
We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.
Toward a 3D model of human brain development for studying gene/environment interactions
2013-01-01
This project aims to establish and characterize an in vitro model of the developing human brain for the purpose of testing drugs and chemicals. To accurately assess risk, a model needs to recapitulate the complex interactions between different types of glial cells and neurons in a three-dimensional platform. Moreover, human cells are preferred over cells from rodents to eliminate cross-species differences in sensitivity to chemicals. Previously, we established conditions to culture rat primary cells as three-dimensional aggregates, which will be humanized and evaluated here with induced pluripotent stem cells (iPSCs). The use of iPSCs allows us to address gene/environment interactions as well as the potential of chemicals to interfere with epigenetic mechanisms. Additionally, iPSCs afford us the opportunity to study the effect of chemicals during very early stages of brain development. It is well recognized that assays for testing toxicity in the developing brain must consider differences in sensitivity and susceptibility that arise depending on the time of exposure. This model will reflect critical developmental processes such as proliferation, differentiation, lineage specification, migration, axonal growth, dendritic arborization and synaptogenesis, which will probably display differences in sensitivity to different types of chemicals. Functional endpoints will evaluate the complex cell-to-cell interactions that are affected in neurodevelopment through chemical perturbation, and the efficacy of drug intervention to prevent or reverse phenotypes. The model described is designed to assess developmental neurotoxicity effects on unique processes occurring during human brain development by leveraging human iPSCs from diverse genetic backgrounds, which can be differentiated into different cell types of the central nervous system. Our goal is to demonstrate the feasibility of the personalized model using iPSCs derived from individuals with neurodevelopmental disorders caused by known mutations and chromosomal aberrations. Notably, such a human brain model will be a versatile tool for more complex testing platforms and strategies as well as research into central nervous system physiology and pathology. PMID:24564953
Rampton, Melanie; Walton, Shelley F; Holt, Deborah C; Pasay, Cielo; Kelly, Andrew; Currie, Bart J; McCarthy, James S; Mounsey, Kate E
2013-01-01
No commercial immunodiagnostic tests for human scabies are currently available, and existing animal tests are not sufficiently sensitive. The recombinant Sarcoptes scabiei apolipoprotein antigen Sar s 14.3 is a promising immunodiagnostic, eliciting high levels of IgE and IgG in infected people. Limited data are available regarding the temporal development of antibodies to Sar s 14.3, an issue of relevance in terms of immunodiagnosis. We utilised a porcine model to prospectively compare specific antibody responses to a primary infestation by ELISA, to Sar s 14.3 and to S. scabiei whole mite antigen extract (WMA). Differences in the antibody profile between antigens were apparent, with Sar s 14.3 responses detected earlier, and declining significantly after peak infestation compared to WMA. Both antigens resulted in >90% diagnostic sensitivity from weeks 8-16 post infestation. These data provide important information on the temporal development of humoral immune responses in scabies and further supports the development of recombinant antigen based immunodiagnostic tests for recent scabies infestations.
Evaluation of puberty by verifying spontaneous and stimulated gonadotropin values in girls.
Chin, Vivian L; Cai, Ziyong; Lam, Leslie; Shah, Bina; Zhou, Ping
2015-03-01
Changes in pharmacological agents and advancements in laboratory assays have changed the gonadotropin-releasing hormone analog stimulation test. To determine the best predictive model for detecting puberty in girls. Thirty-five girls, aged 2 years 7 months to 9 years 3 months, with central precocious puberty (CPP) (n=20) or premature thelarche/premature adrenarche (n=15). Diagnoses were based on clinical information, baseline hormones, bone age, and pelvic sonogram. Gonadotropins and E2 were analyzed using immunochemiluminometric assay. Logistic regression for CPP was performed. The best predictor of CPP is the E2-change model based on 3- to 24-h values, providing 80% sensitivity and 87% specificity. Three-hour luteinizing hormone (LH) provided 75% sensitivity and 87% specificity. Basal LH lowered sensitivity to 65% and specificity to 53%. The E2-change model provided the best predictive power; however, 3-h LH was more practical and convenient when evaluating puberty in girls.
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems in general.
Shields, B M; McDonald, T J; Ellard, S; Campbell, M J; Hyde, C; Hattersley, A T
2012-05-01
Diagnosing MODY is difficult. To date, selection for molecular genetic testing for MODY has used discrete cut-offs of limited clinical characteristics with varying sensitivity and specificity. We aimed to use multiple, weighted, clinical criteria to determine an individual's probability of having MODY, as a crucial tool for rational genetic testing. We developed prediction models using logistic regression on data from 1,191 patients with MODY (n = 594), type 1 diabetes (n = 278) and type 2 diabetes (n = 319). Model performance was assessed by receiver operating characteristic (ROC) curves, cross-validation and validation in a further 350 patients. The models defined an overall probability of MODY using a weighted combination of the most discriminative characteristics. For MODY, compared with type 1 diabetes, these were: lower HbA(1c), parent with diabetes, female sex and older age at diagnosis. MODY was discriminated from type 2 diabetes by: lower BMI, younger age at diagnosis, female sex, lower HbA(1c), parent with diabetes, and not being treated with oral hypoglycaemic agents or insulin. Both models showed excellent discrimination (c-statistic = 0.95 and 0.98, respectively), low rates of cross-validated misclassification (9.2% and 5.3%), and good performance on the external test dataset (c-statistic = 0.95 and 0.94). Using the optimal cut-offs, the probability models improved the sensitivity (91% vs 72%) and specificity (94% vs 91%) for identifying MODY compared with standard criteria of diagnosis <25 years and an affected parent. The models are now available online at www.diabetesgenes.org . We have developed clinical prediction models that calculate an individual's probability of having MODY. This allows an improved and more rational approach to determine who should have molecular genetic testing.
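A hedged sketch of the modeling recipe described above: fit a logistic regression on clinical features and score discrimination with the c-statistic (area under the ROC curve), estimated here by cross-validation. Synthetic features stand in for the real clinical variables (HbA1c, age at diagnosis, BMI, parental diabetes, sex, treatment).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the clinical dataset (1 = MODY, 0 = other diabetes).
X, y = make_classification(n_samples=900, n_features=6, n_informative=4,
                           random_state=0)

model = LogisticRegression(max_iter=1000)
proba = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
print("c-statistic:", roc_auc_score(y, proba))  # discrimination measure
```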
Martín-Sánchez, Ana; Valera-Marín, Guillermo; Hernández-Martínez, Adoración; Lanuza, Enrique; Martínez-García, Fernando; Agustín-Pavón, Carmen
2015-01-01
Virgin adult female mice display nearly spontaneous maternal care towards foster pups after a short period of sensitization. This indicates that maternal care is triggered by sensory stimulation provided by the pups and that its onset is largely independent of the physiological events related to gestation, parturition and lactation. Conversely, the factors influencing maternal aggression are poorly understood. In this study, we sought to characterize two models of maternal sensitization in the outbred CD1 strain. To do so, a group of virgin females (godmothers) was exposed to continuous cohabitation with a lactating dam and her pups from the moment of parturition, whereas a second group (pup-sensitized females) was exposed 2 h daily to foster pups. Both groups were tested for maternal behavior on postnatal days 2–4. Godmothers expressed full maternal care from the first test. Also, they expressed higher levels of crouching than dams. Pup-sensitized females differed from dams in all measures of pup-directed behavior in the first test, and expressed full maternal care after two sessions of contact with pups. However, both protocols failed to induce maternal aggression toward a male intruder after the full onset of pup-directed maternal behavior, even in the presence of pups. Our study confirms that adult female mice need a short sensitization period before the onset of maternal care. Further, it shows that pup-oriented and non-pup-oriented components of maternal behavior are under different physiological control. We conclude that the godmother model might be useful to study the physiological and neural bases of the maternal behavior repertoire. PMID:26257621
Methamphetamine-induced behavioral sensitization in a rodent model of posttraumatic stress disorder.
Eagle, Andrew L; Perrine, Shane A
2013-07-01
Single prolonged stress (SPS) is a rodent model of posttraumatic stress disorder (PTSD)-like characteristics. Given that PTSD is frequently comorbid with substance abuse and dependence, including methamphetamine (METH) dependence, the current study sought to investigate the effects of SPS on METH-induced behavioral sensitization. In experiment 1, Sprague-Dawley rats were subjected to SPS or control treatment and subsequently tested across four sessions of an escalating METH dosing paradigm. METH was injected (i.p.) in escalating doses (0, 0.032, 0.1, 0.32, 1.0, and 3.2 mg/kg; dissolved in saline) every 15 min, and ambulatory activity was recorded. In experiment 2, SPS- and control-treated rats were injected (i.p.) with either saline or METH (5 mg/kg) for five consecutive daily sessions and tested for stereotypy as well as ambulatory activity. Two days later, all animals were injected with a challenge dose of METH (2.5 mg/kg) and again tested for activity. No differences in the acute response to METH were observed between SPS and controls. SPS enhanced METH-induced ambulatory activity across sessions, compared to controls. METH-induced stereotypy increased across sessions, indicative of behavioral sensitization; however, SPS attenuated, rather than enhanced, this effect, suggesting that SPS may prevent the development of stereotypy sensitization. Collectively, the results show that SPS increases repeated METH-induced ambulatory activity while preventing the transition across sessions from ambulatory activity to stereotypy. These findings suggest that SPS alters drug-induced neuroplasticity associated with behavioral sensitization to METH, which may reflect an effect on the shared neurocircuitry underlying PTSD and substance dependence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.