Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon
2017-01-01
[Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) from the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded treadmill exercise test in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model estimating HRLT and HRVT from HRT was developed with 70% of the data (men: 79, women: 76), assigned by 7:3 randomization using a Bernoulli trial. The validity of the developed regression model was then examined with the remaining 30% of the data (men: 33, women: 32). [Results] Based on the regression coefficients, the independent variable HRT was a significant predictor in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimate in the validity test was 11 bpm, similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
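As a sketch of this split-and-validate workflow, the following Python uses synthetic HRT/HRLT values (all numbers are illustrative, not the study's data) to show the Bernoulli 7:3 split, the OLS fit, and the hold-out standard error of estimate:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 220
hrt = rng.normal(150, 12, n)                  # heart rate threshold (bpm), synthetic
hrlt = 0.9 * hrt + 15 + rng.normal(0, 11, n)  # HR at lactate threshold, synthetic

dev = rng.random(n) < 0.7                     # Bernoulli trial, p = 0.7 (7:3 split)
X = sm.add_constant(hrt)
model = sm.OLS(hrlt[dev], X[dev]).fit()
print(model.params, model.rsquared_adj)

# Validate on the held-out 30%: standard error of estimate in bpm
resid = hrlt[~dev] - model.predict(X[~dev])
see = np.sqrt(np.sum(resid ** 2) / (resid.size - 2))
print(f"validation SEE = {see:.1f} bpm")
```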
Guenole, Nigel; Brown, Anna
2014-01-01
We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties of all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
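A minimal grid-search sketch of a bent-line (continuous threshold) expectile fit, in the spirit of the model above but not the cthreshER implementation; the data and the IRLS solver are illustrative:

```python
import numpy as np

def expectile_fit(X, y, tau, iters=50):
    """Linear expectile regression via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = np.where(y - X @ beta < 0, 1 - tau, tau)   # asymmetric weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # (X'WX) beta = X'Wy
    return beta

def bent_line_expectile(x, y, tau):
    """Grid-search the threshold t; refit the hinge term max(x - t, 0) at each t."""
    best = (np.inf, None, None)
    for t in np.quantile(x, np.linspace(0.1, 0.9, 81)):
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        beta = expectile_fit(X, y, tau)
        r = y - X @ beta
        loss = np.sum(np.abs(tau - (r < 0)) * r ** 2)  # asymmetric squared loss
        if loss < best[0]:
            best = (loss, t, beta)
    return best[1], best[2]

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 6.0, 0) + rng.normal(0, 1, 500)
t_hat, beta_hat = bent_line_expectile(x, y, tau=0.7)
print(t_hat, beta_hat)   # threshold near 6; the slope changes continuously there
```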
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Susan L. King
2003-01-01
The performance of two classifiers, logistic regression and neural networks, is compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1, and all of the trees below the threshold are classified as...
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, and each has limitations that prevent synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations treat rainfall as a single independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for the local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, the direct calculation of the rainfall rates that will result in a given likelihood, and the calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero as rainfall intensity approaches 0 mm/h and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, accurately predicts rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flows.
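To illustrate the key idea, rainfall intensity multiplying every basin term inside a logistic model, here is a hedged sketch; the coefficient values and the variable names (T, F, S) are placeholders rather than the published USGS model:

```python
import numpy as np

def debris_flow_likelihood(R, T, F, S, b0=-3.6, c=(0.41, 0.67, 0.70)):
    """P(debris flow) given rainfall intensity R and basin terms T, F, S."""
    x = b0 + (c[0] * T + c[1] * F + c[2] * S) * R
    return 1.0 / (1.0 + np.exp(-x))

def intensity_threshold(p, T, F, S, b0=-3.6, c=(0.41, 0.67, 0.70)):
    """Invert the model: the rainfall intensity at which likelihood reaches p."""
    return (np.log(p / (1 - p)) - b0) / (c[0] * T + c[1] * F + c[2] * S)

# Likelihood tends toward a near-zero floor as R -> 0 and rises with intensity
print(debris_flow_likelihood(R=0.0, T=0.3, F=1.2, S=0.6))
print(intensity_threshold(p=0.5, T=0.3, F=1.2, S=0.6))  # basin-specific threshold (mm/h)
```

Because the inversion is closed-form, a spatially explicit threshold map follows directly from per-basin T, F, and S rasters.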
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The threshold is quantified by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also shows how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse
2017-01-01
We present a logistic regression approach for forecasting the probability of future groundwater levels declining below, or remaining below, specific groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, the Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including tests for autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter-duration low-threshold events. We identified challenges in deriving probabilistic forecasting models and possible approaches for addressing those challenges.
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
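A minimal local-linear sketch of a regression discontinuity estimate around a CD4-like cutoff, on synthetic data; the bandwidth, variables, and effect size are illustrative, not the South African cohort:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
cd4 = rng.uniform(100, 300, 2000)               # continuous assignment variable
treated = cd4 < 200                             # threshold rule: treat below 200
risk = 0.3 - 0.001 * cd4 - 0.08 * treated + rng.normal(0, 0.05, 2000)

h = 50                                          # bandwidth around the cutoff
w = np.abs(cd4 - 200) <= h
X = sm.add_constant(np.column_stack([treated[w],
                                     cd4[w] - 200,
                                     (cd4[w] - 200) * treated[w]]))
fit = sm.OLS(risk[w], X).fit()
print(fit.params[1])   # RD-ITT estimate: the jump in outcome at the threshold
```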
Modeled summer background concentration nutrients and ...
We used regression models to predict background concentration of four water quality indictors: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators with land use and other stressor variables, we determined the concentration of the indicators when the land use and stressor variables were all set to zero the y-intercept. Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentration from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one variable representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 ug l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 ug l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than the other ri
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
Threshold altitude resulting in decompression sickness
NASA Technical Reports Server (NTRS)
Kumar, K. V.; Waligora, James M.; Calkins, Dick S.
1990-01-01
A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis on the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a noprebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m; while under a noprebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.
Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar
2011-11-01
Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes have been relatively unexplored, although they are of potential interest for exploring concentration-response relationships. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan and concentrations of the pollutants fine particulate matter (PM2.5), CO, NO2 and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg m-3 using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared to the standard linear relationship; e.g., in the time-series analysis, an interquartile range increase (9.2 μg m-3) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
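A sketch of the profile-likelihood threshold search on synthetic count data (a Poisson time-series analogue; the hinge form, grid, and numbers are assumptions, not the study's code):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
pm25 = rng.gamma(4, 4, n)                        # daily PM2.5 (ug/m3), synthetic
mu = np.exp(0.5 + 0.03 * np.maximum(pm25 - 12, 0))
counts = rng.poisson(mu)                         # daily asthma ED visits, synthetic

# Profile the log-likelihood over candidate thresholds c and keep the maximizer
best_c, best_ll = None, -np.inf
for c in np.quantile(pm25, np.linspace(0.05, 0.95, 50)):
    X = sm.add_constant(np.maximum(pm25 - c, 0))
    ll = sm.GLM(counts, X, family=sm.families.Poisson()).fit().llf
    if ll > best_ll:
        best_c, best_ll = c, ll
print(f"profiled threshold ~ {best_c:.1f} ug/m3")
```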
Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit
2015-01-01
The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy-to-use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. The first tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
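A short sketch of the regression-tree variant on synthetic inputs; the features mirror the first-tier predictors, but the generating formula and data are invented for illustration, and only the 38°C comparison follows the study:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([
    rng.uniform(20, 55, n),        # age
    rng.uniform(60, 100, n),       # baseline heart rate
    rng.uniform(36.2, 37.4, n),    # baseline core temperature (C)
    rng.uniform(19, 35, n),        # body mass index
    rng.uniform(8, 25, n),         # scenario duration (min)
])
core_end = X[:, 2] + 0.04 * X[:, 4] + 0.0025 * (X[:, 1] - 60) + rng.normal(0, 0.2, n)

tree = DecisionTreeRegressor(max_depth=3).fit(X[:200], core_end[:200])
pred_hyperthermia = tree.predict(X[200:]) >= 38.0   # Yes/No outcome
print(pred_hyperthermia.mean())
```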
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
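A minimal PLS sketch: regress log FIB concentration on correlated environmental covariates, then tune the decision threshold that separates predicted exceedances from non-exceedances. Data are synthetic; the 235 CFU/100 mL action value is illustrative of a beach standard:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, p = 400, 12
Z = rng.normal(size=(n, 3))                       # latent environmental drivers
X = Z @ rng.normal(size=(3, p)) + 0.3 * rng.normal(size=(n, p))  # correlated covariates
log_fib = Z @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 0.5, n)

pls = PLSRegression(n_components=3).fit(X[:300], log_fib[:300])
pred = pls.predict(X[300:]).ravel()

action = np.log10(235)            # action standard on the log scale, illustrative
tune = 2.2                        # decision threshold, tuned below the standard
posted = pred >= tune             # predicted exceedance -> post a swimmer's advisory
print(posted.sum(), (log_fib[300:] >= action).sum())
```

Lowering the tuning parameter trades false alarms for fewer missed exceedances, which is the single knob the abstract describes.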
Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.
2015-12-01
Rainfall-induced landslides are one of the most frequent hazards on sloping terrain. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover, and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. The ASTER GDEM is used as the basis for topographical characterization of slope and buffer analysis. Slope angle thresholds at the 50th, 75th, 95th, 98th, and 99th percentiles are assessed locally. Each threshold is then analyzed in relation to the other parameters in a logistic regression framework for the continental U.S. Thresholds below the 95th percentile are found to underestimate slope angles. The best regression fit is achieved with the 99th-percentile slope angle. This model predicts the highest number of cases correctly, at 87.0% accuracy. A one-unit rise in the 99th-percentile slope angle increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS, where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters such as precipitation and other proxies such as soil moisture will further improve the model's accuracy.
NASA Astrophysics Data System (ADS)
Kaiser, Olga; Martius, Olivia; Horenko, Illia
2017-04-01
Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all relevant covariates. But in real applications the complete set of relevant covariates might not be available. In this context, it has been shown that under weak assumptions the influence of systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric, and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of local stationary GPD models, whose parameters are expressed as regression models, and a nonparametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) to resolve the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models such as Mixture Models and Hidden Markov Models. Additionally, the presented FEM-BV-GPD approach provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series over 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Jerolmack, D. J.
2017-12-01
Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies thoroughly examined the mechanistic underpinnings of the observed correlation and produced suitably complex models. Because those models are difficult to implement for natural rivers using widely available data, others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested that their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. The first problem limits the empirical relations' predictive capacity to field sites drawn from the same region of the bed-load river phase space, and the second introduces a spurious correlation when the ratio of bankfull to critical shear stress is considered. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress, not channel slope. Using several recent datasets, we also highlight the potential pitfalls of predicting the threshold of motion from simplistic empirical regressions: while these concerns may seem subtle, the resulting implications can be substantial.
Deciphering factors controlling groundwater arsenic spatial variability in Bangladesh
NASA Astrophysics Data System (ADS)
Tan, Z.; Yang, Q.; Zheng, C.; Zheng, Y.
2017-12-01
Elevated concentrations of geogenic arsenic in groundwater have been found in many countries to exceed 10 μg/L, the WHO's guideline value for drinking water. A common yet unexplained characteristic of the spatial distribution of groundwater arsenic is its extensive variability at various spatial scales. This study investigates factors influencing the spatial variability of groundwater arsenic in Bangladesh to improve the accuracy of models predicting arsenic exceedance rates spatially. A novel boosted regression tree method is used to establish a weak-learner ensemble model, which is compared to a linear model built with a conventional stepwise logistic regression method. Boosted regression tree models offer the advantage of capturing parameter interactions when big datasets are analyzed, in comparison to logistic regression. The point data set (n=3,538) of groundwater hydrochemistry with 19 parameters was obtained by the British Geological Survey in 2001. The spatial data sets of geological parameters (n=13) were from the Consortium for Spatial Information, the Technical University of Denmark, the University of East Anglia and the FAO, while the soil parameters (n=42) were from the Harmonized World Soil Database. The aforementioned parameters were regressed against categorical groundwater arsenic concentrations below or above three thresholds, 5 μg/L, 10 μg/L and 50 μg/L, to identify the respective controlling factors. The boosted regression tree method outperformed logistic regression at all three threshold levels in terms of accuracy, specificity and sensitivity, resulting in an improved spatial distribution map of the probability of groundwater arsenic exceeding each threshold when compared to a disjunctive-kriging interpolated spatial arsenic map based on the same groundwater arsenic dataset. The boosted regression tree models also show that the most important controlling factors of groundwater arsenic distribution are groundwater iron content and well depth for all three thresholds. The probability that a well with iron content higher than 5 mg/L contains more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be more than 91%, 85% and 51%, respectively, while the probability that a well deeper than 160 m contains more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be less than 38%, 25% and 14%, respectively.
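A sketch comparing a boosted-tree classifier with a logistic regression for exceedance of the 10 μg/L threshold; the predictors and the interaction built into the synthetic data are stand-ins for the hydrochemical and soil variables, not the BGS dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 3000
iron = rng.gamma(2, 2, n)                     # groundwater Fe (mg/L), synthetic
depth = rng.uniform(5, 300, n)                # well depth (m), synthetic
other = rng.normal(size=(n, 5))               # other geochemical predictors
logit = -1.5 + 0.6 * iron - 0.02 * depth + 0.8 * iron * (depth < 100)
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # As > 10 ug/L indicator
X = np.column_stack([iron, depth, other])

brt = GradientBoostingClassifier().fit(X[:2000], y[:2000])
lr = LogisticRegression(max_iter=1000).fit(X[:2000], y[:2000])
for name, m in [("BRT", brt), ("logistic", lr)]:
    print(name, roc_auc_score(y[2000:], m.predict_proba(X[2000:])[:, 1]))
```

The iron-depth interaction in the generator is exactly what the additive logistic model misses and the tree ensemble picks up.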
NASA Astrophysics Data System (ADS)
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP), using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has previously been used as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics as well as Bayesian regression intercepts and slopes. The SOM defined two clusters of high-flux basins: one where PP flux was predominantly episodic and hydrologically driven, and another in which sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
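A non-Bayesian sketch of the segmented concentration-discharge idea (grid-search least squares stands in for the Bayesian inference the study uses; data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
logq = rng.uniform(-1, 2, 400)                              # log10 discharge
logc = 0.2 + 0.1 * logq + 0.8 * np.maximum(logq - 0.8, 0)   # two joined regimes
logc = logc + rng.normal(0, 0.15, 400)                      # log10 concentration

def sse_at(tau):
    """Sum of squared errors for a breakpoint at discharge 10**tau."""
    X = np.column_stack([np.ones_like(logq), logq, np.maximum(logq - tau, 0)])
    r = logc - X @ np.linalg.lstsq(X, logc, rcond=None)[0]
    return r @ r

taus = np.linspace(-0.5, 1.5, 101)
tau_hat = taus[int(np.argmin([sse_at(t) for t in taus]))]
print(f"threshold discharge ~ 10^{tau_hat:.2f}")   # vs. the median-flow default
```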
Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.
2009-01-01
Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured on screen-film mammographic images by a labor-intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g., breast compression thickness, radiological thickness, radiation dose, compression force) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %-density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80 using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively and automatically, bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343
Poverty dynamics, poverty thresholds and mortality: An age-stage Markovian model
Rehkopf, David; Tuljapurkar, Shripad; Horvitz, Carol C.
2018-01-01
Recent studies have examined the risk of poverty throughout the life course, but few have considered how transitioning in and out of poverty shapes the dynamic heterogeneity and mortality disparities of a cohort at each age. Here we use state-by-age modeling to capture individual heterogeneity in crossing one of three different poverty thresholds (defined as 1×, 2× or 3× the "official" poverty threshold) at each age. We examine age-specific state structure, the remaining life expectancy, its variance, and cohort simulations for those above and below each threshold. Survival and transitioning probabilities are statistically estimated by regression analyses of data from the Health and Retirement Study RAND dataset and the National Longitudinal Survey of Youth. Using the results of these regression analyses, we parameterize discrete-state, discrete-age matrix models. We found that individuals above all three thresholds have higher annual survival than those in poverty, especially from mid-ages to about age 80. The advantage is greatest when we classify individuals based on 1× the "official" poverty threshold. The greatest discrepancy in average remaining life expectancy and its variance between those above and in poverty occurs at mid-ages for all three thresholds. Fewer individuals are in poverty between ages 40 and 60 for all three thresholds. Our findings are consistent with results based on other data sets, but also suggest that dynamic heterogeneity in poverty and the transience of the poverty state is associated with income-related mortality disparities (less transience, especially of those above poverty, more disparities). This paper applies the approach of age-by-stage matrix models to human demography and individual poverty dynamics. In so doing we extend the literature on individual poverty dynamics across the life course. PMID:29768416
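A toy age-by-stage sketch of the modeling idea: two living states split at a poverty threshold, plus death, with remaining life expectancy computed by backward recursion. All survival and transition probabilities below are invented, not the estimated HRS/NLSY values:

```python
import numpy as np

ages = np.arange(40, 101)
surv_above = np.clip(0.995 - 0.00012 * (ages - 40) ** 2, 0, 1)  # annual survival
surv_below = np.clip(surv_above - 0.01, 0, 1)   # poverty penalty (illustrative)
p_exit = 0.15                                   # P(leave poverty | survive)
p_enter = 0.05                                  # P(enter poverty | survive)

# Remaining life expectancy by backward recursion over age and state
e_above = np.zeros(ages.size + 1)
e_below = np.zeros(ages.size + 1)
for i in range(ages.size - 1, -1, -1):
    e_above[i] = surv_above[i] * (1 + (1 - p_enter) * e_above[i + 1]
                                  + p_enter * e_below[i + 1])
    e_below[i] = surv_below[i] * (1 + p_exit * e_above[i + 1]
                                  + (1 - p_exit) * e_below[i + 1])
print(f"remaining LE at 40: above {e_above[0]:.1f} y, below {e_below[0]:.1f} y")
```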
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
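A condensed sketch of the three-step procedure (transform, fit, test) on synthetic two-group curves; the pooled-variance z-scores and the truncation grid are simplifying assumptions, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(11)
n, t = 80, 64
group = np.repeat([0, 1], n // 2)                 # scalar predictor (two arms)
curves = rng.normal(0, 1, (n, t))
curves[group == 1] += 0.6 * np.sin(np.linspace(0, np.pi, t))  # smooth group effect

# Step 1: real FFT of each subject's trajectory; keep low-frequency terms
coefs = np.fft.rfft(curves, axis=1).real[:, :16]

# Step 2: per-coefficient linear model => standardized group-effect z-scores
z = np.empty(coefs.shape[1])
for j in range(coefs.shape[1]):
    d = coefs[group == 1, j].mean() - coefs[group == 0, j].mean()
    se = np.sqrt(coefs[:, j].var(ddof=1) * (2 / (n // 2)))
    z[j] = d / se

# Step 3: adaptive Neyman statistic, maximized over truncation points m
m = np.arange(1, z.size + 1)
t_an = np.max(np.cumsum(z ** 2 - 1) / np.sqrt(2 * m))
print(t_an)   # compare against a simulated or asymptotic null for a p-value
```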
Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea
Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert
2017-01-01
Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both model 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHIREM (anatomy term), fraction of events that were hypopnea (arousal term), the ratio of AHIREM and AHINREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that has more association with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154
Satellite rainfall retrieval by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.
1986-01-01
The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome of the logistic model is the probability that the rain rate of a satellite pixel is above a certain threshold. By varying the threshold, a rain rate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rain rate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rain field models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model that preserves the fractional rain area and lognormality of rain rates as found in GATE is developed. A stochastic regression model of branching and immigration whose solutions are lognormally distributed in some asymptotic limits has also been developed.
Novel Analog For Muscle Deconditioning
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Bloomberg, Jacob
2010-01-01
Existing models of muscle deconditioning are cumbersome and expensive (e.g., bed rest). We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data from the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (the most difficult task) showed thresholds of: leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, knee extension (KE) isokinetic/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of strength/BW shows promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant exercise targets.
NASA Astrophysics Data System (ADS)
Fullard, James H.; Ter Hofstede, Hannah M.; Ratcliffe, John M.; Pollack, Gerald S.; Brigidi, Gian S.; Tinghitella, Robin M.; Zuk, Marlene
2010-01-01
The auditory thresholds of the AN2 interneuron and the behavioural thresholds of the anti-bat flight-steering responses that this cell evokes are less sensitive in female Pacific field crickets that live where bats have never existed (Moorea) compared with individuals subjected to intense levels of bat predation (Australia). In contrast, the sensitivity of the auditory interneuron, ON1 which participates in the processing of both social signals and bat calls, and the thresholds for flight orientation to a model of the calling song of male crickets show few differences between the two populations. Genetic analyses confirm that the two populations are significantly distinct, and we conclude that the absence of bats has caused partial regression in the nervous control of a defensive behaviour in this insect. This study represents the first examination of natural evolutionary regression in the neural basis of a behaviour along a selection gradient within a single species.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of a factor's influence on an outcome variable, the factor has no influence, or a positive effect, within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. To assess whether there is a threshold effect in the relationship between a factor (x) and an outcome variable (y), a smooth curve can be fitted to check for a piecewise linear relationship; a segmented regression model, a likelihood ratio test (LRT), and bootstrap resampling are then used to analyze the threshold effect. Empower Stats software, developed by X & Y Solutions Inc. (USA), includes a threshold effect analysis module. The user can either input a threshold value for segmented model fitting at that given threshold, or leave the threshold unspecified and let the software determine the optimal threshold automatically from the data and calculate its confidence interval.
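A hedged sketch of that workflow: a segmented fit by grid search, an LRT-type comparison against the plain linear model, and a bootstrap confidence interval for the threshold. This is generic Python, not Empower Stats:

```python
import numpy as np

def fit_segmented(x, y, grid):
    """Return (SSE, threshold) for the best hinge-model fit over the grid."""
    best = (np.inf, None)
    for k in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0)])
        r = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        if r @ r < best[0]:
            best = (r @ r, k)
    return best

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 300)
y = 2 + 0.1 * x + 0.8 * np.maximum(x - 5, 0) + rng.normal(0, 0.5, 300)
grid = np.quantile(x, np.linspace(0.1, 0.9, 60))

sse_lin = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)  # no-threshold model
sse_seg, k_hat = fit_segmented(x, y, grid)
n = x.size
lrt = n * np.log(sse_lin / sse_seg)   # LRT-type statistic; larger favors a threshold

idx = rng.integers(0, n, (500, n))    # bootstrap resamples of the data
boot_k = [fit_segmented(x[i], y[i], grid)[1] for i in idx]
print(k_hat, lrt, np.percentile(boot_k, [2.5, 97.5]))
```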
Major controlling factors and prediction models for arsenic uptake from soil to wheat plants.
Dai, Yunchao; Lv, Jialong; Liu, Ke; Zhao, Xiaoyan; Cao, Yingfei
2016-08-01
The application of current Chinese agricultural soil quality standards fails to evaluate land utilization functions appropriately due to the diversity of soil properties and plant species. Therefore, the standards should be amended. A greenhouse experiment was conducted to investigate arsenic (As) enrichment in various soils from 18 Chinese provinces in parallel with As transfer to 8 wheat varieties. The goal of the study was to build and calibrate soil-wheat threshold models to forecast the As threshold of wheat soils. In Shaanxi soils, Wanmai and Jimai were the most sensitive and insensitive wheat varieties, respectively; and in Jiangxi soils, Zhengmai and Xumai were the most sensitive and insensitive wheat varieties, respectively. Relationships between soil properties and the bioconcentration factor (BCF) were built based on stepwise multiple linear regressions. Soil pH was the best predictor of BCF, and after normalizing the regression equation (log BCF = 0.2054 pH - 3.2055, R2 = 0.8474, n = 14, p < 0.001), we obtained a calibrated model. Using the calibrated model, a continuous soil-wheat threshold equation (HC5 = 10^(-0.2054 pH + 2.9935) + 9.2) was obtained for the species sensitivity distribution curve, which was built on Chinese food safety standards. The threshold equation is a helpful tool that can be applied to estimate As uptake from soil to wheat. Copyright © 2016 Elsevier Inc. All rights reserved.
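Applying the two fitted equations above at example pH values gives a quick worked check (units follow the source model; the soil threshold is presumably in mg/kg):

```python
for ph in (6.0, 8.0):
    bcf = 10 ** (0.2054 * ph - 3.2055)         # soil-to-wheat transfer factor
    hc5 = 10 ** (-0.2054 * ph + 2.9935) + 9.2  # soil As threshold
    print(f"pH {ph}: BCF = {bcf:.4f}, HC5 = {hc5:.1f}")
# pH 6.0: BCF ~ 0.011, HC5 ~ 66.9; pH 8.0: BCF ~ 0.027, HC5 ~ 31.6
# i.e., higher pH means more uptake, so the allowable soil As threshold falls.
```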
Using CART to Identify Thresholds and Hierarchies in the Determinants of Funding Decisions.
Schilling, Chris; Mortimer, Duncan; Dalziel, Kim
2017-02-01
There is much interest in understanding decision-making processes that determine funding outcomes for health interventions. We use classification and regression trees (CART) to identify cost-effectiveness thresholds and hierarchies in the determinants of funding decisions. The hierarchical structure of CART is suited to analyzing complex conditional and nonlinear relationships. Our analysis uncovered hierarchies where interventions were grouped according to their type and objective. Cost-effectiveness thresholds varied markedly depending on which group the intervention belonged to: lifestyle-type interventions with a prevention objective had an incremental cost-effectiveness threshold of $2356, suggesting that such interventions need to be close to cost saving or dominant to be funded. For lifestyle-type interventions with a treatment objective, the threshold was much higher at $37,024. Lower down the tree, intervention attributes such as the level of patient contribution and the eligibility for government reimbursement influenced the likelihood of funding within groups of similar interventions. Comparison between our CART models and previously published results demonstrated concurrence with standard regression techniques while providing additional insights regarding the role of the funding environment and the structure of decision-maker preferences.
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model for detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82·SVP + 48.60·AP - 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: The estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
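A sketch of the quantile-regression step and its use as a per-nodule threshold, on synthetic SVP/AP/APE values (the data-generating choices are illustrative, not the CT measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 1500
df = pd.DataFrame({
    "svp": rng.uniform(0.1, 0.9, n),   # surface voxel proportion
    "ap": rng.uniform(0.0, 0.5, n),    # attachment proportion
})
df["ape"] = rng.gamma(2.0, 5.0 * (df["svp"] + df["ap"]))  # absolute % error

q95 = smf.quantreg("ape ~ svp + ap", df).fit(q=0.95)      # 95th-percentile model
print(q95.params)

# Individualized growth threshold for a new nodule vs. the uniform 25% rule
new = pd.DataFrame({"svp": [0.6], "ap": [0.2]})
print(float(q95.predict(new)), "vs the uniform 25% rule")
```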
Li, Yangfan; Li, Yi; Wu, Wei
2016-01-01
The concept of thresholds has important implications for environmental and resource management. Here we derived potential landscape thresholds that indicate abrupt changes in water quality, or the dividing points between exceeding and failing to meet national surface water quality standards, for a rapidly urbanizing city on the eastern coast of China. The analysis of landscape thresholds was based on regression models linking each of seven water quality variables to each of six landscape metrics for this coupled land-water system. We found substantial and accelerating urban sprawl in the suburban areas between 2000 and 2008, and detected significant nonlinear relations between water quality and landscape pattern. This research demonstrates that a simple modeling technique can provide insights on environmental thresholds to support more informed decision making in land use, water environment, and resilience management. Copyright © 2015 Elsevier Ltd. All rights reserved.
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
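A minimal sketch of a sharp regression discontinuity (RD-ITT) estimate on synthetic data follows; the local linear specification, bandwidth, and threshold are illustrative assumptions, not a prescription from the review.

```python
# Hedged sketch of a sharp RD-ITT estimate: local linear regression with
# separate slopes on each side of the threshold, within a bandwidth.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, cutoff, bw = 2000, 50.0, 10.0
assign = rng.uniform(0, 100, n)                  # continuous assignment variable
treated = (assign >= cutoff).astype(float)       # threshold rule
outcome = 0.02 * assign + 0.5 * treated + rng.normal(0, 1, n)

mask = np.abs(assign - cutoff) <= bw             # local window around the cutoff
Xd = sm.add_constant(np.column_stack([
    treated[mask],
    assign[mask] - cutoff,                       # slope below the threshold
    (assign[mask] - cutoff) * treated[mask],     # slope change above it
]))
fit = sm.OLS(outcome[mask], Xd).fit()
print("RD-ITT estimate at the threshold:", fit.params[1])
```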
Development of post-fire crown damage mortality thresholds in ponderosa pine
James F. Fowler; Carolyn Hull Sieg; Joel McMillin; Kurt K. Allen; Jose F. Negron; Linda L. Wadleigh; John A. Anhold; Ken E. Gibson
2010-01-01
Previous research has shown that crown scorch volume and crown consumption volume are the major predictors of post-fire mortality in ponderosa pine. In this study, we use piecewise logistic regression models of crown scorch data from 6633 trees in five wildfires from the Intermountain West to locate a mortality threshold at 88% scorch by volume for trees with no crown...
Arevalillo, Jorge M; Sztein, Marcelo B; Kotloff, Karen L; Levine, Myron M; Simon, Jakub K
2017-10-01
Immunologic correlates of protection are important in vaccine development because they give insight into mechanisms of protection, assist in the identification of promising vaccine candidates, and serve as endpoints in bridging clinical vaccine studies. Our goal is the development of a methodology to identify immunologic correlates of protection using the Shigella challenge as a model. The proposed methodology utilizes the Random Forests (RF) machine learning algorithm as well as Classification and Regression Trees (CART) to detect immune markers that predict protection, identify interactions between variables, and define optimal cutoffs. Logistic regression modeling is applied to estimate the probability of protection and the confidence interval (CI) for such a probability is computed by bootstrapping the logistic regression models. The results demonstrate that the combination of Classification and Regression Trees and Random Forests complements the standard logistic regression and uncovers subtle immune interactions. Specific levels of immunoglobulin IgG antibody in blood on the day of challenge predicted protection in 75% (95% CI 67-86). Of those subjects that did not have blood IgG at or above a defined threshold, 100% were protected if they had IgA antibody secreting cells above a defined threshold. Comparison with the results obtained by applying only logistic regression modeling with standard Akaike Information Criterion for model selection shows the usefulness of the proposed method. Given the complexity of the immune system, the use of machine learning methods may enhance traditional statistical approaches. When applied together, they offer a novel way to quantify important immune correlates of protection that may help the development of vaccines. Copyright © 2017 Elsevier Inc. All rights reserved.
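A rough sketch of the workflow on synthetic data follows; the marker names, effect sizes, and bootstrap settings are assumptions, and the code uses a Random Forest plus a bootstrapped logistic model rather than the authors' full CART-based procedure.

```python
# Illustrative sketch (synthetic data): rank immune markers with a Random
# Forest, then bootstrap a logistic model for a CI on protection probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
igg = rng.lognormal(3, 1, n)                     # blood IgG on challenge day
iga_asc = rng.lognormal(2, 1, n)                 # IgA antibody-secreting cells
p = 1 / (1 + np.exp(-(0.02 * igg + 0.05 * iga_asc - 3)))
protected = rng.binomial(1, p)
X = np.column_stack([igg, iga_asc])

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, protected)
print("RF importances (IgG, IgA ASC):", rf.feature_importances_)

# Bootstrap the logistic model's predicted protection probability.
x_new = np.array([[np.median(igg), np.median(iga_asc)]])
preds = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    lr = LogisticRegression(max_iter=1000).fit(X[idx], protected[idx])
    preds.append(lr.predict_proba(x_new)[0, 1])
print("Protection probability, 95% CI:", np.percentile(preds, [2.5, 97.5]))
```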
[Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].
Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min
2014-01-01
To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on a national scale, and to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to data from the 2001-2002 national endemic fluorosis survey of key wards. First, fractional polynomials (FP) were used to establish a fixed-effects model and determine the best FP structure; restricted maximum likelihood (REML) was then used to estimate the between-study variance, and the best random-effects model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effects model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature conditions and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; on this basis, the safety threshold of fluoride in drinking water in our country was determined to be 0.8 mg/L.
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given a priori, it is sensible to incorporate it into the estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
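A rough sketch of the idea follows, not the authors' exact estimator: a logistic fit is iteratively reweighted with a Gaussian kernel centered at the classification threshold in fitted-probability space, so observations near the threshold dominate the score equations; the bandwidth, kernel, and data are assumptions.

```python
# Hedged sketch: kernel-weighted logistic regression focused on the
# classification threshold (an approximation of the locally weighted
# score-equation idea, not the paper's estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

threshold, h = 0.3, 0.15                         # assumed cutoff and bandwidth
model = LogisticRegression().fit(X, y)
for _ in range(5):                               # a few reweighting passes
    p_hat = model.predict_proba(X)[:, 1]
    w = np.exp(-0.5 * ((p_hat - threshold) / h) ** 2)  # weight near threshold
    model = LogisticRegression().fit(X, y, sample_weight=w)

print("Locally weighted coefficients:", model.coef_, model.intercept_)
```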
Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick
2017-04-28
Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
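A minimal sketch of the recommended rule follows: compute standardized mean differences in the matched sample and double-adjust only for covariates exceeding the 0.10 threshold; the data are synthetic.

```python
# Sketch of the rule of thumb: after PS matching, enter only covariates
# with standardized mean difference (SMD) > 0.10 into the outcome model.
import numpy as np

def smd(x_treated: np.ndarray, x_control: np.ndarray) -> float:
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return abs(x_treated.mean() - x_control.mean()) / pooled_sd

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 6))                    # matched-sample covariates
treat = rng.integers(0, 2, 400).astype(bool)
X[treat, 0] += 0.3                               # leave covariate 0 imbalanced

unbalanced = [j for j in range(X.shape[1])
              if smd(X[treat, j], X[~treat, j]) > 0.10]
print("Covariates to double-adjust for:", unbalanced)
```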
[Application of artificial neural networks on the prediction of surface ozone concentrations].
Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei
2011-08-01
Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance from meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (multi-layer perceptron) and a multiple linear regression model were used and compared. Model inputs were the meteorological parameters of the next day (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) and the hourly maximum ozone concentration of the previous day. Optimal brain surgeon (OBS) pruning was adopted to simplify the neural network, reducing its complexity and improving its generalization ability. We find that the pruned neural network has the capacity to predict peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m3, an R-square of 0.737 and a success index of threshold exceedance of 77.0% (at a threshold O3 mixing ratio of 0.20 mg/m3). When a neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Comparing the performance indices of the multiple linear regression model and the neural network model, we conclude that the neural network is the better choice for predicting peak ozone from meteorological forecasts, and it may be applied to practical prediction of ozone concentrations.
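An illustrative comparison on synthetic data follows; the input variables mirror the abstract's predictors, but the network architecture and data are assumptions, and the OBS pruning step is omitted.

```python
# Illustrative sketch: multilayer perceptron vs. multiple linear regression
# for next-day peak ozone, on simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 1500
met = rng.normal(size=(n, 6))              # wind, temperature, humidity, etc.
o3_prev = rng.uniform(0.02, 0.25, n)       # previous-day peak ozone (mg/m3)
X = np.column_stack([met, o3_prev])
y = 0.05 + 0.4 * o3_prev + 0.02 * met[:, 2] - 0.01 * met[:, 0] ** 2 \
    + rng.normal(0, 0.02, n)               # mildly nonlinear true response

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(Xtr, ytr)
lin = LinearRegression().fit(Xtr, ytr)
for name, m in [("MLP", mlp), ("Linear", lin)]:
    rmse = mean_squared_error(yte, m.predict(Xte)) ** 0.5
    print(f"{name} RMSE: {rmse:.4f}")
```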
Active Duty - U.S. Army Noise Induced Hearing Injury Surveillance Calendar Years 2009-2013
2014-06-01
rates for sensorineural hearing loss, significant threshold shift, tinnitus, and noise-induced hearing loss. The intention is to monitor the morbidity...surveillance. These code groups include sensorineural hearing loss (SNHL), significant threshold shift (STS), noise-induced hearing loss (NIHL) and tinnitus... Tinnitus) was analyzed using a regression model to determine the trend of incidence rates from 2007 to the current year. Statistical significance of a
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain-gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term: a term is considered significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed from the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion is suitable only for a crude assessment of the significance of a regression model term, because the boundary between a significant and an insignificant term cannot be defined precisely. Therefore, regression model term reduction should only be performed using the more universally applicable statistical criterion.
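A hedged sketch of such a percent-contribution screen follows; the normalization and the coefficients are made-up stand-ins, as the paper's exact computation uses the balance's gage outputs and load capacities.

```python
# Hedged sketch of a percent-contribution screen: treat a term as
# significant if its contribution at capacity exceeds 0.05% of the total.
# Coefficients, term values, and the normalization are assumptions.
import numpy as np

coeffs = np.array([120.0, 3.5, 0.004, -0.02])     # regression coefficients
term_at_capacity = np.array([1.0, 0.8, 0.9, 0.7]) # term values at load capacity
contrib = np.abs(coeffs * term_at_capacity)
percent = 100 * contrib / contrib.sum()

significant = percent > 0.05                      # empirical 0.05% threshold
for i, (p, s) in enumerate(zip(percent, significant)):
    print(f"term {i}: {p:.4f}% -> {'keep' if s else 'drop'}")
```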
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts so that the printed patterns are as close to the desired shapes as possible. Model-based OPC requires a lithographic model that predicts critical dimensions after lithographic processing. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty fitting the model. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.
Threshold Velocity for Saltation Activity in the Taklimakan Desert
NASA Astrophysics Data System (ADS)
Yang, Xinghua; He, Qing; Matimin, Ali; Yang, Fan; Huo, Wen; Liu, Xinchun; Zhao, Tianliang; Shen, Shuanghe
2017-12-01
The threshold velocity is an indicator of a soil's susceptibility to saltation activity and is also an important parameter in dust emission models. In this study, saltation activity, atmospheric conditions, and soil conditions were measured from 1 August 2008 to 31 July 2009 in the Taklimakan Desert, China. The threshold velocity was estimated using the Gaussian time fraction equivalence method. At 2 m height, the 1-min averaged threshold velocity varied between 3.5 and 10.9 m/s, with a mean of 5.9 m/s. Threshold velocities between 4.5 and 7.5 m/s accounted for about 91.4% of all measurements. The average threshold velocity displayed clear seasonal variations in the following sequence: winter (5.1 m/s) < autumn (5.8 m/s) < spring (6.1 m/s) < summer (6.5 m/s). A regression equation for threshold velocity was established based on the relations between daily mean threshold velocity and air temperature, specific humidity, and soil volumetric moisture content. High or moderate positive correlations were found between threshold velocity and air temperature, specific humidity, and soil volumetric moisture content (air temperature r = 0.75; specific humidity r = 0.59; soil volumetric moisture content r = 0.55; sample size = 251). In the study area, the observed horizontal dust flux was 4198.0 kg/m during the whole period of observation, while the horizontal dust flux calculated using the threshold velocity from the regression equation was 4675.6 kg/m. The correlation coefficient between the calculated results and the observations was 0.91. These results indicate that atmospheric and soil conditions should not be neglected in parameterization schemes for threshold velocity.
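A minimal sketch of the regression described in the abstract follows; the data are simulated, so the recovered coefficients are illustrative rather than the paper's.

```python
# Sketch: daily mean threshold velocity regressed on air temperature,
# specific humidity, and soil volumetric moisture (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 251                                        # sample size in the abstract
temp = rng.uniform(-10, 35, n)                 # air temperature (deg C)
q = rng.uniform(1, 12, n)                      # specific humidity (g/kg)
soil_m = rng.uniform(0.01, 0.08, n)            # soil volumetric moisture
ut = 5.9 + 0.03 * temp + 0.08 * q + 8.0 * soil_m + rng.normal(0, 0.4, n)

X = sm.add_constant(np.column_stack([temp, q, soil_m]))
fit = sm.OLS(ut, X).fit()
print(fit.params)                              # intercept and three slopes
print(fit.rsquared)
```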
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship, and any threshold, of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. Thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan (26.2 ppb and 24.2 ppb, respectively). Seven of the 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All 7 cities showed J- or U-shaped associations, suggesting the existence of thresholds. City-specific thresholds ranged from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships, with thresholds, between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
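A minimal sketch of a linear-threshold (hinge) Poisson model follows, with the threshold chosen by profiling the deviance over candidate values; the data are simulated and the covariate adjustments of the actual study are omitted.

```python
# Sketch of a linear-threshold Poisson model: mortality is flat below a
# candidate threshold c and log-linear in O3 above it; c is profiled.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
o3 = rng.uniform(0, 80, n)                         # daily O3 (ppb)
mu = np.exp(3.0 + 0.004 * np.maximum(o3 - 30, 0))  # true threshold at 30 ppb
deaths = rng.poisson(mu)

best = None
for c in range(5, 60):                             # candidate thresholds (ppb)
    X = sm.add_constant(np.maximum(o3 - c, 0))     # hinge term
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    if best is None or fit.deviance < best[1]:
        best = (c, fit.deviance, fit.params[1])
print("Estimated threshold (ppb), slope above it:", best[0], best[2])
```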
Synoptic and meteorological drivers of extreme ozone concentrations over Europe
NASA Astrophysics Data System (ADS)
Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim
2016-04-01
The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances, and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that identifies the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during warmer months.
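Of the three regression techniques named, the quantile regression for the 95th percentile is sketched below on synthetic data; the predictors are illustrative stand-ins for the meteorological drivers.

```python
# Sketch of quantile regression for the 95th percentile of daily maximum
# ozone; predictors and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 3000
df = pd.DataFrame({
    "tmax": rng.normal(22, 6, n),                # daily maximum temperature
    "o3_prev": rng.uniform(20, 90, n),           # ozone persistence term
})
df["o3"] = 10 + 1.5 * df["tmax"] + 0.3 * df["o3_prev"] \
    + rng.normal(0, 8, n) * (1 + 0.03 * df["tmax"])  # heteroscedastic noise

q95 = smf.quantreg("o3 ~ tmax + o3_prev", df).fit(q=0.95)
print(q95.params)   # drivers of the extreme (95th percentile) response
```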
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
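A minimal sketch of the metamodeling step follows: regress a PSA outcome on standardized inputs so the intercept approximates the base case and each coefficient gives the outcome change per one standard deviation of a parameter; the cost-effectiveness model here is a made-up stand-in.

```python
# Sketch of linear regression metamodeling over probabilistic sensitivity
# analysis (PSA) output; the simulated decision model is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 10_000                                       # PSA iterations
params = rng.normal(size=(n, 4))                 # sampled input parameters
net_benefit = 5000 + 800 * params[:, 0] - 300 * params[:, 1] \
    + 50 * params[:, 2] * params[:, 3] + rng.normal(0, 100, n)

Z = StandardScaler().fit_transform(params)       # standardize the inputs
meta = LinearRegression().fit(Z, net_benefit)
print("base case (intercept):", meta.intercept_)
print("1-SD sensitivities:", meta.coef_)         # relative parameter importance
```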
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2005-11-01
A new method is proposed to simultaneously eliminate varying background and noise for multivariate calibration of Fourier transform near-infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum used for fruit sugar content measurement. The performance of wavelet-based threshold de-noising with different combinations of wavelet base functions was compared. Three families of wavelet base functions (Daubechies, Symlets and Coiflets) were tested in a series of experiments to evaluate the performance of the wavelet bases and threshold selection rules. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pears were also developed and resulted in a smaller prediction error than a traditional partial least squares regression (PLSR) model.
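A minimal sketch of wavelet threshold de-noising with a Daubechies-4 base follows; it requires the PyWavelets package, and the synthetic signal and universal soft threshold are assumptions rather than the paper's exact settings.

```python
# Sketch of wavelet soft-threshold de-noising (PyWavelets, db4 base).
import numpy as np
import pywt

rng = np.random.default_rng(10)
x = np.linspace(0, 1, 1024)
clean = np.sin(8 * np.pi * x) + 0.5 * np.sin(20 * np.pi * x)  # stand-in signal
noisy = clean + rng.normal(0, 0.2, x.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((denoised[:clean.size] - clean) ** 2)))
```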
Patient cost-sharing, socioeconomic status, and children's health care utilization.
Nilsson, Anton; Paul, Alexander
2018-05-01
This paper estimates the effect of cost-sharing on the demand for children's and adolescents' use of medical care. We use a large population-wide registry dataset including detailed information on contacts with the health care system as well as family income. Two different estimation strategies are used: regression discontinuity design exploiting age thresholds above which fees are charged, and difference-in-differences models exploiting policy changes. We also estimate combined regression discontinuity difference-in-differences models that take into account discontinuities around age thresholds caused by factors other than cost-sharing. We find that when care is free of charge, individuals increase their number of doctor visits by 5-10%. Effects are similar in middle childhood and adolescence, and are driven by those from low-income families. The differences across income groups cannot be explained by other factors that correlate with income, such as maternal education. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2017-12-01
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used to assess estimates of imperviousness change intensity. The applicability for detecting relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results showed that the Cubist algorithm can be recommended for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of change intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detecting subtle changes of imperviousness within a pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels when the relevant change threshold was set at one or three percent. Even for a five percent change threshold, most algorithms did not ensure that the accuracy of the change map exceeded the accuracy of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
Battaglin, William A.; Ulery, Randy L.; Winterstein, Thomas; Welborn, Toby
2003-01-01
In the State of Texas, surface water (streams, canals, and reservoirs) and ground water are used as sources of public water supply. Surface-water sources of public water supply are susceptible to contamination from point and nonpoint sources. To help protect sources of drinking water and to aid water managers in designing protective yet cost-effective and risk-mitigated monitoring strategies, the Texas Commission on Environmental Quality and the U.S. Geological Survey developed procedures to assess the susceptibility of public water-supply source waters in Texas to the occurrence of 227 contaminants. One component of the assessments is the determination of susceptibility of surface-water sources to nonpoint-source contamination. To accomplish this, water-quality data at 323 monitoring sites were matched with geographic information system-derived watershed- characteristic data for the watersheds upstream from the sites. Logistic regression models then were developed to estimate the probability that a particular contaminant will exceed a threshold concentration specified by the Texas Commission on Environmental Quality. Logistic regression models were developed for 63 of the 227 contaminants. Of the remaining contaminants, 106 were not modeled because monitoring data were available at less than 10 percent of the monitoring sites; 29 were not modeled because there were less than 15 percent detections of the contaminant in the monitoring data; 27 were not modeled because of the lack of any monitoring data; and 2 were not modeled because threshold values were not specified.
NASA Astrophysics Data System (ADS)
Montero, J. T.; Lintz, H. E.; Sharp, D.
2013-12-01
Do emergent properties that result from models of complex systems match emergent properties from real systems? This question targets a type of uncertainty that we argue requires more attention in system modeling and validation efforts. We define an 'emergent property' to be an attribute or behavior of a modeled or real system that can be surprising or unpredictable and that results from complex interactions among the components of a system. For example, thresholds are common across diverse systems and scales and can represent emergent system behavior that is difficult to predict. Thresholds or other types of emergent system behavior can be characterized by their geometry in state space (the space containing the set of all states of a dynamic system). One way to expedite our growing mechanistic understanding of how emergent properties arise from complex systems is to compare the geometry of surfaces in state space between real and modeled systems. Here, we present an index (threshold strength) that quantifies a geometric attribute of a surface in state space. We operationally define threshold strength as how strongly a surface in state space resembles a step or an abrupt transition between two system states. First, we validated the index for application in greater than three dimensions of state space using simulated data. Then, we demonstrated application of the index in measuring geometric state-space uncertainty between a real system and a deterministic, modeled system. In particular, we examined geometric state-space uncertainty between 20th-century climate behavior and modeled climate behavior simulated by global climate models (GCMs) in the Coupled Model Intercomparison Project phase 5 (CMIP5). Surfaces from the climate models came from running the models over the same domain as the real data. We also created response surfaces from real climate data using an empirical model that produces a geometric surface of predicted values in state space. We used a kernel regression method designed to capture the geometry of real data patterns without imposing shape assumptions a priori on the data; this method is known as non-parametric multiplicative regression (NPMR). We found that quantifying and comparing a geometric attribute in more than three dimensions of state space can discern whether the emergent nature of complex interactions in modeled systems matches that of real systems. Further, this method has potentially wider application in contexts where searching for abrupt change or 'action' in any hyperspace is desired.
A Predictive Model for Microbial Counts on Beaches where Intertidal Sand is the Primary Source
Feng, Zhixuan; Reniers, Ad; Haus, Brian K.; Solo-Gabriele, Helena M.; Wang, John D.; Fleming, Lora E.
2015-01-01
Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data and to a traditional regression model. Both the balance model and the regression equation predicted approximately 70% of the advisories correctly at knee depth and over 90% at waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions. PMID:25840869
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
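A minimal sketch of the PRESS-based search metric follows: for a linear model, leave-one-out (PRESS) residuals follow from the hat matrix, so the metric can be computed without refitting the model n times; the candidate models compared are illustrative.

```python
# Sketch of the PRESS search metric: PRESS residuals via the hat matrix.
import numpy as np

def press_residual_std(X: np.ndarray, y: np.ndarray) -> float:
    """Standard deviation of PRESS residuals for the model y ~ X."""
    Xc = np.column_stack([np.ones(len(y)), X])       # add intercept
    H = Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T        # hat matrix
    resid = y - H @ y
    press = resid / (1.0 - np.diag(H))               # leave-one-out residuals
    return press.std(ddof=1)

rng = np.random.default_rng(11)
X = rng.normal(size=(100, 3))
y = 2 + X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 0.1, 100)
# Compare a candidate model with and without the useless third regressor;
# the smaller metric identifies the preferred model.
print(press_residual_std(X[:, :2], y), press_residual_std(X, y))
```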
A statistical model to predict one-year risk of death in patients with cystic fibrosis.
Aaron, Shawn D; Stephenson, Anne L; Cameron, Donald W; Whitmore, George A
2015-11-01
We constructed a statistical model to assess the risk of death for cystic fibrosis (CF) patients between scheduled annual clinical visits. Our model includes a CF health index that shows the influence of risk factors on CF chronic health and on the severity and frequency of CF exacerbations. Our study used Canadian CF registry data for 3,794 CF patients born after 1970. Data up to 2010 were analyzed, yielding 44,390 annual visit records. Our stochastic process model postulates that CF health between annual clinical visits is a superposition of chronic disease progression and an exacerbation shock stream. Death occurs when an exacerbation carries CF health across a critical threshold. The data constitute censored survival data, and hence, threshold regression was used to connect CF death to study covariates. Maximum likelihood estimates were used to determine which clinical covariates were included within the regression functions for both CF chronic health and CF exacerbations. Lung function, Pseudomonas aeruginosa infection, CF-related diabetes, weight deficiency, pancreatic insufficiency, and the deltaF508 homozygous mutation were significantly associated with CF chronic health status. Lung function, age, gender, age at CF diagnosis, P aeruginosa infection, body mass index <18.5, number of previous hospitalizations for CF exacerbations in the preceding year, and decline in forced expiratory volume in 1 second in the preceding year were significantly associated with CF exacerbations. When combined in one summative model, the regression functions for CF chronic health and CF exacerbation risk provided a simple clinical scoring tool for assessing 1-year risk of death for an individual CF patient. Goodness-of-fit tests of the model showed very encouraging results. We confirmed predictive validity of the model by comparing actual and estimated deaths in repeated hold-out samples from the data set and showed excellent agreement between estimated and actual mortality. Our threshold regression model incorporates a composite CF chronic health status index and an exacerbation risk index to produce an accurate clinical scoring tool for prediction of 1-year survival of CF patients. Our tool can be used by clinicians to decide on optimal timing for lung transplant referral. Copyright © 2015 Elsevier Inc. All rights reserved.
Novel Analog For Muscle Deconditioning
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob
2011-01-01
Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and postflight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (the most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength shows promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.
Chiang, H; Chang, K-C; Kan, H-W; Wu, S-W; Tseng, M-T; Hsueh, H-W; Lin, Y-H; Chao, C-C; Hsieh, S-T
2018-07-01
The study aimed to investigate the physiology, psychophysics and pathology of reversible nociceptive nerve degeneration and their relationships, as well as the physiology of acute hyperalgesia. We enrolled 15 normal subjects to investigate intraepidermal nerve fibre (IENF) density, contact heat-evoked potentials (CHEP) and thermal thresholds during capsaicin-induced skin nerve degeneration and regeneration, and CHEP and thermal thresholds during capsaicin-induced acute hyperalgesia. After 2 weeks of capsaicin treatment, the IENF density of the skin was markedly reduced, with reduced amplitude and prolonged latency of CHEP and increased warm and heat pain thresholds. The time courses of skin nerve regeneration and the reversal of physiology and psychophysics differed: IENF density at 10 weeks after capsaicin treatment was still lower than at baseline, whereas CHEP amplitude and warm threshold normalized within 3 weeks after capsaicin treatment. Although CHEP amplitude and IENF density were best correlated in a multiple linear regression model, a one-phase exponential association model fit better than a simple linear one; that is, in the regeneration phase, the slope of the regression line between CHEP amplitude and IENF density was steeper in the subgroup with lower IENF densities than in the one with higher IENF densities. During capsaicin-induced hyperalgesia, the recordable rate of CHEP to 43 °C heat stimulation was higher, with enhanced CHEP amplitude and pain perception compared to baseline. There was differential restoration of IENF density, CHEP and thermal thresholds, and the CHEP-IENF relationship changed during skin reinnervation. CHEP can be a physiological signature of acute hyperalgesia. These observations suggest that the relationship between nociceptive nerve terminals and brain responses to thermal stimuli changes with the degree of skin denervation, and that CHEP to a low-intensity heat stimulus can reflect the physiology of hyperalgesia. © 2018 European Pain Federation - EFIC®.
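A minimal sketch of fitting the one-phase exponential association follows; the functional form matches the description above, while the data and starting values are illustrative.

```python
# Sketch: one-phase exponential association between CHEP amplitude and
# IENF density, amplitude = a_max * (1 - exp(-k * density)); data simulated.
import numpy as np
from scipy.optimize import curve_fit

def one_phase(density, a_max, k):
    return a_max * (1.0 - np.exp(-k * density))

rng = np.random.default_rng(12)
density = rng.uniform(0.5, 12, 60)                 # IENF fibres/mm (assumed)
amplitude = one_phase(density, 35.0, 0.4) + rng.normal(0, 2, 60)

params, _ = curve_fit(one_phase, density, amplitude, p0=[30.0, 0.3])
print("a_max, k:", params)
# The initial slope (a_max * k) is steepest at low densities, matching the
# abstract's steeper regression in the low-IENF subgroup.
```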
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a-TP and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to the limitations of the statistical methods, biased values, underestimated error, and inflated the strength of the TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in the linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. The study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
Landslide susceptibility and early warning model for shallow landslide in Taiwan
NASA Astrophysics Data System (ADS)
Huang, Chun-Ming; Wei, Lun-Wei; Chi, Chun-Chi; Chang, Kan-Tsun; Lee, Chyi-Tyi
2017-04-01
This study aims to develop a regional susceptibility model and warning thresholds, and to establish an early warning system, in order to prevent and reduce the losses caused by rainfall-induced shallow landslides in Taiwan. For practical application, Taiwan is divided into nearly 185,000 slope units. The susceptibility and warning threshold of each slope unit were analyzed as basic information for disaster prevention. The geological characteristics, mechanisms and occurrence times of landslides were recorded for more than 900 cases through field investigation and interviews of residents in order to examine the relationship between landslides and rainfall. Logistic regression analysis was performed to evaluate landslide susceptibility, and an I3-R24 rainfall threshold model was proposed for the early warning of landslides. Validation against recent landslide cases shows that the model is suitable for warning of regional shallow landslides, and most of the cases could be warned 3 to 6 hours in advance. We also propose a slope-unit area-weighted method to establish local rainfall thresholds for landslides in vulnerable villages in order to improve practical application. Validation of the local rainfall thresholds also shows good agreement with the occurrence times reported by newspapers. Finally, a web-based "Rainfall-induced Landslide Early Warning System" was built and connected to real-time radar rainfall data so that real-time landslide warning can be achieved. Keywords: landslide, susceptibility analysis, rainfall threshold
Prediction of insufficient serum vitamin D status in older women: a validated model.
Merlijn, T; Swart, K M A; Lips, P; Heymans, M W; Sohl, E; Van Schoor, N M; Netelenbos, C J; Elders, P J M
2018-05-28
We developed an externally validated simple prediction model to predict serum 25(OH)D levels < 30, < 40, < 50 and < 60 nmol/L in older women with risk factors for fractures. The benefit of the model decreases when a higher 25(OH)D threshold is chosen. Vitamin D deficiency is associated with increased fracture risk in older persons. General supplementation of all older women with vitamin D could cause medicalization and costs. We developed a clinical model to identify insufficient serum 25-hydroxyvitamin D (25(OH)D) status in older women at risk for fractures. In a sample of 2689 women ≥ 65 years selected from general practices, with at least one risk factor for fractures, a questionnaire was administered and serum 25(OH)D was measured. Multivariable logistic regression models with backward selection were developed to select predictors of insufficient serum 25(OH)D status, using separate thresholds of 30, 40, 50 and 60 nmol/L. Internal and external model validations were performed. Predictors in the models were: age, BMI, vitamin D supplementation, multivitamin supplementation, calcium supplementation, daily use of margarine, fatty fish ≥ 2×/week, ≥ 1 hour/day outdoors in summer, season of blood sampling, the use of a walking aid and smoking. The AUC was 0.77 for the model using the 30 nmol/L threshold and decreased in the models with higher thresholds, to 0.72 for 60 nmol/L. We demonstrate that the model can help to distinguish patients with or without insufficient serum 25(OH)D levels at thresholds of 30 and 40 nmol/L, but not when a threshold of 50 nmol/L is required. This externally validated model can predict the presence of vitamin D insufficiency in women at risk for fractures. The potential clinical benefit of this tool is highly dependent on the chosen 25(OH)D threshold and decreases when a higher threshold is used.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
Harris, L K; Whay, H R; Murrell, J C
2018-04-01
This study investigated the effects of osteoarthritis (OA) on somatosensory processing in dogs using mechanical threshold testing. A pressure algometer was used to measure mechanical thresholds in 27 dogs with presumed hind limb osteoarthritis and 28 healthy dogs. Mechanical thresholds were measured at the stifles, radii and sternum, and were correlated with scores from an owner questionnaire and a clinical checklist, a scoring system that quantified clinical signs of osteoarthritis. The effects of age and bodyweight on mechanical thresholds were also investigated. Multiple regression models indicated that, when bodyweight was taken into account, dogs with presumed osteoarthritis had lower mechanical thresholds at the stifles than control dogs, but not at other sites. Non-parametric correlations showed that clinical checklist scores and questionnaire scores were negatively correlated with mechanical thresholds at the stifles. The results suggest that mechanical threshold testing using a pressure algometer can detect primary, and possibly secondary, hyperalgesia in dogs with presumed osteoarthritis. This suggests that the mechanical threshold testing protocol used in this study might facilitate assessment of somatosensory changes associated with disease progression or response to treatment. Copyright © 2017. Published by Elsevier Ltd.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
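A minimal sketch of this type of simulation follows: generate a truly linear response, fit a two-segment piecewise model, and count how often the break looks convincing; the piecewise fit and the evidence ratio are generic stand-ins for PQR/NCPA/SiZer.

```python
# Sketch: false detection of a threshold when the true response is linear.
import numpy as np

rng = np.random.default_rng(13)

def spurious_break(n=50, reps=500, gain=2.0):
    hits = 0
    for _ in range(reps):
        x = np.sort(rng.uniform(0, 1, n))
        y = 2.0 * x + rng.normal(0, 0.5, n)          # truly linear response
        sse_lin = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)
        # Best two-segment fit over all candidate break points.
        best = min(
            np.sum((y[:k] - np.poly1d(np.polyfit(x[:k], y[:k], 1))(x[:k])) ** 2)
            + np.sum((y[k:] - np.poly1d(np.polyfit(x[k:], y[k:], 1))(x[k:])) ** 2)
            for k in range(5, n - 5)
        )
        hits += sse_lin / best > gain                # crude evidence of a break
    return hits / reps

print("spurious threshold rate:", spurious_break())
```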
glmnetLRC f/k/a lrc package: Logistic Regression Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-09
Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds model fitting features to the existing glmnet and bestglm R packages. It was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.
Rosen, Sophia; Davidov, Ori
2012-07-20
Multivariate outcomes are often measured longitudinally. For example, in hearing loss studies, hearing thresholds for each subject are measured repeatedly over time at several frequencies. Thus, each patient is associated with a multivariate longitudinal outcome. The multivariate mixed-effects model is a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, it is known that hearing thresholds, at every frequency, increase with age. Moreover, this age-related threshold elevation is monotone in frequency, that is, the higher the frequency, the higher, on average, is the rate of threshold elevation. This means that there is a natural ordering among the different frequencies in the rate of hearing loss. In practice, this amounts to imposing a set of constraints on the different frequencies' regression coefficients modeling the mean effect of time and age at entry to the study on hearing thresholds. The aforementioned constraints should be accounted for in the analysis. The result is a multivariate longitudinal model with restricted parameters. We propose estimation and testing procedures for such models. We show that ignoring the constraints may lead to misleading inferences regarding the direction and the magnitude of various effects. Moreover, simulations show that incorporating the constraints substantially improves the mean squared error of the estimates and the power of the tests. We used this methodology to analyze a real hearing loss study. Copyright © 2012 John Wiley & Sons, Ltd.
Regional regression of flood characteristics employing historical information
Tasker, Gary D.; Stedinger, J.R.
1987-01-01
Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100 yr peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. © 1987.
Geodesic least squares regression for scaling studies in magnetic confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert
In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high-confinement regime in magnetic confinement fusion devices.
Warner, Kelly L.; Arnold, Terri L.
2010-01-01
Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.
Stages of Change for Fruit and Vegetable Consumption in Deprived Neighborhoods
ERIC Educational Resources Information Center
Kloek, Gitte C.; van Lenthe, Frank J.; van Nierop, Peter W. M.; Mackenbach, Johan P.
2004-01-01
This article describes the association of external and psychosocial factors with the stages of change for fruit and vegetable consumption among 2,781 inhabitants, aged 18 to 65 years, of deprived neighborhoods (response rate 60%). To identify correlates of forward stage transition, an ordinal logistic regression model, the Threshold of Change Model…
Bécares, Laia; Nazroo, James; Jackson, James
2014-12-01
We examined the association between Black ethnic density and depressive symptoms among African Americans. We sought to ascertain whether a threshold exists in the association between Black ethnic density and an important mental health outcome, and to identify differential effects of this association across social, economic, and demographic subpopulations. We analyzed the African American sample (n = 3570) from the National Survey of American Life, which we geocoded to the 2000 US Census. We determined the threshold with a multivariable regression spline model. We examined differential effects of ethnic density with random-effects multilevel linear regressions stratified by sociodemographic characteristics. The protective association between Black ethnic density and depressive symptoms changed direction, becoming a detrimental effect, when ethnic density reached 85%. Black ethnic density was protective for lower socioeconomic positions and detrimental for the better-off categories. The masking effects of area deprivation were stronger in the highest levels of Black ethnic density. Addressing racism, racial discrimination, economic deprivation, and poor services-the main drivers differentiating ethnic density from residential segregation-will help to ensure that the racial/ethnic composition of a neighborhood is not a risk factor for poor mental health.
Nazroo, James; Jackson, James
2014-01-01
Objectives. We examined the association between Black ethnic density and depressive symptoms among African Americans. We sought to ascertain whether a threshold exists in the association between Black ethnic density and an important mental health outcome, and to identify differential effects of this association across social, economic, and demographic subpopulations. Methods. We analyzed the African American sample (n = 3570) from the National Survey of American Life, which we geocoded to the 2000 US Census. We determined the threshold with a multivariable regression spline model. We examined differential effects of ethnic density with random-effects multilevel linear regressions stratified by sociodemographic characteristics. Results. The protective association between Black ethnic density and depressive symptoms changed direction, becoming a detrimental effect, when ethnic density reached 85%. Black ethnic density was protective for lower socioeconomic positions and detrimental for the better-off categories. The masking effects of area deprivation were stronger in the highest levels of Black ethnic density. Conclusions. Addressing racism, racial discrimination, economic deprivation, and poor services—the main drivers differentiating ethnic density from residential segregation—will help to ensure that the racial/ethnic composition of a neighborhood is not a risk factor for poor mental health. PMID:25322307
NASA Astrophysics Data System (ADS)
Wu, W.; Chen, G. Y.; Kang, R.; Xia, J. C.; Huang, Y. P.; Chen, K. J.
2017-07-01
During slaughtering and further processing, chicken carcasses are inevitably contaminated by microbial pathogens. Due to food safety concerns, many countries implement a zero-tolerance policy that forbids the placement of visibly contaminated carcasses in ice-water chiller tanks during processing. Manual detection of contaminants is labor-intensive and imprecise. Here, a successive projections algorithm (SPA)-multivariable linear regression (MLR) classifier based on an optimal performance threshold was developed for automatic detection of contaminants on chicken carcasses. Hyperspectral images were obtained using a hyperspectral imaging system. A regression model of the classifier was established by MLR based on twelve characteristic wavelengths (505, 537, 561, 562, 564, 575, 604, 627, 656, 665, 670, and 689 nm) selected by SPA, and the optimal threshold T = 1 was obtained from the receiver operating characteristic (ROC) analysis. The SPA-MLR classifier provided the best detection results when compared with the SPA-partial least squares (PLS) regression classifier and the SPA-least squares support vector machine (LS-SVM) classifier. The true positive rate (TPR) of 100% and the false positive rate (FPR) of 0.392% indicate that the SPA-MLR classifier can utilize spatial and spectral information to effectively detect contaminants on chicken carcasses.
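The thresholded-regression classification step can be illustrated with a minimal sketch on synthetic reflectance data; the SPA wavelength-selection step is omitted, and all names and values here are hypothetical:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n, n_bands = 400, 12                       # 12 characteristic wavelengths
y = rng.integers(0, 2, n)                  # 1 = contaminated pixel, 0 = clean
X = rng.normal(0, 1, (n, n_bands)) + 0.8 * y[:, None]

reg = LinearRegression().fit(X, y)         # MLR model used as a scoring function
scores = reg.predict(X)
fpr, tpr, thr = roc_curve(y, scores)
t_opt = thr[np.argmax(tpr - fpr)]          # ROC-optimal (Youden) threshold
labels = (scores >= t_opt).astype(int)     # final contaminant classification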
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lie within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedance probabilities helped determine a threshold probability for each model, chosen such that the number of correctly predicted exceedances and nonexceedances was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedance probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard, and a beach advisory or closing may need to be issued; computed exceedance probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
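The threshold-probability selection described at the end of the abstract amounts to a one-dimensional search; a minimal sketch on synthetic data (not the study's observations):

import numpy as np

rng = np.random.default_rng(2)
p_pred = rng.random(300)                    # model-predicted exceedance probabilities
exceeded = rng.random(300) < p_pred         # observed standard exceedances

candidates = np.linspace(0.05, 0.95, 19)
accuracy = [((p_pred >= t) == exceeded).mean() for t in candidates]
t_star = candidates[int(np.argmax(accuracy))]   # advisory-posting threshold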
Yeh, Chun-Yuan; Schafferer, Christian; Lee, Jie-Min; Ho, Li-Ming; Hsieh, Chi-Jung
2017-09-21
European Union public healthcare expenditure on treating smoking and attributable diseases is estimated at over €25bn annually. The reduction of tobacco consumption has thus become one of the major social policies of the EU. This study investigates the effects of price hikes on cigarette consumption, tobacco tax revenues and smoking-caused deaths in 28 EU countries. Employing panel data for the years 2005 to 2014 from Euromonitor International, the World Bank and the World Health Organization, we used income as a threshold variable and applied threshold regression modelling to estimate the elasticity of cigarette prices and to simulate the effect of price fluctuations. The results showed that there was an income threshold effect on cigarette prices in the 28 EU countries that had a gross national income (GNI) per capita lower than US$5418, with a maximum cigarette price elasticity of -1.227. The results of the simulated analysis showed that a rise of 10% in cigarette price would significantly reduce cigarette consumption as well as the total death toll caused by smoking in all the observed countries, but would be most effective in Bulgaria and Romania, followed by Latvia and Poland. Additionally, an increase in the number of MPOWER tobacco control policies at the highest level of achievement would help reduce cigarette consumption. It is recommended that all EU countries levy higher tobacco taxes to increase cigarette prices, and thus in effect reduce cigarette consumption. The subsequent increase in tobacco tax revenues would be instrumental in covering expenditures related to tobacco prevention and control programs.
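A single-threshold regression of this kind can be sketched as a grid search over candidate income thresholds, keeping the split that minimizes the pooled residual sum of squares of regime-specific price regressions. The data, functional form, and values below are synthetic, not the study's:

import numpy as np

rng = np.random.default_rng(3)
n = 600
gni = rng.uniform(2000, 40000, n)            # GNI per capita, US$
log_p = rng.normal(1.0, 0.3, n)              # log cigarette price
beta = np.where(gni < 5418, -1.2, -0.4)      # regime-specific elasticities
log_q = 5 + beta * log_p + rng.normal(0, 0.2, n)

def ssr_at(tau):
    ssr = 0.0
    for mask in (gni < tau, gni >= tau):     # fit each income regime separately
        A = np.column_stack([np.ones(mask.sum()), log_p[mask]])
        coef, *_ = np.linalg.lstsq(A, log_q[mask], rcond=None)
        resid = log_q[mask] - A @ coef
        ssr += resid @ resid
    return ssr

grid = np.quantile(gni, np.linspace(0.1, 0.9, 81))
tau_hat = grid[int(np.argmin([ssr_at(t) for t in grid]))]  # estimated threshold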
NASA Astrophysics Data System (ADS)
Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.
2017-06-01
In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.
Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C
2018-04-01
Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB, Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Neither chemical marker presented a linear relationship with FIB, due to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between the two classes of indicators. Logistic regression was performed with all data and separately for two sampling seasons, using 800 and 100 MPN 100 mL⁻¹ of E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g⁻¹ SPM. The range of threshold values of coprostanol for SPM is relatively higher and more variable than those suggested in the literature for sediments (0.10-0.50 μg g⁻¹), probably due to the higher concentration of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to tropical areas but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
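Deriving a marker threshold from a logistic fit can be sketched as follows, on synthetic data: the fitted probability of exceeding the microbiological limit crosses 0.5 where the marker concentration equals -b0/b1.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
cop = rng.uniform(0.1, 4.0, 250)                 # coprostanol in SPM, ug/g
p_true = 1 / (1 + np.exp(-3.0 * (cop - 1.5)))
contaminated = rng.random(250) < p_true          # FIB above the chosen limit

clf = LogisticRegression(max_iter=1000).fit(cop.reshape(-1, 1), contaminated)
b0, b1 = clf.intercept_[0], clf.coef_[0, 0]
threshold = -b0 / b1                             # marker value where P = 0.5

Refitting by season or with a different FIB limit shifts b0 and b1 and hence the derived threshold, which is the calibration effect the authors describe.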
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background MLPA is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, and the method showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
Venta, Kimberly; Baker, Erin; Fidopiastis, Cali; Stanney, Kay
2017-12-01
The purpose of this study was to investigate the potential of developing an EHR-based model of physician competency, named the Skill Deficiency Evaluation Toolkit for Eliminating Competency-loss Trends (Skill-DETECT), which presents the opportunity to use EHR-based models to inform the selection of Continued Medical Education (CME) opportunities specifically targeted at maintaining proficiency. The IBM Explorys platform provided outpatient Electronic Health Records (EHRs) representing 76 physicians with over 5000 patients combined. These data were used to develop the Skill-DETECT model, a predictive hybrid model composed of a rule-based model, a logistic regression model, and a thresholding model, which predicts cognitive clinical skill deficiencies in internal medicine physicians. A three-phase approach was then used to statistically validate the model performance. Subject Matter Expert (SME) panel reviews resulted in a 100% overall approval rate of the rule-based model. The area under the receiver operating characteristic curve calculated for each logistic regression model ranged from 0.76 to 0.92, indicating exceptional performance. Normality, skewness, and kurtosis were determined and confirmed that the distribution of values output from the thresholding model was unimodal and peaked, which confirmed effectiveness and generalizability. The validation confirmed that the Skill-DETECT model has a strong ability to evaluate EHR data and support the identification of internal medicine cognitive clinical skills that are deficient or at higher likelihood of becoming deficient and thus require remediation, which will allow both physicians and medical organizations to fine-tune training efforts. Copyright © 2017 Elsevier B.V. All rights reserved.
Distributed Monitoring of the R(sup 2) Statistic for Linear Regression
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.
2011-01-01
The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R2 statistic). When the nodes collectively determine that R2 has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
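Stripped of the distributed protocol, the monitoring idea reduces to tracking R2 of a fixed model on incoming data; a minimal single-node sketch (all values synthetic, and the recompute here is a plain least-squares refit rather than the paper's convergecast):

import numpy as np

def r2(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(0, 0.1, 1000)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)        # current global model

# Later the input-output relationship drifts and model quality degrades:
y_new = X @ (w_true + 0.8) + rng.normal(0, 0.1, 1000)
if r2(y_new, X @ w_hat) < 0.9:                       # fixed quality threshold
    w_hat, *_ = np.linalg.lstsq(X, y_new, rcond=None)  # trigger model recomputation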
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
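The two verification scores are simple functions of the 2x2 contingency table (a = hits, b = false alarms, c = misses, d = correct negatives); the counts below are illustrative, not the study's:

def percent_correct(a, b, c, d):
    return (a + d) / (a + b + c + d)

def hanssen_kuipers(a, b, c, d):
    return a / (a + c) - b / (b + d)   # hit rate minus false-alarm rate

a, b, c, d = 420, 55, 60, 465          # counts after dichotomizing probabilities
print(percent_correct(a, b, c, d))     # PC
print(hanssen_kuipers(a, b, c, d))     # HKD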
NASA Astrophysics Data System (ADS)
Juranek, L. W.; Feely, R. A.; Peterson, W. T.; Alin, S. R.; Hales, B.; Lee, K.; Sabine, C. L.; Peterson, J.
2009-12-01
We developed a multiple linear regression model to robustly determine aragonite saturation state (Ωarag) from observations of temperature and oxygen (R2 = 0.987, RMS error 0.053), using data collected in the Pacific Northwest region in late May 2007. The seasonal evolution of Ωarag near central Oregon was evaluated by applying the regression model to a monthly (winter)/bi-weekly (summer) water-column hydrographic time-series collected over the shelf and slope in 2007. The Ωarag predicted by the regression model was less than 1, the thermodynamic calcification/dissolution threshold, over shelf/slope bottom waters throughout the entire 2007 upwelling season (May-November), with the Ωarag = 1 horizon shoaling to 30 m by late summer. The persistence of water with Ωarag < 1 on the continental shelf has not been previously noted and could have notable ecological consequences for benthic and pelagic calcifying organisms such as mussels, oysters, abalone, echinoderms, and pteropods.
Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W
2016-02-15
Evaluating the relationship between step-rate and rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. To examine whether the VO2 to step-rate relationship during over-ground walking differs across varying disability levels among patients with MS and to develop step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at comfortable speed, one at 0.22 m·s⁻¹ slower, and one at 0.22 m·s⁻¹ faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps with hand-tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step-rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for development of step-rate thresholds that vary only by disability status. The VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study intended to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a subthreshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared with the use of CR. Key Points: It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD; however, data on the precision of those AOD measurements are rarely provided. We evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of subthreshold regressions may induce an underestimation of AOD compared with the use of complete regressions. PMID:24501560
Boys, C A; Robinson, W; Miller, B; Pflugrath, B; Baumgartner, L J; Navarro, A; Brown, R; Deng, Z
2016-05-01
A piecewise regression approach was used to objectively quantify barotrauma injury thresholds in two physoclistous species, Murray cod Maccullochella peelii and silver perch Bidyanus bidyanus, following simulated infrastructure passage in a barometric chamber. The probability of injuries such as swimbladder rupture, exophthalmia, and haemorrhage and emphysema in various organs increased as the ratio between the lowest exposure pressure and the acclimation pressure (ratio of pressure change, R(NE:A)) decreased. The relationship was typically non-linear, and piecewise regression was able to quantify thresholds in R(NE:A) that, once exceeded, resulted in a substantial increase in barotrauma injury. Thresholds differed among injury types and between species, but by applying a multispecies precautionary principle, the maintenance of exposure pressures at river infrastructure above 70% of acclimation pressure (R(NE:A) of 0.7) should sufficiently protect downstream migrating juveniles of these two physoclistous species. These findings have important implications for determining the risk posed by current infrastructures and informing the design and operation of new ones. © 2016 The Fisheries Society of the British Isles.
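A broken-stick threshold of this kind can be recovered by nonlinear least squares; a minimal sketch on synthetic data, where the breakpoint, slopes, and noise level are all illustrative:

import numpy as np
from scipy.optimize import curve_fit

def broken_stick(x, x0, y0, slope):
    # flat at y0 above the breakpoint x0, rising injury probability below it
    return np.where(x >= x0, y0, y0 + slope * (x - x0))

rng = np.random.default_rng(6)
ratio = rng.uniform(0.2, 1.0, 200)           # R(NE:A), exposure/acclimation
injury = broken_stick(ratio, 0.7, 0.05, -1.5) + rng.normal(0, 0.05, 200)

popt, _ = curve_fit(broken_stick, ratio, injury, p0=[0.6, 0.1, -1.0])
x0_hat = popt[0]                             # estimated injury threshold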
Hitt, Nathaniel P.; Floyd, Michael; Compton, Michael; McDonald, Kenneth
2016-01-01
Chrosomus cumberlandensis (Blackside Dace [BSD]) and Etheostoma spilotum (Kentucky Arrow Darter [KAD]) are fish species of conservation concern due to their fragmented distributions, their low population sizes, and threats from anthropogenic stressors in the southeastern United States. We evaluated the relationship between fish abundance and stream conductivity, an index of environmental quality and potential physiological stressor. We modeled occurrence and abundance of KAD in the upper Kentucky River basin (208 samples) and BSD in the upper Cumberland River basin (294 samples) for sites sampled between 2003 and 2013. Segmented regression indicated a conductivity change-point for BSD abundance at 343 μS/cm (95% CI: 123–563 μS/cm) and for KAD abundance at 261 μS/cm (95% CI: 151–370 μS/cm). In both cases, abundances were negligible above estimated conductivity change-points. Post-hoc randomizations accounted for variance in estimated change points due to unequal sample sizes across the conductivity gradients. Boosted regression-tree analysis indicated stronger effects of conductivity than other natural and anthropogenic factors known to influence stream fishes. Boosted regression trees further indicated threshold responses of BSD and KAD occurrence to conductivity gradients in support of segmented regression results. We suggest that the observed conductivity relationship may indicate energetic limitations for insectivorous fishes due to changes in benthic macroinvertebrate community composition.
Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross
2012-01-01
Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
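A linear-threshold ("hockey-stick") exposure term in a Poisson regression can be sketched as below, with a candidate threshold fixed at 65 µg/m3. The data are synthetic; a real analysis would also control for seasonality, temperature, and influenza, and would profile the threshold over candidate values:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
ozone = rng.uniform(10, 150, 1000)            # daily max 8-hr ozone, ug/m3
excess = np.maximum(ozone - 65.0, 0.0)        # zero effect below the threshold
deaths = rng.poisson(np.exp(4.0 + 0.0005 * excess))

fit = sm.GLM(deaths, sm.add_constant(excess),
             family=sm.families.Poisson()).fit()
pct_per_10ug = 100 * (np.exp(10 * fit.params[1]) - 1)  # % excess mortality per 10 ug/m3 above threshold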
Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.
2013-01-01
The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae), is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35°C) and to estimate temperature thresholds and thermal constants for forecasting models based on heat accumulation units. The developmental periods of different stages of the rice leaf folder were reduced with increases in temperature from 18 to 34°C. The lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1°C, and thermal constants of 69, 270, 106, and 455 degree-days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic non-linear SSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1, and 23.7°C, respectively. The upper and lower threshold temperatures were estimated as 36.4°C and 11.2°C for total development, indicating that the enzyme was half active and half inactive at these temperatures. These estimated thermal thresholds and degree-days could be used to predict leaf folder activity in the field for effective management. PMID:24205891
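The linear degree-day estimates can be reproduced mechanically: regressing development rate (1/days) on temperature gives rate = a + bT, whose x-intercept -a/b is the lower threshold temperature and whose reciprocal slope 1/b is the thermal constant. A minimal sketch with synthetic rearing data (not the paper's values):

import numpy as np

temps = np.array([18, 20, 25, 30, 32, 34.0])   # rearing temperatures, deg C
days = np.array([65, 47, 32, 24, 22, 20.0])    # days to complete development
rate = 1.0 / days                               # development rate, 1/days

b, a = np.polyfit(temps, rate, 1)               # rate = a + b * T
t_lower = -a / b                                # lower threshold temperature
K = 1.0 / b                                     # thermal constant, degree-days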
Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly
2016-12-01
To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from HR at the self-perceived dyspnoea threshold, age, and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducible in terms of HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age, and forced expiratory volume in 1 s (adjusted r=0.79, r=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
A method for managing re-identification risk from small geographic areas in Canada
2010-01-01
Background A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of unique individuals on the relevant variables (the quasi-identifiers) approaches zero. However, a uniqueness value of zero is quite a stringent threshold, and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods We estimated uniqueness for urban Forward Sortation Areas (FSAs) by using the 2001 long-form Canadian census data representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models, respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds, considerably fewer records compared to the 0% threshold would be considered to be in small areas and therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the potential invasion of privacy if the data are disclosed, and the motives and capacity of the data recipient to re-identify the data. Conclusion The models we developed can be used to manage the re-identification risk from small geographic areas. Being able to choose among three possible thresholds, a data custodian can adjust the definition of "small geographic area" to the nature of the data and recipient. PMID:20361870
The Management Standards Indicator Tool and evaluation of burnout.
Ravalier, J M; McVicar, A; Munn-Giddings, C
2013-03-01
Psychosocial hazards in the workplace can impact upon employee health. The UK Health and Safety Executive's (HSE) Management Standards Indicator Tool (MSIT) appears to have utility in relation to health impacts, but we were unable to find studies relating it to burnout. To explore the utility of the MSIT in evaluating risk of burnout assessed by the Maslach Burnout Inventory-General Survey (MBI-GS). This was a cross-sectional survey of 128 borough council employees. MSIT data were analysed according to MSIT and MBI-GS threshold scores and by using multivariate linear regression with MBI-GS factors as dependent variables. MSIT factor scores were gradated according to categories of risk of burnout according to published MBI-GS thresholds, and identified priority workplace concerns as demands, relationships, role and change. These factors also featured as significant independent variables, with control, in outcomes of the regression analysis. Exhaustion was associated with demands and control (adjusted R2 = 0.331); cynicism was associated with change, role and demands (adjusted R2 = 0.429); and professional efficacy was associated with managerial support, role, control and demands (adjusted R2 = 0.413). MSIT analysis generally has congruence with MBI-GS assessment of burnout. The identification of control within regression models, but not as a priority concern in the MSIT analysis, could suggest an issue with the setting of the MSIT thresholds for this factor, but verification requires a much larger study. Incorporation of relationship, role and change into the MSIT, missing from other conventional tools, appeared to add to its validity.
Modeling heat stress effect on Holstein cows under hot and dry conditions: selection tools.
Carabaño, M J; Bachagha, K; Ramón, M; Díaz, C
2014-12-01
Data from milk recording of Holstein-Friesian cows, together with weather information from 2 regions in Southern Spain, were used to define the models that best describe the heat stress response for production traits and somatic cell score (SCS). Two sets of analyses were performed, one aimed at defining the population phenotypic response and the other at studying the genetic components. The first involved 2,514,762 test-day records from up to 5 lactations of 128,112 cows. Two models, one fitting a comfort threshold for temperature and a slope of decay after the threshold, and the other a cubic Legendre polynomial (LP) model, were tested. Average (TAVE) and maximum daily temperatures were alternatively considered as covariates. The LP model using TAVE as covariate showed the best goodness of fit for all traits. Estimated rates of decay from this model for production at 25 and 34°C were 36 and 170, 3.8 and 3.0, and 3.9 and 8.2 g/d per degree Celsius for milk, fat, and protein yield, respectively. In the second set of analyses, a sample of 280,958 test-day records from first lactations of 29,114 cows was used. Random regression models including quadratic or cubic LP regressions (TEM_) on TAVE, or a fixed threshold and an unknown slope (DUMMY), with or without cubic regressions on days in milk (DIM3_), were tested. For milk and SCS, the best models were the DIM3_ models. In contrast, for fat and protein yield, the best model was TEM3. The DIM3DUMMY models showed similar performance to DIM3TEM3. The estimated genetic correlations between the same trait under cold and hot temperatures (ρ) indicated the existence of a large genotype by environment interaction for fat (ρ=0.53 for model TEM3) and protein yield (ρ around 0.6 for DIM3TEM3) and for SCS (ρ=0.64 for model DIM3TEM3), and a small genotype by environment interaction for milk (ρ over 0.8). The eigendecomposition of the additive genetic covariance matrix from model TEM3 showed the existence of a dominant component, a constant term that is not affected by temperature, representing from 64% of the variation for SCS to 91% of the variation for milk. The second component, showing a flat pattern at intermediate temperatures and increasing or decreasing slopes at the extremes, accounted for 15, 11, and 24% of the variation for fat yield, protein yield, and SCS, respectively. This component could be further evaluated as a selection criterion for heat tolerance independently of the production level. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan
2016-01-01
Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan
2016-10-11
Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.
2013-01-01
Cheney Reservoir, located in south-central Kansas, is one of the primary water supplies for the city of Wichita, Kansas. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station in Cheney Reservoir since 2001; continuously measured physicochemical properties include specific conductance, pH, water temperature, dissolved oxygen, turbidity, fluorescence (wavelength range 650 to 700 nanometers; estimate of total chlorophyll), and reservoir elevation. Discrete water-quality samples were collected during 2001 through 2009 and analyzed for sediment, nutrients, taste-and-odor compounds, cyanotoxins, phytoplankton community composition, actinomycetes bacteria, and other water-quality measures. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physicochemical properties to compute concentrations of constituents that are not easily measured in real time. The water-quality information in this report is important to the city of Wichita because it allows quantification and characterization of potential constituents of concern in Cheney Reservoir. This report updates linear regression models published in 2006 that were based on data collected during 2001 through 2003. The update uses discrete and continuous data collected during May 2001 through December 2009. Updated models to compute dissolved solids, sodium, chloride, and suspended solids were similar to previously published models. However, several other updated models changed substantially from previously published models. In addition to updating relations that were previously developed, models also were developed for four new constituents, including magnesium, dissolved phosphorus, actinomycetes bacteria, and the cyanotoxin microcystin. In addition, a conversion factor of 0.74 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the Cheney Reservoir site. Because a high percentage of geosmin and microcystin data were below analytical detection thresholds (censored data), multiple logistic regression was used to develop models that best explained the probability of geosmin and microcystin concentrations exceeding relevant thresholds. The geosmin and microcystin models are particularly important because geosmin is a taste-and-odor compound and microcystin is a cyanotoxin.
He, Y J; Li, X T; Fan, Z Q; Li, Y L; Cao, K; Sun, Y S; Ouyang, T
2018-01-23
Objective: To construct a dynamic enhanced MR-based predictive model for early assessment of pathological complete response (pCR) to neoadjuvant therapy in breast cancer, and to evaluate the clinical benefit of the model by using decision curve analysis. Methods: From December 2005 to December 2007, 170 patients with breast cancer treated with neoadjuvant therapy were identified, and their MR images before neoadjuvant therapy and at the end of the first cycle of neoadjuvant therapy were collected. A logistic regression model was used to detect independent factors for predicting pCR and to construct the predictive model accordingly; receiver operating characteristic (ROC) and decision curves were then used to evaluate the predictive model. Results: ΔArea(max) and Δslope(max) were independent predictive factors for pCR, OR = 0.942 (95% CI: 0.918-0.967) and 0.961 (95% CI: 0.940-0.987), respectively. The area under the ROC curve (AUC) for the constructed model was 0.886 (95% CI: 0.820-0.951). The decision curve showed that, for threshold probabilities above 0.4, the predictive model presented increased net benefit as the threshold probability increased. Conclusions: The constructed predictive model for pCR is of potential clinical value, with an AUC>0.85. Meanwhile, decision curve analysis indicates the constructed predictive model has a net benefit of 3 to 8 percent across the likely range of threshold probabilities from 80% to 90%.
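The decision-curve logic rests on a single formula: at threshold probability pt, net benefit = TP/n - (FP/n) * pt/(1 - pt). A minimal sketch on synthetic predictions (not the study's data):

import numpy as np

rng = np.random.default_rng(8)
p_hat = rng.random(170)                  # model-predicted probability of pCR
y = rng.random(170) < p_hat              # observed pCR

def net_benefit(pt):
    pred_pos = p_hat >= pt               # treat as responder at threshold pt
    tp = np.sum(pred_pos & y)
    fp = np.sum(pred_pos & ~y)
    n = len(y)
    return tp / n - (fp / n) * pt / (1 - pt)

curve = [net_benefit(pt) for pt in np.linspace(0.1, 0.9, 17)]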
Salivary Cortisol and Cold Pain Sensitivity in Female Twins
Godfrey, Kathryn M; Strachan, Eric; Dansie, Elizabeth; Crofford, Leslie J; Buchwald, Dedra; Goldberg, Jack; Poeschla, Brian; Succop, Annemarie; Noonan, Carolyn; Afari, Niloofar
2013-01-01
Background There is a dearth of knowledge about the link between cortisol and pain sensitivity. Purpose We examined the association of salivary cortisol with indices of cold pain sensitivity in 198 female twins and explored the role of familial confounding. Methods Three-day saliva samples were collected for cortisol levels and a cold pressor test was used to collect pain ratings and time to threshold and tolerance. Linear regression modeling with generalized estimating equations examined the overall and within-pair associations. Results Lower diurnal variation of cortisol was associated with higher pain ratings at threshold (p = 0.02) and tolerance (p < 0.01). The relationship of diurnal variation with pain ratings at threshold and tolerance was minimally influenced by familial factors (i.e., genetics and common environment). Conclusions Understanding the genetic and non-genetic mechanisms underlying the link between HPA axis dysregulation and pain sensitivity may help to prevent chronic pain development and maintenance. PMID:23955075
Relationship between Auditory and Cognitive Abilities in Older Adults
Sheft, Stanley
2015-01-01
Objective The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423
Wang, Hui; Liu, Huifang; Cao, Zhiyong; Wang, Bowen
2016-01-01
This paper presents a new perspective: a double-threshold effect of the technology gap exists in the foreign direct investment (FDI) technology spillover process across regional Chinese industrial sectors. In this paper, a double-threshold regression model was established to examine the relation between the threshold effect of the technology gap and technology spillover. Based on the provincial panel data of Chinese industrial sectors from 2000 to 2011, the empirical results reveal two threshold values of the technology gap in the industrial sector in eastern China, 1.254 and 2.163. There are also two threshold values in each of the central and western industrial sectors, 1.516 and 2.694, and 1.635 and 2.714, respectively. The technology spillover is a decreasing function of the technology gap in both the eastern and western industrial sectors, but a concave function of the technology gap in the central industrial sectors. Furthermore, the FDI technology spillover has increased gradually in recent years. Based on the empirical results, suggestions were proposed to elucidate the introduction of FDI and the improvement of industrial added value in different regions of China.
Precipitation phase partitioning variability across the Northern Hemisphere
NASA Astrophysics Data System (ADS)
Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.
2017-12-01
Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
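A minimal sketch of the phase-partitioning model on synthetic storm observations: fit a rain/snow logistic regression on air temperature and relative humidity, then solve the fitted equation for the temperature at which rain and snow are equally likely at a given humidity. The coefficients below are illustrative, not the study's:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
t_air = rng.uniform(-5, 8, n)                   # near-surface air temperature, deg C
rh = rng.uniform(40, 100, n)                    # relative humidity, %
logit = 2.0 * (t_air - 1.0) - 0.03 * (rh - 70)  # drier air -> warmer threshold
rain = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = LogisticRegression(max_iter=1000).fit(np.column_stack([t_air, rh]), rain)
b0, (b1, b2) = clf.intercept_[0], clf.coef_[0]
t50 = -(b0 + b2 * 85.0) / b1                    # 50% rain-snow threshold at RH = 85%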
Thermosensitivity is reduced during fever induced by Staphylococcus aureus cells walls in rabbits.
Tøien, Ø; Mercer, J B
1996-05-01
Thermosensitivity (TS) and threshold core temperature for metabolic cold defence were determined in six conscious rabbits before, and at seven different times after, i.v. injection of killed Staphylococcus aureus (8 × 10⁷ or 2 × 10⁷ cell walls·kg⁻¹) by exposure to short periods (5-10 min) of body cooling. Heat was extracted with a chronically implanted intravascular heat exchanger. TS was calculated by regression of metabolic heat production (M) on core temperature, as indicated by hypothalamic temperature. The threshold for cold defence (shivering threshold) was calculated as the core temperature at which the thermosensitivity line crossed preinjection resting M. The shivering thresholds followed the shape of the fever response. TS was significantly reduced (up to 49%) during the time course of fever induced by the highest dose of pyrogen only. At both high and low doses of pyrogen, TS correlated negatively with shivering threshold (r = 0.66 and 0.79, respectively) with similar slopes. The reduction in TS during fever was thus associated with the increase in shivering threshold resulting from the pyrogen injection and not with the dose of pyrogen. Model considerations indicate, however, that changes in sensitivity of the thermosensory input to the hypothalamic controller may affect threshold changes but cause negligible TS changes. It is more likely that the reduction in TS is effected in the specific hypothalamic effector pathways.
Determination of Cost-Effectiveness Threshold for Health Care Interventions in Malaysia.
Lim, Yen Wei; Shafie, Asrul Akmal; Chua, Gin Nie; Ahmad Hassali, Mohammed Azmi
2017-09-01
One major challenge in prioritizing health care using cost-effectiveness (CE) information arises when alternatives are more expensive but more effective than existing technology. In such a situation, an external criterion in the form of a CE threshold that reflects the willingness to pay (WTP) per quality-adjusted life-year is necessary. The aim of this study was to determine a CE threshold for health care interventions in Malaysia. A cross-sectional, contingent valuation study was conducted using a stratified multistage cluster random sampling technique in four states in Malaysia. One thousand thirteen respondents were interviewed in person about their socioeconomic background, quality of life, and WTP for a hypothetical scenario. The CE thresholds established using the nonparametric Turnbull method ranged from MYR12,810 to MYR22,840 (~US $4,000-US $7,000), whereas those estimated with the parametric interval regression model were between MYR19,929 and MYR28,470 (~US $6,200-US $8,900). Key factors that affected the CE thresholds were education level, estimated monthly household income, and the description of health state scenarios. These findings suggest that there is no single WTP value for a quality-adjusted life-year. The CE threshold estimated for Malaysia was found to be lower than the threshold value recommended by the World Health Organization. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.
2014-01-01
Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boys, Craig A.; Robinson, Wayne; Miller, Brett
2016-05-13
Barotrauma injury can occur when fish are exposed to rapid decompression during downstream passage through river infrastructure. A piecewise regression approach was used to objectively quantify barotrauma injury thresholds in two physoclistous species (Murray cod Maccullochella peelii and silver perch Bidyanus bidyanus) following simulated infrastructure passage in barometric chambers. The probability of injuries such as swim bladder rupture, exophthalmia, and haemorrhage and emphysema in various organs increased as the ratio between the lowest exposure pressure and the acclimation pressure (ratio of pressure change, RPC E/A) fell. The relationship was typically non-linear, and piecewise regression was able to quantify thresholds in RPC E/A that, once exceeded, resulted in a substantial increase in barotrauma injury. Thresholds differed among injury types and between species, but by applying a multi-species precautionary principle, the maintenance of exposure pressures at river infrastructure above 70% of acclimation pressure (RPC E/A of 0.7) should sufficiently protect downstream migrating juveniles of these two physoclistous species. These findings have important implications for determining the risk posed by current infrastructures and informing the design and operation of new ones.
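A piecewise (broken-stick) regression of the kind used above can be sketched in a few lines. The data, breakpoint, and slopes below are synthetic; the breakpoint plays the role of the RPC E/A threshold.

```python
# Sketch: fit a two-segment linear model and read off the estimated breakpoint.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(x, x0, y0, k1, k2):
    """Two line segments joined at breakpoint x0 with value y0."""
    return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

rng = np.random.default_rng(1)
rpc = np.linspace(0.1, 1.0, 60)                  # exposure/acclimation pressure ratio
true = broken_stick(rpc, 0.7, 0.05, -1.5, 0.0)   # injury rises sharply below ~0.7
injury = np.clip(true + rng.normal(0, 0.03, rpc.size), 0, 1)

p, _ = curve_fit(broken_stick, rpc, injury, p0=[0.5, 0.1, -1.0, 0.0])
print(f"estimated threshold RPC ~ {p[0]:.2f}")
```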
A quantitative analysis to objectively appraise drought indicators and model drought impacts
NASA Astrophysics Data System (ADS)
Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.
2016-07-01
Drought monitoring and early warning is an important measure to enhance resilience towards drought. While there are numerous operational systems using different drought indicators, there is no consensus on which indicator best represents drought impact occurrence for any given sector. Furthermore, thresholds are widely applied in these indicators but, to date, little empirical evidence exists as to which indicator thresholds trigger impacts on society, the economy, and ecosystems. The main obstacle for evaluating commonly used drought indicators is a lack of information on drought impacts. Our aim was therefore to exploit text-based data from the European Drought Impact report Inventory (EDII) to identify indicators that are meaningful for region-, sector-, and season-specific impact occurrence, and to empirically determine indicator thresholds. In addition, we tested the predictability of impact occurrence based on the best-performing indicators. To achieve these aims we applied a correlation analysis and an ensemble regression tree approach, using Germany and the UK (the most data-rich countries in the EDII) as test beds. As candidate indicators we chose two meteorological indicators (Standardized Precipitation Index, SPI, and Standardized Precipitation Evaporation Index, SPEI) and two hydrological indicators (streamflow and groundwater level percentiles). The analysis revealed that accumulation periods of SPI and SPEI best linked to impact occurrence are longer for the UK compared with Germany, but there is variability within each country, among impact categories and, to some degree, seasons. The median of regression tree splitting values, which we regard as estimates of thresholds of impact occurrence, was around -1 for SPI and SPEI in the UK; distinct differences between northern/northeastern vs. southern/central regions were found for Germany. Predictions with the ensemble regression tree approach yielded reasonable results for regions with good impact data coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.
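The "median of regression tree splitting values" device in the record above can be illustrated with synthetic data. The sketch below, with an assumed SPI-impact relationship, fits an ensemble of decision stumps and takes the median root-split value as an empirical impact-occurrence threshold; it mimics the structure, not the EDII analysis itself.

```python
# Sketch: ensemble of depth-1 trees relating SPI to impact occurrence;
# the median of the root-node split values estimates the impact threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
spi = rng.normal(0, 1, (500, 1))
impact = (spi[:, 0] < -1.0 + rng.normal(0, 0.3, 500)).astype(int)  # assumed: impacts below SPI ~ -1

forest = RandomForestClassifier(n_estimators=200, max_depth=1, random_state=0)
forest.fit(spi, impact)

splits = [t.tree_.threshold[0] for t in forest.estimators_]  # root split of each stump
print(f"median splitting value: {np.median(splits):.2f}")
```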
Otto, David; He, Linlin; Xia, Yanhong; Li, Yajuan; Wu, Kegong; Ning, Zhixiong; Zhao, Baixiao; Hudnell, H Kenneth; Kwok, Richard; Mumford, Judy; Geller, Andrew; Wade, Timothy
2006-03-01
This study was designed to assess the effects of exposure to arsenic in drinking water on visual and vibrotactile function in residents of the Bamen region of Inner Mongolia, China. Arsenic was measured by hydride generation atomic fluorescence. A total of 321 participants were divided into three exposure groups: low (non-detectable to 20 µg/L), medium (100-300 µg/L), and high (400-700 µg/L) arsenic in drinking water (AsW). Three visual tests were administered: acuity, contrast sensitivity, and color discrimination (Lanthony's Desaturated 15 Hue Test). Vibration thresholds were measured with a vibrothesiometer. Vibration thresholds were significantly elevated in the high exposure group compared to the other groups. Further analysis using a spline regression model suggested that the threshold for vibratory effects lies between 150 and 170 µg/L AsW. These findings provide the first evidence that chronic exposure to arsenic in drinking water impairs vibrotactile thresholds. The results also indicate that arsenic affects neurological function well below the 1,000 µg/L concentration reported by NRC (1999). No evidence of arsenic-related effects on visual function was found.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield decreased significantly in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg⁻¹) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the condition of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hoss, F.; Fischbeck, P. S.
2014-10-01
This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues forecasts daily, this is the first QR application to US river gages. Archived forecasts for lead times up to six days from 2001 to 2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage in the last 24 and 48 h and the forecast error 24 and 48 h ago to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Combining the forecast with the other four variables results in much less favorable BSSs. Lastly, the forecast performance does not depend on the size of the training dataset, but on the year, the river gage, the lead time and the event threshold being forecast. We find that each event threshold requires a separate model configuration, or at least calibration.
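The QR post-processing step translates directly into code. The sketch below, on synthetic forecast-observation pairs and with a simplified predictor set (forecast plus 24-h rise rate), fits a family of conditional quantiles and converts them into an exceedance probability for a flood threshold; variable names and values are illustrative, not the study's data.

```python
# Sketch: quantile regression post-processing of stage forecasts, then
# exceedance probability by interpolating across the fitted quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
forecast = rng.uniform(2, 10, n)               # forecast stage, m
rise24 = rng.normal(0, 0.5, n)                 # stage rise in last 24 h, m
obs = forecast + 0.5 * rise24 + rng.normal(0, 0.4, n)
df = pd.DataFrame({"obs": obs, "forecast": forecast, "rise24": rise24})

quantiles = np.arange(0.05, 1.0, 0.05)
fits = {q: smf.quantreg("obs ~ forecast + rise24", df).fit(q=q) for q in quantiles}

new = pd.DataFrame({"forecast": [8.0], "rise24": [0.8]})
stage_q = np.array([float(np.asarray(fits[q].predict(new))[0]) for q in quantiles])
threshold = 8.5                                # flood stage of interest, m
p_exceed = 1.0 - np.interp(threshold, stage_q, quantiles)
print(f"P(stage > {threshold} m) ~ {p_exceed:.2f}")
```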
Vukicevic, Arso M; Jovicic, Gordana R; Jovicic, Milos N; Milicevic, Vladimir L; Filipovic, Nenad D
2018-02-01
Bone injuries (BI) represent one of the major health problems, together with cancer and cardiovascular diseases. Assessment of the risks associated with BI is nontrivial, since the fragility of human cortical bone varies with age. Because of restrictions on performing experiments on humans, only a limited number of fracture resistance curves (R-curves) for particular ages have been reported in the literature. This study proposes a novel decision support system for the assessment of bone fracture resistance by fusing various artificial intelligence algorithms. The aim was to estimate the R-curve slope, toughness threshold and stress intensity factor using the two input parameters commonly available during a routine clinical examination: patient age and crack length. Using data from the literature, an evolutionarily assembled artificial neural network was developed and used for the derivation of linear regression (LR) models of R-curves for arbitrary age. Finally, by using the patient (age)-specific LR models and the diagnosed crack size, one can estimate the risk of bone fracture under given physiological conditions. Compared to the literature, we demonstrated improved performance for estimating nonlinear changes of the R-curve slope (R² = 0.82 vs. R² = 0.76) and of the toughness threshold with ageing (R² = 0.73 vs. R² = 0.66).
Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.
2014-01-01
Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0–10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm: to declare a warning, the algorithm only needs to locate the earthquake and to verify that the corresponding magnitude threshold is exceeded. The models predict that a relatively moderate M6.5–7 earthquake along the Palos Verdes, Newport-Inglewood/Rose Canyon, Elsinore or San Jacinto faults with a rupture propagating towards LA could cause ‘very strong’ to ‘severe’ shaking in the LA basin; however, warning times for these events could exceed 30 s.
Bahouth, George; Graygo, Jill; Digges, Kennerly; Schulman, Carl; Baur, Peter
2014-01-01
The objectives of this study are to (1) characterize the population of crashes meeting the Centers for Disease Control and Prevention (CDC)-recommended 20% risk of Injury Severity Score (ISS) >15 injury and (2) explore the positive and negative effects of an advanced automatic crash notification (AACN) system whose threshold for high-risk indications is 10% versus 20%. Binary logistic regression analysis was performed to predict the occurrence of motor vehicle crash injuries at both the ISS >15 and Maximum Abbreviated Injury Scale (MAIS) 3+ level. Models were trained using crash characteristics recommended by the CDC Committee on Advanced Automatic Collision Notification and Triage of the Injured Patient. Each model was used to assign the probability of severe injury (defined as MAIS 3+ or ISS >15 injury) to a subset of NASS-CDS cases based on crash attributes. Subsequently, actual AIS and ISS levels were compared with the predicted probability of injury to determine the extent to which the seriously injured had corresponding probabilities exceeding the 10% and 20% risk thresholds. Models were developed using an 80% sample of NASS-CDS data from 2002 to 2012 and evaluations were performed using the remaining 20% of cases from the same period. Within the population of seriously injured (i.e., those having one or more AIS 3 or higher injuries), the number of occupants whose injury risk did not exceed the 10% and 20% thresholds was estimated to be 11,700 and 18,600, respectively, each year using the MAIS 3+ injury model. For the ISS >15 model, 8,100 and 11,000 occupants sustained ISS >15 injuries yet their injury probability did not reach the 10% and 20% probability for severe injury, respectively. Conversely, model predictions suggested that, at the 10% and 20% thresholds, 207,700 and 55,400 drivers respectively would be incorrectly flagged as injured when their injuries had not reached the AIS 3 level. For the ISS >15 model, 87,300 and 41,900 drivers would be incorrectly flagged as injured when injury severity had not reached the ISS >15 level. This article provides important information comparing the expected positive and negative effects of an AACN system with thresholds at the 10% and 20% levels using two outcome metrics. Overall, the results suggest that the 20% risk threshold would not provide a useful notification to improve the quality of care for a large number of seriously injured crash victims. Alternatively, a lower threshold may increase the overtriage rate. Based on the vehicle damage observed for crashes reaching and exceeding the 10% risk threshold, we anticipate that rescue services would have been deployed based on current Public Safety Answering Point (PSAP) practices.
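The 10% vs. 20% trade-off in the record above amounts to moving a cut-off along a fitted risk score. The sketch below, on entirely synthetic crash data with invented features, shows how the counts of missed severe injuries and over-triaged occupants shift with the notification threshold.

```python
# Sketch: logistic severe-injury model; compare missed cases and over-triage
# at the 10% and 20% notification thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
delta_v = rng.gamma(2.0, 10.0, n)              # assumed crash-severity proxy, km/h
belted = rng.integers(0, 2, n)
logit = -6.0 + 0.12 * delta_v - 1.0 * belted
severe = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([delta_v, belted])
risk = LogisticRegression().fit(X, severe).predict_proba(X)[:, 1]

for thr in (0.10, 0.20):
    flagged = risk >= thr
    missed = np.sum(severe & ~flagged)         # severe but below threshold
    over = np.sum(~severe & flagged)           # flagged but not severe
    print(f"threshold {thr:.0%}: missed={missed}, over-triage={over}")
```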
Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.
Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas
2004-08-01
The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object-oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600
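The ad hoc threshold derivation above is a one-line inversion of a fitted regression. The sketch below uses invented error-vs-threshold values to show the mechanics: regress estimated relative identification error on candidate distance thresholds, then solve for the threshold giving a 5% error.

```python
# Sketch: derive an ad hoc distance threshold from a linear error model.
import numpy as np

thresholds = np.arange(0.005, 0.055, 0.005)   # candidate distance cut-offs
rel_error = (0.01 + 1.2 * thresholds
             + np.random.default_rng(5).normal(0, 0.003, thresholds.size))

slope, intercept = np.polyfit(thresholds, rel_error, 1)
ad_hoc = (0.05 - intercept) / slope           # threshold with estimated error = 0.05
print(f"ad hoc threshold ~ {ad_hoc:.3f}")
```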
Rainfall thresholds and susceptibility mapping for shallow landslides and debris flows in Scotland
NASA Astrophysics Data System (ADS)
Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil
2017-04-01
Shallow translational slides and debris flows (hereafter 'landslides') pose a significant threat to life and cause substantial annual economic impacts (e.g. through damage and disruption of infrastructure). The focus of this research is on the definition of objective rainfall thresholds using a weather radar system and on landslide susceptibility mapping. For the study area, Scotland, an inventory of 75 known landslides was used for the period 2003 to 2016. First, the effect of using different rain records (i.e. time series length) on two threshold selection techniques in receiver operating characteristic (ROC) analysis was evaluated. The results show that thresholds selected by 'Threat Score' (minimising false alarms) are sensitive to rain record length, which is not routinely considered, whereas thresholds selected using 'Optimal Point' (minimising failed alarms) are not; the latter may therefore be suited to establishing lower-limit thresholds and be of interest to those developing early warning systems. Robust thresholds are found for combinations of normalised rain duration and accumulation at 1 and 12 days' antecedence, respectively; these are normalised using the rainy-day normal and an equivalent measure for rain intensity. This research indicates that, in Scotland, rain accumulation provides a better indicator than rain intensity and that landslides may be generated by threshold conditions lower than previously thought. Second, a landslide susceptibility map is constructed using a cross-validated logistic regression model. A novel element of the approach is that landslide susceptibility is calculated for individual hillslope sections. The developed thresholds and susceptibility map are combined to assess potential hazards and impacts posed to the national highway network in Scotland.
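The two ROC-based selection rules contrasted above can be compared side by side on synthetic data. In the sketch below, 'Optimal Point' is taken as the ROC point closest to (0, 1) and 'Threat Score' as the cut-off maximising the critical success index TP / (TP + FP + FN); rain values and the rain-landslide relation are invented for illustration.

```python
# Sketch: two rainfall-threshold selection rules on a synthetic event set.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(6)
rain = rng.gamma(2.0, 15.0, 2000)             # event rain accumulation, mm
slide = rng.random(2000) < 1 / (1 + np.exp(-(rain - 60) / 10.0))

fpr, tpr, thr = roc_curve(slide, rain)
opt = thr[np.argmin(np.hypot(fpr, 1 - tpr))]  # Optimal Point: closest to (0, 1)

def csi(cut):
    tp = np.sum(slide & (rain >= cut))
    fp = np.sum(~slide & (rain >= cut))
    fn = np.sum(slide & (rain < cut))
    return tp / (tp + fp + fn)

cuts = np.linspace(rain.min(), rain.max(), 200)
ts = cuts[np.argmax([csi(c) for c in cuts])]  # Threat Score optimum
print(f"Optimal Point threshold {opt:.1f} mm, Threat Score threshold {ts:.1f} mm")
```

The Threat Score optimum sits higher because penalising false alarms pushes the cut-off up, which is why the abstract associates the Optimal Point rule with lower-limit thresholds.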
Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.
Gilmour, Heather
2016-11-16
Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear whether GAD is uniquely associated with the risk of suicide, or whether psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older. As well, the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. Among people with threshold GAD, past 12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2%, respectively). In multivariate models that controlled for sociodemographic factors, the odds of past 12-month suicide ideation among people with either past 12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled for, associations between threshold and subthreshold GAD and suicidal ideation were attenuated, but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.
A novel model incorporating two variability sources for describing motor evoked potentials
Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.
2014-01-01
Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
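The two-variability-source idea above can be simulated in a few lines. The sketch below places Gaussian noise before and after a sigmoidal recruitment curve and shows how the resulting MEP distribution changes spread along the IO curve; all parameter values are illustrative, not the fitted ones.

```python
# Sketch: MEP amplitudes with variability entering before (eps_in) and after
# (eps_out) a sigmoidal recruitment nonlinearity.
import numpy as np

def sigmoid(x, low=-5.5, high=-1.0, mid=50.0, slope=0.12):
    """log10 MEP amplitude as a sigmoidal function of stimulation strength (%MSO)."""
    return low + (high - low) / (1 + np.exp(-slope * (x - mid)))

rng = np.random.default_rng(7)
for x in (40.0, 50.0, 60.0):               # below, at, above the curve's midpoint
    eps_in = rng.normal(0, 4.0, 10_000)    # variability before recruitment
    eps_out = rng.normal(0, 0.1, 10_000)   # variability after recruitment
    mep = 10 ** (sigmoid(x + eps_in) + eps_out)   # amplitude, V
    iqr = np.percentile(mep, 75) - np.percentile(mep, 25)
    print(f"x={x:.0f}%: median={np.median(mep):.2e} V, IQR={iqr:.2e} V")
```

Because the input-side noise is squashed differently at different points of the sigmoid, the simulated distributions change shape along the curve, which is the phenomenon a single post-nonlinearity noise term cannot reproduce.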
Hsu, Pi-Shan; Chen, Chaur-Dong; Lian, Ie-Bin; Chao, Day-Yu
2015-01-01
Background Despite dengue dynamics being driven by complex interactions between human hosts, mosquito vectors and viruses that are influenced by climate factors, an operational model that will enable health authorities to anticipate the outbreak risk in a dengue non-endemic area has not been developed. The objectives of this study were to evaluate the temporal relationship between meteorological variables, entomological surveillance indices and confirmed dengue cases; and to establish the threshold for entomological surveillance indices including three mosquito larval indices [Breteau (BI), Container (CI) and House indices (HI)] and one adult index (AI) as an early warning tool for dengue epidemic. Methodology/Principal Findings Epidemiological, entomological and meteorological data were analyzed from 2005 to 2012 in Kaohsiung City, Taiwan. The successive waves of dengue outbreaks with different magnitudes were recorded in Kaohsiung City, and involved a dominant serotype during each epidemic. The annual indigenous dengue cases usually started from May to June and reached a peak in October to November. Vector data from 2005–2012 showed that the peak of the adult mosquito population was followed by a peak in the corresponding dengue activity with a lag period of 1–2 months. Therefore, we focused the analysis on the data from May to December and the high risk district, where the inspection of the immature and mature mosquitoes was carried out on a weekly basis and about 97.9% dengue cases occurred. The two-stage model was utilized here to estimate the risk and time-lag effect of annual dengue outbreaks in Taiwan. First, Poisson regression was used to select the optimal subset of variables and time-lags for predicting the number of dengue cases, and the final results of the multivariate analysis were selected based on the smallest AIC value. Next, each vector index models with selected variables were subjected to multiple logistic regression models to examine the accuracy of predicting the occurrence of dengue cases. The results suggested that Model-AI, BI, CI and HI predicted the occurrence of dengue cases with 83.8, 87.8, 88.3 and 88.4% accuracy, respectively. The predicting threshold based on individual Model-AI, BI, CI and HI was 0.97, 1.16, 1.79 and 0.997, respectively. Conclusion/Significance There was little evidence of quantifiable association among vector indices, meteorological factors and dengue transmission that could reliably be used for outbreak prediction. Our study here provided the proof-of-concept of how to search for the optimal model and determine the threshold for dengue epidemics. Since those factors used for prediction varied, depending on the ecology and herd immunity level under different geological areas, different thresholds may be developed for different countries using a similar structure of the two-stage model. PMID:26366874
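The two-stage structure described above (Poisson regression with AIC-based selection, then logistic regression for occurrence) can be outlined compactly. The sketch below runs on synthetic weekly data with an invented lag; it mirrors the modelling sequence, not the Kaohsiung analysis.

```python
# Sketch: stage 1 compares Poisson models by AIC; stage 2 fits a logistic
# model for the occurrence of any cases.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 200
df = pd.DataFrame({"breteau": rng.gamma(2.0, 1.0, n),
                   "temp": rng.normal(28, 2, n)})
df["breteau_lag4"] = df["breteau"].shift(4)            # assumed 4-week lag
rate = np.exp(-2 + 0.8 * df["breteau_lag4"].fillna(0) + 0.05 * (df["temp"] - 28))
df["cases"] = rng.poisson(rate)
df = df.dropna()

m_lag = smf.poisson("cases ~ breteau_lag4 + temp", data=df).fit(disp=0)
m_now = smf.poisson("cases ~ breteau + temp", data=df).fit(disp=0)

df["occur"] = (df["cases"] > 0).astype(int)
m_occ = smf.logit("occur ~ breteau_lag4 + temp", data=df).fit(disp=0)
acc = (m_occ.predict(df).round() == df["occur"]).mean()
print(f"stage-1 AIC: lagged={m_lag.aic:.1f} vs unlagged={m_now.aic:.1f}; "
      f"stage-2 accuracy={acc:.2f}")
```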
Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O
2017-06-01
In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
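The two segmentation rules compared above are simple to state in code. The sketch below computes MTV (and TLG under the absolute rule) on a synthetic SUV map with an invented lesion and an assumed voxel volume.

```python
# Sketch: MTV under an absolute SUV cut-off vs. a relative (%SUVmax) cut-off.
import numpy as np

rng = np.random.default_rng(9)
suv = rng.uniform(0.2, 2.0, (64, 64, 32))              # background uptake
suv[20:30, 20:30, 10:16] += rng.uniform(4, 14, (10, 10, 6))  # hot lesion, graded
voxel_ml = 0.4                                         # assumed voxel volume, mL

mtv_abs = np.sum(suv >= 5.5) * voxel_ml                # absolute threshold (SUV 5.5)
mtv_rel = np.sum(suv >= 0.51 * suv.max()) * voxel_ml   # relative threshold (51% SUVmax)
tlg = np.sum(suv[suv >= 5.5]) * voxel_ml               # total lesion glycolysis, absolute rule
print(f"MTV(abs)={mtv_abs:.0f} mL, MTV(51% SUVmax)={mtv_rel:.0f} mL, TLG={tlg:.0f} g")
```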
Rafiq, Muhammad Tariq; Aziz, Rukhsanda; Yang, Xiaoe; Xiao, Wendan; Stoffella, Peter J.; Saghir, Aamir; Azam, Muhammad; Li, Tingqiang
2014-01-01
Food chain contamination by soil cadmium (Cd) through vegetable consumption poses a threat to human health. Therefore, an understanding is needed of the relationship between the phytoavailability of Cd in soils and its uptake in the edible tissues of vegetables. The purpose of this study was to establish soil Cd thresholds for representative Chinese soils based on dietary toxicity to humans and to develop a model to evaluate the phytoavailability of Cd to Pak choi (Brassica chinensis L.) based on soil properties. Mehlich-3 extractable Cd thresholds were more suitable for Stagnic Anthrosols, Calcareous, Ustic Cambosols, Typic Haplustalfs, Udic Ferrisols and Periudic Argosols, with values of 0.30, 0.25, 0.18, 0.16, 0.15 and 0.03 mg kg⁻¹, respectively, while total Cd was an adequate threshold for Mollisols, with a value of 0.86 mg kg⁻¹. A stepwise regression model indicated that Cd phytoavailability to Pak choi was significantly influenced by soil pH, organic matter, and total zinc and Cd concentrations in soil. Therefore, since Cd accumulation in Pak choi varied with soil characteristics, these should be considered when assessing the environmental quality of soils to ensure hygienically safe food production. PMID:25386790
A Practical Guide to Regression Discontinuity
ERIC Educational Resources Information Center
Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard
2012-01-01
Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
Physical function interfering with pain and symptoms in fibromyalgia patients.
Assumpção, A; Sauer, J F; Mango, P C; Pascual Marques, A
2010-01-01
The aim of this study was to assess the relationship between physical assessment variables - muscular strength, flexibility and dynamic balance - and pain, pain threshold, and fibromyalgia (FM) symptoms. Our sample consisted of 55 women, aged 30 to 55 years (mean 46.5, standard deviation [SD] 6.6), with a mean body mass index (BMI) of 28.7 (SD 3.8), diagnosed with FM according to the American College of Rheumatology criteria. Pain intensity was measured using a visual analogue scale (VAS) and pain threshold (PT) using Fisher's dolorimeter. FM symptoms were assessed by the Fibromyalgia Impact Questionnaire (FIQ); flexibility by the third finger to floor test (3FF); the muscular strength index (MSI) by the maximum voluntary isometric contraction at flexion and extension of the right knee and elbow using a force transducer; and dynamic balance by the timed up and go (TUG) test and the functional reach test (FRT). Data were analysed using Pearson's correlation as well as simple and multivariate regression tests, with a significance level of 5%. PT and FIQ were weakly but significantly correlated with the TUG, MSI and 3FF, as was VAS with the TUG and MSI (p < 0.05). VAS, PT and FIQ were not correlated with the FRT. Simple regression suggests that, alone, TUG, FRT, MSI and 3FF are weak predictors of VAS, PT and FIQ. For the VAS, the best predictive model includes TUG and MSI, explaining 12.6% of pain variability. For PT and total symptoms, as obtained by the FIQ, the best predictive models include 3FF and MSI, which respectively explain 30% and 21% of the variability. Muscular strength, flexibility and balance are associated with pain, pain threshold, and symptoms in FM patients.
Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2017-01-01
The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate, maternal age of onset of dementia. Prior literature has proposed methods to address censoring due to an assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right-censored covariates. In simulation studies, we compare these methods to the simple but inefficient complete-case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.
Odor Detection Thresholds in a Population of Older Adults
Schubert, Carla R.; Fischer, Mary E.; Pinto, A. Alex; Klein, Barbara E.K.; Klein, Ronald; Cruickshanks, Karen J.
2016-01-01
OBJECTIVE To measure odor detection thresholds and associated nasal and behavioral factors in an older adult population. STUDY DESIGN Cross-sectional cohort study METHODS Odor detection thresholds were obtained using an automated olfactometer on 832 participants, aged 68–99 (mean age 77) years, in the 21-year (2013–2016) follow-up visit of the Epidemiology of Hearing Loss Study. RESULTS The mean odor detection threshold (ODT) score was 8.2 (range: 1–13; standard deviation = 2.54), corresponding to an n-butanol concentration of slightly less than 0.03%. Older participants were significantly more likely to have lower (worse) ODT scores than younger participants (p<0.001). There were no significant differences in mean ODT scores between men and women. Older age was significantly associated with worse performance in multivariable regression models, and exercising at least once a week was associated with reduced odds of having a low (≤5) ODT score. Cognitive impairment was also associated with poor performance, while a history of allergies or a deviated septum was associated with better performance. CONCLUSION Odor detection threshold scores were worse in older age groups but similar between men and women in this large population of older adults. Regular exercise was associated with better odor detection thresholds, adding to the evidence that decline in olfactory function with age may be partly preventable. PMID:28000220
Do poison center triage guidelines affect healthcare facility referrals?
Benson, B E; Smith, C A; McKinney, P E; Litovitz, T L; Tandberg, W D
2001-01-01
The purpose of this study was to determine the extent to which poison center triage guidelines influence healthcare facility referral rates for acute, unintentional acetaminophen-only poisoning and acute, unintentional adult formulation iron poisoning. Managers of US poison centers were interviewed by telephone to determine their center's triage threshold value (mg/kg) for acute iron and acute acetaminophen poisoning in 1997. Triage threshold values and healthcare facility referral rates were fit to a univariate logistic regression model for acetaminophen and iron using maximum likelihood estimation. Triage threshold values ranged from 120 to 201 mg/kg (acetaminophen) and from 16 to 61 mg/kg (iron). Referral rates ranged from 3.1% to 24% (acetaminophen) and from 3.7% to 46.7% (iron). There was a statistically significant inverse relationship between the triage value and the referral rate for acetaminophen (p < 0.001) and iron (p = 0.0013). The model explained 31.7% of the referral variation for acetaminophen but only 4.1% of the variation for iron. There is great variability in poison center triage values and referral rates for iron and acetaminophen poisoning. Guidelines can account for a meaningful proportion of referral variation, and their influence appears to be substance dependent. These data suggest that efforts to determine and utilize the highest safe triage threshold value could substantially decrease healthcare costs for poisonings, as long as patient medical outcomes are not compromised.
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It’s difficult for traditional threshold value method to identify degradation of operating equipment accurately. An novel degradation evaluation method suitable for wind turbine condition maintenance strategy implementation was proposed in this paper. Based on the analysis of typical variable-speed pitch-to-feather control principle and monitoring parameters for pitch system, a multi input multi output (MIMO) regression model was applied to pitch system, where wind speed, power generation regarding as input parameters, wheel rotation speed, pitch angle and motor driving currency for three blades as output parameters. Then, the difference between the on-line measurement and the calculated value from the MIMO regression model applying least square support vector machines (LSSVM) method was defined as the Observed Vector of the system. The Gaussian mixture model (GMM) was applied to fitting the distribution of the multi dimension Observed Vectors. Applying the model established, the Degradation Index was calculated using the SCADA data of a wind turbine damaged its pitch bearing retainer and rolling body, which illustrated the feasibility of the provided method.
Geneletti, Sara; O'Keeffe, Aidan G; Sharples, Linda D; Richardson, Sylvia; Baio, Gianluca
2015-07-10
The regression discontinuity (RD) design is a quasi-experimental design that estimates the causal effects of a treatment by exploiting naturally occurring treatment rules. It can be applied in any context where a particular treatment or intervention is administered according to a pre-specified rule linked to a continuous variable. Such thresholds are common in primary care drug prescription, where the RD design can be used to estimate the causal effect of medication in the general population. Such results can then be contrasted with those obtained from randomised controlled trials (RCTs) and inform prescription policy and guidelines based on a more realistic and less expensive context. In this paper, we focus on statins, a class of cholesterol-lowering drugs; however, the methodology can be applied to many other drugs, provided these are prescribed in accordance with pre-determined guidelines. Current guidelines in the UK state that statins should be prescribed to patients with 10-year cardiovascular disease risk scores in excess of 20%. If we consider patients whose risk scores are close to the 20% risk score threshold, we find that there is an element of random variation in both the risk score itself and its measurement. We can therefore consider the threshold as a randomising device that assigns statin prescription to individuals just above the threshold and withholds it from those just below. Thus, we are effectively replicating the conditions of an RCT in the area around the threshold, removing or at least mitigating confounding. We frame the RD design in the language of conditional independence, which clarifies the assumptions necessary to apply an RD design to data, and which makes the links with instrumental variables clear. We also have context-specific knowledge about the expected sizes of the effects of statin prescription and are thus able to incorporate this into Bayesian models by formulating informative priors on our causal parameters. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
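A toy version of the RD estimate at the 20% risk cut-off is shown below: local linear regressions on either side of the threshold, evaluated at the cut-off, estimate the treatment jump. The data, effect size, and bandwidth are invented; the Bayesian elaboration in the paper is not reproduced.

```python
# Sketch: sharp RD estimate via local linear fits around the 20% threshold.
import numpy as np

rng = np.random.default_rng(11)
risk = rng.uniform(0.05, 0.35, 5000)             # 10-year CVD risk score
treated = risk >= 0.20                           # prescription rule
ldl = 3.5 + 2.0 * risk - 0.8 * treated + rng.normal(0, 0.3, risk.size)

h = 0.05                                         # bandwidth around the threshold
left = (risk >= 0.20 - h) & (risk < 0.20)
right = (risk >= 0.20) & (risk < 0.20 + h)

f_left = np.polyval(np.polyfit(risk[left], ldl[left], 1), 0.20)
f_right = np.polyval(np.polyfit(risk[right], ldl[right], 1), 0.20)
print(f"estimated treatment effect at threshold: {f_right - f_left:.2f} mmol/L")
```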
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10%, in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
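The exponential-phase identification step can be caricatured in a few lines. The sketch below is a toy stand-in, not the published Q-Anal algorithm: on a simulated amplification curve it scans fixed-width windows for the steepest log-linear stretch, places a fluorescence threshold inside that window, and reads off Ct from the regression line.

```python
# Sketch: find the exponential phase of a simulated qPCR curve by windowed
# linear regression in log space, then compute Ct at a threshold.
import numpy as np

rng = np.random.default_rng(12)
cycles = np.arange(1, 41)
fluor = 0.5 + 100.0 / (1 + np.exp(-(cycles - 25) / 2.0))  # baseline + sigmoid
fluor += rng.normal(0, 0.05, cycles.size)
logf = np.log10(fluor)

def window_fit(i, w=6):
    return np.polyfit(cycles[i:i + w], logf[i:i + w], 1)

i_best = max(range(cycles.size - 5), key=lambda i: window_fit(i)[0])  # steepest stretch
slope, intercept = window_fit(i_best)

threshold = 10 ** logf[i_best:i_best + 6].mean()   # threshold inside the defined phase
ct = (np.log10(threshold) - intercept) / slope     # cycle time at that threshold
print(f"exponential phase starts near cycle {cycles[i_best]}, Ct = {ct:.1f}")
```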
Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio
2015-02-01
The bioconcentration factor (BCF) is an important bioaccumulation hazard assessment metric in many regulatory contexts. Its assessment is required by the REACH regulation (Registration, Evaluation, Authorization and Restriction of Chemicals) and by CLP (Classification, Labeling and Packaging). We challenged nine well-known and widely used BCF QSAR models against 851 compounds stored in an ad hoc created database. The goodness of the regression analysis was assessed by considering the determination coefficient (R²) and the root mean square error (RMSE); Cooper's statistics and the Matthews correlation coefficient (MCC) were calculated for all the thresholds relevant for regulatory purposes (i.e. 100 L/kg for Chemical Safety Assessment; 500 L/kg for Classification and Labeling; 2000 and 5000 L/kg for Persistent, Bioaccumulative and Toxic (PBT) and very Persistent, very Bioaccumulative (vPvB) assessment) to assess the classification, with particular attention to the models' ability to control the occurrence of false negatives. As a first step, statistical analysis was performed for the predictions of the entire dataset; R² > 0.70 was obtained using CORAL, T.E.S.T. and the EPISuite Arnot-Gobas model. As classifiers, ACD and logP-based equations were the best in terms of sensitivity, ranging from 0.75 to 0.94. External compound predictions were carried out for the models that had their own training sets. The CORAL model returned the best performance (R²ext = 0.59), followed by the EPISuite Meylan model (R²ext = 0.58). The latter also gave the highest sensitivity on external compounds, with values from 0.55 to 0.85 depending on the thresholds. Statistics were also compiled for compounds falling into the models' Applicability Domain (AD), giving better performance. In this respect, VEGA CAESAR was the best model in terms of regression (R² = 0.94) and classification (average sensitivity > 0.80). This model also showed the best regression (R² = 0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet integrating the wealth of information from multiple models remains the winning approach. Copyright © 2014 Elsevier Inc. All rights reserved.
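The threshold-based evaluation above reduces to binarising predicted and measured values at each regulatory cut-off. The sketch below does this on synthetic log BCF data (a hypothetical model with RMSE ~0.5), reporting sensitivity, the abstract's focus for controlling false negatives, and MCC at each cut-off.

```python
# Sketch: classification metrics at the regulatory BCF thresholds.
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(13)
log_bcf_true = rng.normal(1.5, 1.0, 851)
log_bcf_pred = log_bcf_true + rng.normal(0, 0.5, 851)   # synthetic model error

for thr_l_per_kg in (100, 500, 2000, 5000):
    cut = np.log10(thr_l_per_kg)
    y_true = log_bcf_true >= cut
    y_pred = log_bcf_pred >= cut
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)                       # false negatives
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    print(f"{thr_l_per_kg:>5} L/kg: sensitivity={sens:.2f}, "
          f"MCC={matthews_corrcoef(y_true, y_pred):.2f}")
```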
Corneal Mechanical Thresholds Negatively Associate With Dry Eye and Ocular Pain Symptoms
Spierer, Oriel; Felix, Elizabeth R.; McClellan, Allison L.; Parel, Jean Marie; Gonzalez, Alex; Feuer, William J.; Sarantopoulos, Constantine D.; Levitt, Roy C.; Ehrmann, Klaus; Galor, Anat
2016-01-01
Purpose To examine associations between corneal mechanical thresholds and metrics of dry eye. Methods This was a cross-sectional study of individuals seen in the Miami Veterans Affairs eye clinic. The evaluation consisted of questionnaires regarding dry eye symptoms and ocular pain, corneal mechanical detection and pain thresholds, and a comprehensive ocular surface examination. The main outcome measures were correlations between corneal thresholds and signs and symptoms of dry eye and ocular pain. Results A total of 129 subjects participated in the study (mean age 64 ± 10 years). Mechanical detection and pain thresholds on the cornea correlated with age (Spearman's ρ = 0.26, 0.23, respectively; both P < 0.05), implying decreased corneal sensitivity with age. Dry eye symptom severity scores and Neuropathic Pain Symptom Inventory (modified for the eye) scores negatively correlated with corneal detection and pain thresholds (range, r = −0.13 to −0.27, P < 0.05 for values between −0.18 and −0.27), suggesting increased corneal sensitivity in those with more severe ocular complaints. Ocular signs, on the other hand, correlated poorly and nonsignificantly with mechanical detection and pain thresholds on the cornea. A multivariable linear regression model found that both posttraumatic stress disorder (PTSD) score (β = 0.21, SE = 0.03) and corneal pain threshold (β = −0.03, SE = 0.01) were significantly associated with self-reported evoked eye pain (pain to wind, light, temperature) and explained approximately 32% of measurement variability (R = 0.57). Conclusions Mechanical detection and pain thresholds measured on the cornea are correlated with dry eye symptoms and ocular pain. This suggests hypersensitivity within the corneal somatosensory pathways in patients with greater dry eye and ocular pain complaints. PMID:26886896
NASA Astrophysics Data System (ADS)
Ebben, Matthew R.; Krieger, Ana C.
2016-03-01
The intent of this study was to develop a predictive model to convert an oxygen desaturation index (ODI) to an apnea-hypopnea index (AHI), and to compare the model's predictions with the actual AHI to determine its precision. One thousand four hundred sixty-seven subjects given polysomnograms with concurrent pulse oximetry between April 14, 2010, and February 7, 2012, were divided into model development (n = 733) and verification (n = 734) groups in order to develop a predictive model of AHI using ODI. Quadratic regression was used for model development. The coefficient of determination (r²) between the actual AHI and the predicted AHI (PredAHI) was 0.80 (r = 0.90), which was significant at p < 0.001. The areas under the receiver operating characteristic curve ranged from 0.96 for AHI thresholds of ≥10 and ≥15/h to 0.97 for thresholds of ≥5 and ≥30/h. The algorithm described in this paper provides a convenient and accurate way to convert ODI to a predicted AHI. This tool makes it easier for clinicians to understand oximetry data in the context of traditional measures of sleep apnea.
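The development/verification split and quadratic fit above take only a few lines. The sketch below uses synthetic ODI/AHI pairs with an assumed relationship, not the study's data or published coefficients.

```python
# Sketch: quadratic ODI -> AHI model fitted on a development split and
# checked on a verification split.
import numpy as np

rng = np.random.default_rng(14)
odi = rng.gamma(2.0, 8.0, 1467)
ahi = 1.1 * odi + 0.004 * odi**2 + rng.normal(0, 5.0, odi.size)  # assumed relation

dev, ver = np.arange(733), np.arange(733, 1467)
coeffs = np.polyfit(odi[dev], ahi[dev], 2)          # quadratic regression
pred = np.polyval(coeffs, odi[ver])

r = np.corrcoef(ahi[ver], pred)[0, 1]
agree = np.mean((pred >= 5) == (ahi[ver] >= 5))     # agreement at the AHI >= 5 cut-off
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, agreement at AHI>=5: {agree:.2f}")
```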
Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.
2013-01-01
The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for the lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former, the threshold varied as a function of the criterion: when investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, an IQ of around 85 points was found to form the threshold for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained evidence that once the intelligence threshold is met, personality factors become more predictive for creativity. By contrast, no threshold was found for creative achievement; i.e., creative achievement benefits from higher intelligence even at fairly high levels of intellectual ability. PMID:23825884
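Segmented regression of this kind can be approximated with a grid search over candidate breakpoints, fitting a continuous two-segment line at each. A toy sketch on simulated IQ/creativity data (the study used iterative algorithms, for which this grid search is a simplified stand-in):

```python
import numpy as np

def segmented_fit(x, y, breakpoints):
    """Grid-search a single breakpoint for a two-segment linear model:
    y ~ b0 + b1*x + b2*max(0, x - bp), continuous at the breakpoint bp."""
    best = (None, np.inf)
    for bp in breakpoints:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best[1]:
            best = (bp, sse)
    return best  # (estimated IQ threshold, residual sum of squares)

rng = np.random.default_rng(0)
iq = rng.uniform(70, 145, 297)
# Toy data: creativity rises with IQ only below ~120 (threshold hypothesis).
creativity = np.where(iq < 120, 0.05 * iq, 0.05 * 120) + rng.normal(0, 0.5, 297)
print(segmented_fit(iq, creativity, breakpoints=np.arange(85, 135)))
```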
Konrad, Stephanie; Paduraru, Peggy; Romero-Barrios, Pablo; Henderson, Sarah B; Galanis, Eleni
2017-08-31
Vibrio parahaemolyticus (Vp) is a naturally occurring bacterium found in marine environments worldwide. It can cause gastrointestinal illness in humans, primarily through raw oyster consumption. Water temperatures, and potentially other environmental factors, play an important role in the growth and proliferation of Vp in the environment. Quantifying the relationships between environmental variables and indicators or incidence of Vp illness is valuable for public health surveillance to inform and enable suitable preventative measures. This study aimed to assess the relationship between environmental parameters and Vp in British Columbia (BC), Canada. The study used Vp counts in oyster meat from 2002-2015 and laboratory-confirmed Vp illnesses from 2011-2015 for the province of BC. The data were matched to environmental parameters from publicly available sources, including remote sensing measurements of nighttime sea surface temperature (SST) obtained from satellite readings at a spatial resolution of 1 km. Using three separate models, this paper assessed the relationship between (1) daily SST and Vp counts in oyster meat, (2) weekly mean Vp counts in oysters and weekly Vp illnesses, and (3) weekly mean SST and weekly Vp illnesses. The effects of salinity and chlorophyll a were also evaluated. Linear regression was used to quantify the relationship between SST and Vp, and piecewise regression was used to identify SST thresholds of concern. A total of 2327 oyster samples and 293 laboratory-confirmed illnesses were included. In model 1, both SST and salinity were significant predictors of log(Vp) counts in oyster meat. In model 2, the mean log(Vp) count in oyster meat was a significant predictor of Vp illnesses. In model 3, weekly mean SST was a significant predictor of weekly Vp illnesses. The piecewise regression models identified an SST threshold of approximately 14 °C for both models 1 and 3, indicating increased risk of Vp in oyster meat and Vp illnesses at higher temperatures. Monitoring of SST, particularly through readily accessible remote sensing data, could serve as a warning signal for Vp and help inform the introduction and cessation of preventative or control measures.
Hu, Jiangbi; Wang, Ronghua
2018-02-17
Guaranteeing a safe and comfortable driving workload can contribute to reducing traffic injuries. In order to provide safe and comfortable threshold values, this study attempted to classify driving workload in terms of human factors mainly affected by highway geometric conditions and to determine the thresholds of the different workload classifications. The study hypothesized that driver workload values vary within a certain range. Driving workload scales were defined based on a comprehensive literature review. Through comparative analysis of different psychophysiological measures, heart rate variability (HRV) was chosen as the representative measure for quantifying driving workload in field experiments. Seventy-two participants (36 car drivers and 36 large truck drivers) and 6 highways with different geometric designs were selected for the field experiments. A wearable wireless dynamic multiparameter physiological detector (KF-2) was employed to record physiological data that were simultaneously correlated with data recorded by a Global Positioning System (GPS) (testing time, driving speed, running track, and distance). Based on statistical analyses, including the distribution of HRV on flat, straight segments and P-P plots of modified HRV, a driving workload calculation model was proposed. Integrating the driving workload scales with these values, the threshold of each driving workload scale was determined by classification and regression tree (CART) algorithms. The driving workload calculation model was suitable for driving speeds in the range of 40 to 120 km/h. The experimental data from the 72 participants revealed that driving workload had a significant effect on modified HRV, reflecting changes in driving speed. When the driving speed was between 100 and 120 km/h, drivers showed an apparent increase in the corresponding modified HRV. The threshold value of the normal driving workload K was between -0.0011 and 0.056 for a car driver and between -0.00086 and 0.067 for a truck driver. Heart rate variability was a direct and effective index for measuring driving workload despite being affected by multiple highway alignment elements. The driving workload model and the thresholds of the driving workload classifications can be used to evaluate the quality of highway geometric design. A higher quality of highway geometric design could keep driving workload within a safer and more comfortable range. This study provides insight into reducing traffic injuries from the perspective of the disciplinary integration of highway engineering and human factors engineering.
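The CART step reduces to learning cut points on the workload index K. A brief sketch with scikit-learn on simulated data (the cut points and scale labels here are illustrative, not the study's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Hypothetical workload index K and ordinal workload labels (0=low .. 2=high);
# the study's field data and exact scale definitions are not reproduced here.
k = rng.uniform(-0.01, 0.15, (500, 1))
labels = np.digitize(k.ravel(), bins=[0.056, 0.10])

# A shallow CART recovers interpretable cut points on K.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(k, labels)
cuts = sorted(t for t in tree.tree_.threshold if t > -2)  # -2 marks leaf nodes
print("CART thresholds on K:", np.round(cuts, 4))
```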
Dórea, J R R; French, E A; Armentano, L E
2017-08-01
Negative energy balance is an important part of the lactation cycle, and measuring the current energy balance of a cow is useful in both applied and research settings. The objectives of this study were (1) to determine if milk fatty acid (FA) proportions were consistently related to plasma nonesterified fatty acids (NEFA); (2) to determine whether an individual cow with a measured milk FA profile is above or below a given NEFA concentration; and (3) to test the universality of the models developed with the University of Wisconsin and US Dairy Forage Research Center cows. Blood samples were collected on the same day as milk sampling from 105 Holstein cows from 3 studies. Plasma NEFA was quantified, and a threshold of 600 µEq/L was applied to classify animals above this concentration as having high NEFA (NEFAhigh). Thirty milk FA proportions and 4 milk FA ratios were screened to evaluate their capacity to classify cows with NEFAhigh according to a determined milk FA threshold. In addition, 6 linear regression models were created using individual milk FA proportions and ratios. To evaluate the universality of the linear relationship between milk FA and plasma NEFA found in the internal data set, 90 treatment means from 21 papers published in the literature were compiled to test the model predictions. From the 30 screened milk FA, the odd short-chain fatty acids (C7:0, C9:0, C11:0, and C13:0) had sensitivity slightly greater than the other short-chain fatty acids (83.3, 94.8, 80.0, and 85.9%, respectively). The sensitivities for milk FA C6:0, C8:0, C10:0, and C12:0 were 78.8, 85.3, 80.1, and 83.9%, respectively. The threshold values to detect NEFAhigh cows for the latter group of milk FA were ≤2.0, ≤0.94, ≤1.4, and ≤1.8 g/100 g of FA, respectively. The milk FA C14:0 and C15:0 had sensitivities of 88.7 and 85.0% and thresholds of ≤6.8 and ≤0.53 g/100 g of FA, respectively. The linear regressions using the milk FA ratios C18:1 to C15:0 and C17:0 to C15:0 had lower root mean square errors (RMSE = 191 and 179 µEq/L, respectively) in comparison with individual milk FA proportions (RMSE = 194 µEq/L), the C18:1 to even short-medium-chain fatty acid (C4:0-C12:0) ratio (RMSE = 220 µEq/L), and C18:1 to C14:0 (RMSE = 199 µEq/L). Models using the milk FA ratios C18:1 to C15:0 and C17:0 to C15:0 had a better fit with the external data set in comparison with the other models. Plasma NEFA can be predicted by linear regression models using milk FA ratios. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
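Comparing predictors by RMSE amounts to fitting one simple linear regression per milk FA ratio. A compact sketch on simulated data (the predictor-NEFA relationships here are invented for illustration):

```python
import numpy as np

def rmse_of_fit(x, y):
    """RMSE of a simple linear regression of plasma NEFA on one predictor."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

# Hypothetical predictor columns (e.g., C18:1/C15:0 and C17:0/C15:0 ratios).
rng = np.random.default_rng(2)
nefa = rng.gamma(2.0, 200.0, 105)                        # µEq/L
ratio_c181_c150 = 0.004 * nefa + rng.normal(0, 0.4, 105)
ratio_c170_c150 = 0.003 * nefa + rng.normal(0, 0.3, 105)
for name, x in [("C18:1/C15:0", ratio_c181_c150), ("C17:0/C15:0", ratio_c170_c150)]:
    print(name, "RMSE =", round(rmse_of_fit(x, nefa), 1), "µEq/L")
```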
Zhang, Yingtao; Wang, Tao; Liu, Kangkang; Xia, Yao; Lu, Yi; Jing, Qinlong; Yang, Zhicong; Hu, Wenbiao; Lu, Jiahai
2016-02-01
Dengue is a re-emerging infectious disease of humans, rapidly spreading from endemic areas to dengue-free regions under favorable conditions. In recent decades, Guangzhou has again suffered several large outbreaks of dengue, as have its neighboring cities. This study aims to examine the impact of dengue epidemics in Guangzhou, China, and to develop a predictive model for Zhongshan based on local weather conditions and Guangzhou dengue surveillance information. We obtained weekly dengue case data from 1st January, 2005 to 31st December, 2014 for Guangzhou and Zhongshan city from the Chinese National Disease Surveillance Reporting System. Meteorological data were collected from the Zhongshan Weather Bureau and demographic data were collected from the Zhongshan Statistical Bureau. A negative binomial regression model with a log link function was used to analyze the relationship between weekly dengue cases in Guangzhou and Zhongshan, controlling for meteorological factors. Cross-correlation functions were applied to identify the time lags of the effect of each weather factor on weekly dengue cases. Models were validated using receiver operating characteristic (ROC) curves and k-fold cross-validation. Our results showed that weekly dengue cases in Zhongshan were significantly associated with dengue cases in Guangzhou, smoothed as a moving average over the prior 5 weeks (Relative Risk (RR) = 2.016, 95% Confidence Interval (CI): 1.845-2.203), controlling for weather factors including minimum temperature, relative humidity, and rainfall. ROC curve analysis indicated our forecasting model performed well at different prediction thresholds, with 0.969 area under the receiver operating characteristic curve (AUC) for a threshold of 3 cases per week, 0.957 AUC for a threshold of 2 cases per week, and 0.938 AUC for a threshold of 1 case per week. Models established during k-fold cross-validation also had considerable AUC (average 0.938-0.967). The sensitivity and specificity obtained from k-fold cross-validation were 78.83% and 92.48% respectively, with a forecasting threshold of 3 cases per week; 91.17% and 91.39%, with a threshold of 2 cases; and 85.16% and 87.25%, with a threshold of 1 case. The out-of-sample prediction for the epidemics in 2014 also showed satisfactory performance. Our study findings suggest that the occurrence of dengue outbreaks in Guangzhou could impact dengue outbreaks in Zhongshan under suitable weather conditions. Future studies should focus on developing integrated early warning systems for dengue transmission that include local weather and human movement.
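A negative binomial regression with a log link of this form is a one-liner in statsmodels. A sketch on simulated weekly data (the variable names and effect sizes are ours, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical weekly series; the real data came from the Chinese National
# Disease Surveillance Reporting System and Zhongshan weather records.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "zs_cases": rng.poisson(2, 200),
    "gz_cases_ma5": rng.gamma(2, 5, 200),  # 5-week moving average, Guangzhou
    "min_temp": rng.normal(18, 6, 200),
    "humidity": rng.uniform(50, 95, 200),
    "rainfall": rng.gamma(1.5, 10, 200),
})

# Negative binomial regression with a log link, as in the abstract.
model = smf.glm("zs_cases ~ gz_cases_ma5 + min_temp + humidity + rainfall",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(np.exp(model.params["gz_cases_ma5"]))  # rate ratio per unit increase
```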
Zhong, Buqing; Liang, Tao; Wang, Lingqing; Li, Kexin
2014-08-15
An extensive soil survey was conducted to study pollution sources and delineate heavy metal contamination in one of the metalliferous industrial bases in the karst areas of southwest China. A total of 597 topsoil samples were collected and the concentrations of five heavy metals, namely Cd, As (metalloid), Pb, Hg and Cr, were analyzed. Stochastic models including a conditional inference tree (CIT) and a finite mixture distribution model (FMDM) were applied to identify the sources and partition the contributions from natural and anthropogenic sources for heavy metals in topsoils of the study area. The regression trees for Cd, As, Pb and Hg were found to depend mostly on indicators of anthropogenic activities such as industrial type and distance from urban area, while the regression tree for Cr was mainly influenced by geogenic characteristics. The FMDM analysis showed that the geometric means of the modeled background values for Cd, As, Pb, Hg and Cr were close to the background values previously reported for the study area, while contamination by Cd and Hg was widespread in the study area, imposing potentially detrimental effects on organisms through the food chain. Finally, the probabilities of single and multiple heavy metals exceeding the threshold values derived from the FMDM were estimated using indicator kriging (IK) and multivariate indicator kriging (MVIK). High probabilities of exceeding the thresholds were associated with metalliferous production and atmospheric deposition of heavy metals transported from the urban and industrial areas. Geostatistics coupled with stochastic models provide an effective way to delineate multiple heavy metal pollution and facilitate improved environmental management. Copyright © 2014 Elsevier B.V. All rights reserved.
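The FMDM idea, separating a geogenic background component from an anthropogenic one, can be illustrated with a two-component Gaussian mixture on log concentrations. A sketch on simulated Cd data (the component parameters are invented):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical log-transformed topsoil Cd concentrations: a low "background"
# component mixed with a higher anthropogenic component.
rng = np.random.default_rng(4)
log_cd = np.concatenate([rng.normal(-1.2, 0.3, 400),   # geogenic background
                         rng.normal(0.2, 0.5, 197)])   # contaminated sites

gmm = GaussianMixture(n_components=2, random_state=0).fit(log_cd.reshape(-1, 1))
bg = int(np.argmin(gmm.means_.ravel()))
# Geometric mean of the background component (data are log-transformed).
print("modelled background geometric mean:", np.exp(gmm.means_.ravel()[bg]))
```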
Manzoni, Paolo; Memo, Luigi; Mostert, Michael; Gallo, Elena; Guardione, Roberta; Maestri, Andrea; Saia, Onofrio Sergio; Opramolla, Anna; Calabrese, Sara; Tavella, Elena; Luparia, Martina; Farina, Daniele
2014-09-01
Retinopathy of prematurity (ROP) is a multifactorial disease with evidence of many associated risk factors. Erythropoietin has been reported to be associated with this disorder in a murine model, as well as in humans in some single-center reports. We reviewed the data from two large tertiary NICUs in Italy to test the hypothesis that the use of erythropoietin may be associated with the development of the most severe stages of ROP in extremely low birth weight (ELBW) neonates. Retrospective study by review of patient charts and eye examination index cards on infants with birth weight <1000 g admitted to two large tertiary NICUs in Northern Italy (Sant'Anna Hospital NICU in Torino, and Ca' Foncello Hospital Neonatology in Treviso) in the years 2005 to 2007. The standard protocol of administration of EPO in the two NICUs consisted of 250 IU/kg three times a week in 6-week courses (4-week courses in 1001-1500 g infants). Univariate analysis was performed to assess whether the use of EPO was associated with severe (threshold) ROP. A control, multivariate statistical analysis was performed by entering into a logistic regression model a number of neonatal and perinatal variables that, in univariate analysis, had been associated with threshold ROP. During the study period, 211 ELBW infants were born at the two facilities and survived till discharge. Complete data were obtained for 197 of them. Threshold retinopathy of prematurity occurred in 26.9% (29 of 108) of ELBW infants who received erythropoietin therapy, as compared with 13.5% (12 of 89) of those who did not receive erythropoietin (OR 2.35; 95% CI 1.121-4.949; p=0.02 in univariate analysis, and p=0.04 at multivariate logistic regression after controlling for the following variables: birth weight, gestational age, days on supplemental oxygen, systemic fungal infection, vaginal delivery). Use of erythropoietin was not significantly associated with other major sequelae of prematurity (intraventricular hemorrhage, bronchopulmonary dysplasia, necrotizing enterocolitis). Use of erythropoietin is an additional, independent predictor of threshold ROP in ELBW neonates. Larger prospective, population-based studies should further clarify the extent of this association. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Work ability in vibration-exposed workers.
Gerhardsson, L; Hagberg, M
2014-12-01
Hand-arm vibration exposure may cause hand-arm vibration syndrome (HAVS) including sensorineural disturbances. To investigate which factors had the strongest impact on work ability in vibration-exposed workers. A cross-sectional study in which vibration-exposed workers referred to a department of occupational and environmental medicine were compared with a randomized sample of unexposed subjects from the general population of the city of Gothenburg. All participants underwent a structured interview, answered several questionnaires and had a physical examination including measurements of hand and finger muscle strength and vibrotactile and thermal perception thresholds. The vibration-exposed group (47 subjects) showed significantly reduced sensitivity to cold and warmth in digit 2 bilaterally (P < 0.01) and in digit 5 in the left hand (P < 0.05) and to warmth in digit 5 in the right hand (P < 0.01), compared with the 18 referents. Similarly, tactilometry showed significantly raised vibration perception thresholds among the workers (P < 0.05). A strong relationship was found for the following multiple regression model: estimated work ability = 11.4 - 0.1 × age - 2.3 × current stress level - 2.5 × current pain in hands/arms (multiple r = 0.68; P < 0.001). Vibration-exposed workers showed raised vibrotactile and thermal perception thresholds, compared with unexposed referents. Multiple regression analysis indicated that stress disorders and muscle pain in hands/arms must also be considered when evaluating work ability among subjects with HAVS. © The Author 2014. Published by Oxford University Press on behalf of the Society of Occupational Medicine.
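The reported model is simple enough to state directly in code. The sketch below assumes stress and pain are entered on the rating scales used in the study:

```python
def estimated_work_ability(age, stress_level, hand_arm_pain):
    """Regression model reported in the abstract (multiple r = 0.68):
    work ability = 11.4 - 0.1*age - 2.3*current stress - 2.5*current
    hand/arm pain. Inputs for stress and pain are assumed to be on the
    study's rating scales."""
    return 11.4 - 0.1 * age - 2.3 * stress_level - 2.5 * hand_arm_pain

print(estimated_work_ability(age=45, stress_level=1, hand_arm_pain=1))  # 2.1
```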
Tremblay, François; Mireault, Annie-Claude; Dessureault, Liam; Manning, Hélène; Sveistrup, Heidi
2005-07-01
In the present report, we extend our previous observations on the effect of age on postural stabilization from fingertip contact (Exp Brain Res 157 (2004) 275) to examine the possible influence of sensory thresholds measured at the fingertip on the magnitude of contact forces. Participants (young, n=25, 19-32 years; old, n=35, 60-86 years) underwent psychophysical testing of the right index finger to determine thresholds for spatial acuity, pressure sensitivity and kinesthetic acuity. Spatial acuity was determined from the ability to detect gaps of different widths, while Semmes-Weinstein monofilaments were used for pressure sensitivity. Kinesthetic acuity was determined by asking participants to discriminate plates of different thicknesses using a thumb-index precision grip. These tests were selected on the basis that each reflected different sensory coding mechanisms (resolution of spatial stimuli, detection of mechanical forces and integration of multi-sensory inputs for hand conformation) and thus provided specific information about the integrity and function of mechanoreceptive afferents innervating the hand. After log transformation, thresholds were first examined to determine the influence of age (young and old) and gender (male, female) on tactile acuity. Sensory thresholds were then entered into multiple linear regression models to examine their ability to predict fingertip contact forces (normal and tangential) applied to a smooth surface when subjects stood with eyes closed on either a firm or a compliant support surface. As expected, age exerted a significant effect (p<0.01) on all three thresholds, but its impact was greater on spatial acuity than on pressure sensitivity or kinesthetic acuity. Gender had a marginal impact on pressure sensitivity thresholds only. The regression analyses revealed that tactile thresholds determined at the index fingertip accounted for a substantial proportion of the variance (up to 30%) seen in the contact forces deployed on the touch-plate, especially those exerted in the normal direction. The same analyses further revealed that much of the variance explained by the models arose from inter-individual differences in tactile spatial acuity and not from differences in pressure sensitivity or in kinesthetic acuity. Thus, of all three tests, the spatial acuity task was the most sensitive at detecting differences in hand sensibility both within and between age groups and, accordingly, was also better at predicting the magnitude of fingertip forces deployed for postural stabilization. Since spatial acuity is critically dependent upon innervation density, we conclude that the degree of functional innervation at the fingertip was likely an important factor in determining the capacity of older participants to use contact cues for stability purposes, forcing the most affected individuals to exert unusually high pressures in order to achieve stabilization in the presence of reduced tactile inputs arising from contact with the touched surface.
Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M
2017-06-01
Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work neither completely covers the GWAS setting nor extends to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice-weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given that logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies: first, analyse all SNPs with logistic regression to identify variants associated below a pre-defined P-value threshold; second, fit Cox regression (appropriately weighted in case-cohort studies) to the identified SNPs to ensure accurate estimation of the association with disease.
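A minimal sketch of the proposed two-step approach for a single SNP in the cohort setting, using statsmodels for the logistic screen and the lifelines package for the Cox fit (toy data; case-cohort weighting is omitted here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# Toy cohort: one SNP dosage, follow-up time, and event indicator.
rng = np.random.default_rng(5)
n = 2000
snp = rng.binomial(2, 0.3, n)
time = rng.exponential(10 / (1 + 0.3 * snp))
event = rng.binomial(1, 0.7, n).astype(float)
df = pd.DataFrame({"snp": snp, "time": time, "event": event})

# Step 1: cheap logistic screen of every SNP; keep those below a P threshold.
logit = sm.Logit(df["event"], sm.add_constant(df[["snp"]])).fit(disp=0)
if logit.pvalues["snp"] < 5e-2:  # a lenient screening threshold for toy data
    # Step 2: Cox regression on surviving SNPs for accurate hazard ratios.
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "p"]])
```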
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship between MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V, five times each over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship between variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than that of size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
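Quantile regression of test-retest variability on MD can be done with statsmodels. A sketch with simulated MD/SD pairs (the slope and noise are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject values: mean defect (MD, dB) and the standard
# deviation of MD over five repeated tests.
rng = np.random.default_rng(6)
md = rng.uniform(-25, 2, 74)
sd_md = 0.5 - 0.06 * md + rng.gamma(2, 0.2, 74)
df = pd.DataFrame({"md": md, "sd_md": sd_md})

# Median (q=0.5) quantile regression of variability on damage.
fit = smf.quantreg("sd_md ~ md", df).fit(q=0.5)
print(fit.params)
```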
Zemek, Allison; Garg, Rohit; Wong, Brian J. F.
2014-01-01
Objectives/Hypothesis Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design Descriptive, focus group survey. Methods Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.360 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) were asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
Lee, Mei-Ling Ting; Whitmore, G A; Laden, Francine; Hart, Jaime E; Garshick, Eric
2009-01-01
A case-control study of lung cancer mortality in U.S. railroad workers in jobs with and without diesel exhaust exposure is reanalyzed using a new threshold regression methodology. The study included 1256 workers who died of lung cancer and 2385 controls who died primarily of circulatory system diseases. Diesel exhaust exposure was assessed using railroad job history from the US Railroad Retirement Board and an industrial hygiene survey. Smoking habits were available from next-of-kin and potential asbestos exposure was assessed by job history review. The new analysis reassesses lung cancer mortality and examines circulatory system disease mortality. Jobs with regular exposure to diesel exhaust had a survival pattern characterized by an initial delay in mortality, followed by a rapid deterioration of health prior to death. The pattern is seen in subjects dying of lung cancer, circulatory system diseases, and other causes. The unique pattern is illustrated using a new type of Kaplan-Meier survival plot in which the time scale represents a measure of disease progression rather than calendar time. The disease progression scale accounts for a healthy-worker effect when describing the effects of cumulative exposures on mortality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.
Reactor fuel failure data sets in the form of initial power (Pi), final power (Pf), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM, and a nonlinear regression based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain rate dependent stress-strain properties of the Zircaloy cladding.
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
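Applying any one of the published equations is a logistic transform of winter streamflow. A sketch with placeholder coefficients (the real, basin-specific values are tabulated in the report):

```python
import math

def drought_flow_probability(winter_flow, b0, b1):
    """Probability that summer streamflow falls below a drought threshold,
    from one of the report's logistic equations. The coefficients b0 and b1
    are placeholders; actual values are basin-specific and published in the
    report's tables."""
    logit = b0 + b1 * winter_flow
    return 1.0 / (1.0 + math.exp(-logit))

# Example with made-up coefficients: higher winter flow, lower drought odds.
print(drought_flow_probability(winter_flow=120.0, b0=2.0, b1=-0.03))
```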
Mooij, Anne H; Frauscher, Birgit; Amiri, Mina; Otte, Willem M; Gotman, Jean
2016-12-01
To assess whether there is a difference in the background activity in the ripple band (80–200 Hz) between epileptic and non-epileptic channels, and to assess whether this difference is sufficient for their reliable separation. We calculated the mean and standard deviation of wavelet entropy in 303 non-epileptic and 334 epileptic channels from 50 patients with intracerebral depth electrodes and used these measures as predictors in a multivariable logistic regression model. We assessed sensitivity, positive predictive value (PPV) and negative predictive value (NPV) based on a probability threshold corresponding to 90% specificity. The probability of a channel being epileptic increased with higher mean (p=0.004) and particularly with higher standard deviation (p<0.0001). The performance of the model was, however, not sufficient for fully classifying the channels. With a threshold corresponding to 90% specificity, sensitivity was 37%, PPV was 80%, and NPV was 56%. A channel with a high standard deviation of entropy is likely to be epileptic; with a threshold corresponding to 90% specificity, our model can reliably select a subset of epileptic channels. Most studies have concentrated on brief ripple events. We showed that background activity in the ripple band also has some ability to discriminate epileptic channels. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
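Choosing a probability cutoff for a target specificity is just a quantile of the model's scores on non-epileptic channels. A sketch with simulated predicted probabilities (the beta distributions are our stand-in for the fitted model's outputs):

```python
import numpy as np

def threshold_at_specificity(scores_nonepileptic, target_spec=0.90):
    """Probability cutoff giving ~90% specificity: the quantile of predicted
    probabilities among the non-epileptic channels."""
    return float(np.quantile(scores_nonepileptic, target_spec))

# Hypothetical predicted probabilities from the logistic model.
rng = np.random.default_rng(7)
p_nonepi = rng.beta(2, 5, 303)   # non-epileptic channels score low
p_epi = rng.beta(4, 3, 334)      # epileptic channels score higher
cut = threshold_at_specificity(p_nonepi)
print("sensitivity at 90% specificity:", np.mean(p_epi > cut).round(2))
```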
Intensity level for exercise training in fibromyalgia by using mathematical models.
Lemos, Maria Carolina D; Valim, Valéria; Zandonade, Eliana; Natour, Jamil
2010-03-22
It has not been assessed before whether mathematical models described in the literature for exercise prescription can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate for controlling exercise intensity. A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 x age") were correlated with the achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to the heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for the correlations, and residual analysis was used to assess the adequacy of the models. Age-predicted HRMax and HRAT formulas had a good correlation with the achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription at the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercise at the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome; these values are higher and must be corrected for trained patients.
Intensity level for exercise training in fibromyalgia by using mathematical models
2010-01-01
Background It has not been assessed before whether mathematical models described in the literature for exercise prescription can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate for controlling exercise intensity. Methods A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 × age") were correlated with the achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to the heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for the correlations, and residual analysis was used to assess the adequacy of the models. Results Age-predicted HRMax and HRAT formulas had a good correlation with the achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription at the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Conclusion Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercise at the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome; these values are higher and must be corrected for trained patients. PMID:20307323
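The prescription rule reduces to the Karvonen (heart rate reserve) formula combined with an age-predicted HRMax. A sketch using the abstract's "208 minus 0.7 × age" formula and its roughly 52-60% HRR band (the resting heart rate is an assumed example value):

```python
def karvonen_target_hr(hr_rest, hr_max, fraction_hrr):
    """Karvonen formula: target HR = resting HR + a fraction of the heart
    rate reserve (HRMax - resting HR)."""
    return hr_rest + fraction_hrr * (hr_max - hr_rest)

hr_max_est = 208 - 0.7 * 45          # age-predicted HRMax for a 45-year-old
for f in (0.52, 0.60):               # anaerobic-threshold band from the study
    print(round(karvonen_target_hr(70, hr_max_est, f)), "bpm")
```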
Mapping Critical Loads of Atmospheric Nitrogen Deposition in the Rocky Mountains, USA
NASA Astrophysics Data System (ADS)
Nanus, L.; Clow, D. W.; Stephens, V. C.; Saros, J. E.
2010-12-01
Atmospheric nitrogen (N) deposition can adversely affect sensitive aquatic ecosystems at high elevations in the western United States. Critical loads are the deposition rates of a given pollutant below which harmful ecological effects are thought not to occur. GIS-based landscape models were used to create maps for high-elevation areas across the Rocky Mountain region showing current atmospheric deposition rates of nitrogen (N), critical loads of N, and exceedances of critical loads of N. Atmospheric N deposition maps for the region were developed at 400 meter resolution using gridded precipitation data and spatially interpolated chemical concentrations in rain and snow. Critical loads maps were developed based on chemical thresholds corresponding to observed ecological effects, and on estimated ecosystem sensitivities calculated from basin characteristics. Diatom species assemblages were used as an indicator of ecosystem health to establish critical loads of N. Chemical thresholds (concentrations) were identified for surface waters by using a combination of in-situ growth experiments and observed spatial patterns in surface-water chemistry and diatom species assemblages across an N deposition gradient. Ecosystem sensitivity was estimated using a multiple-linear-regression approach in which observed surface water nitrate concentrations at 530 sites were regressed against estimates of inorganic N deposition and basin characteristics (topography, soil type and amount, bedrock geology, vegetation type) to develop predictive models of surface water chemistry. Modeling results indicated that the significant explanatory variables included percent slope, soil permeability, and vegetation type (including barren land, shrub, and grassland), and the models were used to predict high-elevation surface water nitrate concentrations across the Rocky Mountains. Chemical threshold concentrations were substituted into an inverted form of the model equations to estimate critical loads for each stream reach within a basin, from which critical loads maps were created. Atmospheric N deposition maps were overlaid on the critical loads maps to identify areas in the Rocky Mountain region where critical loads are being exceeded, or where they may be exceeded in the future. This approach may be transferable to other high-elevation areas of the United States and the world.
NASA Technical Reports Server (NTRS)
Garner, Gregory G.; Thompson, Anne M.
2013-01-01
An ensemble statistical post-processor (ESP) is developed for the National Air Quality Forecast Capability (NAQFC) to address the unique challenges of forecasting surface ozone in Baltimore, MD. Air quality and meteorological data were collected from the eight monitors that constitute the Baltimore forecast region. These data were used to build the ESP using a moving-block bootstrap, regression tree models, and extreme-value theory. The ESP was evaluated using a 10-fold cross-validation to avoid evaluation with the same data used in the development process. Results indicate that the ESP is conditionally biased, likely due to slight overfitting while training the regression tree models. When viewed from the perspective of a decision-maker, the ESP provides a wealth of additional information previously not available through the NAQFC alone. The user is provided the freedom to tailor the forecast to the decision at hand by using decision-specific probability thresholds that define a forecast for an ozone exceedance. Taking advantage of the ESP, the user not only receives an increase in value over the NAQFC, but also receives value for
NASA Astrophysics Data System (ADS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-05-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, and should provide a significant spatial and temporal reference in gauged areas. In this paper, an analysis of the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
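Empirical E-D thresholds are commonly modelled as a power law, E = αD^β, which NLS fits directly. A sketch with scipy on simulated triggering events (α, β, and the scatter are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def threshold_curve(D, alpha, beta):
    """Power-law rainfall threshold E = alpha * D**beta (E: cumulated event
    rainfall, mm; D: rainfall duration, h)."""
    return alpha * D**beta

# Hypothetical (D, E) pairs for rainfall events that triggered landslides.
rng = np.random.default_rng(8)
D = rng.uniform(1, 200, 120)
E = 7.5 * D**0.4 * rng.lognormal(0, 0.25, 120)

# Nonlinear least squares (the NLS approach the abstract found performed best).
(alpha, beta), _ = curve_fit(threshold_curve, D, E, p0=(5.0, 0.5))
print(f"E = {alpha:.1f} * D^{beta:.2f}")
```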
NASA Technical Reports Server (NTRS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-01-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, and should provide a significant spatial and temporal reference in gauged areas. In this paper, an analysis of the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.
Time Poverty Thresholds and Rates for the US Population
ERIC Educational Resources Information Center
Kalenkoski, Charlene M.; Hamrick, Karen S.; Andrews, Margaret
2011-01-01
Time constraints, like money constraints, affect Americans' well-being. This paper defines what it means to be time poor based on the concepts of necessary and committed time and presents time poverty thresholds and rates for the US population and certain subgroups. Multivariate regression techniques are used to identify the key variables…
Santos, Frédéric; Guyomarc'h, Pierre; Bruzek, Jaroslav
2014-12-01
The accuracy of identification tools in forensic anthropology relies primarily on the variation inherent in the data upon which they are built. Sex determination methods based on craniometrics are widely used and known to be sensitive to several factors (e.g. sample distribution, population, age, secular trends, measurement technique, etc.). The goal of this study is to discuss the potential variations linked to the statistical treatment of the data. Traditional craniometrics of four samples extracted from documented osteological collections (from Portugal, France, the U.S.A., and Thailand) were used to test three different classification methods: linear discriminant analysis (LDA), logistic regression (LR), and support vector machines (SVM). The Portuguese sample was set as the training model, to which the other samples were applied in order to assess the validity and reliability of the different models. The tests were performed using different parameters: some included the selection of the best predictors; some included a strict decision threshold (sex assessed only if the related posterior probability was high, introducing the notion of an indeterminate result); and some used an unbalanced sex ratio. Results indicated that LR tends to perform slightly better than the other techniques and offers a better selection of predictors. Also, the use of a decision threshold (i.e. p>0.95) is essential to ensure acceptable reliability of sex determination methods based on craniometrics. Although the Portuguese, French, and American samples share a similar sexual dimorphism, application of the Western models to the Thai sample (which displayed a lower degree of dimorphism) was unsuccessful. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
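The strict decision threshold amounts to classifying only when the posterior probability clears 0.95 either way and reporting the rest as indeterminate. A sketch with scikit-learn logistic regression on simulated craniometric data (the features and effect sizes are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Hypothetical craniometric predictors, with males shifted upward on average.
n = 400
sex = rng.binomial(1, 0.5, n)                 # 1 = male, 0 = female
X = rng.normal(0, 1, (n, 2)) + sex[:, None] * 1.2

clf = LogisticRegression().fit(X, sex)
p_male = clf.predict_proba(X)[:, 1]

# Strict decision threshold: classify only when p > 0.95 either way,
# otherwise report the case as indeterminate.
decision = np.where(p_male > 0.95, "male",
            np.where(p_male < 0.05, "female", "indeterminate"))
print(dict(zip(*np.unique(decision, return_counts=True))))
```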
Chen, Hung-Yuan; Chiu, Yen-Ling; Hsu, Shih-Ping; Pai, Mei-Fen; Yang, Ju-Yeh; Lai, Chun-Fu; Lu, Hui-Min; Huang, Shu-Chen; Yang, Shao-Yu; Wen, Su-Yin; Chiu, Hsien-Ching; Hu, Fu-Chang; Peng, Yu-Sen; Jee, Shiou-Hwa
2013-01-01
Background Uremic pruritus is a common and intractable symptom in patients on chronic hemodialysis, but factors associated with the severity of pruritus remain unclear. This study aimed to explore the associations of metabolic factors and dialysis adequacy with the aggravation of pruritus. Methods We conducted a 5-year prospective cohort study on patients with maintenance hemodialysis. A visual analogue scale (VAS) was used to assess the intensity of pruritus. Patient demographic and clinical characteristics, laboratory parameters, dialysis adequacy (assessed by Kt/V), and pruritus intensity were recorded at baseline and follow-up. Change score analysis of the difference score of VAS between baseline and follow-up was performed using multiple linear regression models. The optimal threshold of Kt/V, which is associated with the aggravation of uremic pruritus, was determined by generalized additive models and receiver operating characteristic analysis. Results A total of 111 patients completed the study. Linear regression analysis showed that lower Kt/V and use of low-flux dialyzer were significantly associated with the aggravation of pruritus after adjusting for the baseline pruritus intensity and a variety of confounding factors. The optimal threshold value of Kt/V for pruritus was 1.5 suggested by both generalized additive models and receiver operating characteristic analysis. Conclusions Hemodialysis with the target of Kt/V ≥1.5 and use of high-flux dialyzer may reduce the intensity of pruritus in patients on chronic hemodialysis. Further clinical trials are required to determine the optimal dialysis dose and regimen for uremic pruritus. PMID:23940749
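The ROC step, picking the Kt/V cutoff, can be reproduced with Youden's J statistic. A sketch on simulated data (the dependence of aggravation on Kt/V is invented; the study's generalized additive models are not shown):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: Kt/V values and whether pruritus worsened at follow-up.
rng = np.random.default_rng(10)
ktv = rng.normal(1.5, 0.25, 111)
worsened = (rng.normal(1.5, 0.25, 111) > ktv).astype(int)  # low Kt/V -> worse

# ROC analysis on -Kt/V (lower adequacy predicts aggravation);
# Youden's J picks the optimal cutoff, ~1.5 in the study.
fpr, tpr, thresholds = roc_curve(worsened, -ktv)
best = np.argmax(tpr - fpr)
print("optimal Kt/V cutoff:", round(-thresholds[best], 2))
```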
Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim
2016-12-01
Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and the resulting delay in treatment and transport may have negative effects on the patient. Therefore, our study aims to perform a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables for peripheral venous cannulations in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under the curve (AUC>0.6) of their respective receiver operating characteristic curve. A total of 762 intravenous cannulations were included. In all, 22% of punctures failed on the first attempt and 13% of punctures exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent model of four factors (failure model factors plus vein visibility with tourniquet) for the exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. Of the numerous factors subjectively perceived as possibly exerting influences on cannulation, only the universal factors (not exclusive to emergency care) of lighting, vein visibility and palpability proved to be valid predictors of cannulation failure and exceedance of a 2 min threshold.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm that the tumor is within the margin or to initialize motion compensation if it is out of the margin.
Probabilistic Forecasting of Surface Ozone with a Novel Statistical Approach
NASA Technical Reports Server (NTRS)
Balashov, Nikolay V.; Thompson, Anne M.; Young, George S.
2017-01-01
The recent change in the Environmental Protection Agency's surface ozone regulation, lowering the surface ozone daily maximum 8-h average (MDA8) exceedance threshold from 75 to 70 ppbv, poses significant challenges to U.S. air quality (AQ) forecasters responsible for ozone MDA8 forecasts. The forecasters, supplied with only a few AQ model products, end up relying heavily on self-developed tools. To help U.S. AQ forecasters, this study explores a surface ozone MDA8 forecasting tool that is based solely on statistical methods and standard meteorological variables from numerical weather prediction (NWP) models. The model combines the self-organizing map (SOM), which is a clustering technique, with a stepwise weighted quadratic regression using meteorological variables as predictors of ozone MDA8. The SOM method identifies different weather regimes, to distinguish between various modes of ozone variability, and groups them according to similarity. In this way, when a regression is developed for a specific regime, data from the other regimes are also used, with weights that are based on their similarity to this specific regime. This approach, regression in SOM (REGiS), yields a distinct model for each regime, taking into account both the training cases for that regime and other similar training cases. To produce probabilistic MDA8 ozone forecasts, REGiS weighs and combines all of the developed regression models on the basis of the weather patterns predicted by an NWP model. REGiS is evaluated over the San Joaquin Valley in California and the northeastern plains of Colorado. The results suggest that the model performs best when trained and adjusted separately for an individual AQ station and its corresponding meteorological site.
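A rough sketch of the cluster-then-regress idea behind REGiS, substituting k-means for the SOM and Gaussian similarity weights for the SOM's neighborhood structure (all data and weightings here are illustrative; the actual method uses stepwise weighted quadratic regression):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
# Hypothetical daily meteorology (temperature, wind, RH) and ozone MDA8.
met = rng.normal(0, 1, (1000, 3))
mda8 = 50 + 10 * met[:, 0] - 5 * met[:, 1] + rng.normal(0, 5, 1000)

# Stand-in for the SOM step: cluster days into weather regimes. For each
# regime, fit a regression weighting all days by similarity to its centroid.
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(met)
models = []
for c in km.cluster_centers_:
    w = np.exp(-0.5 * np.sum((met - c) ** 2, axis=1))  # similarity weights
    X = np.column_stack([np.ones(len(met)), met])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], mda8 * sw, rcond=None)
    models.append(beta)
print(len(models), "regime-specific regressions fitted")
```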
Skin cancer incidence among atomic bomb survivors from 1958 to 1996.
Sugiyama, Hiromi; Misumi, Munechika; Kishikawa, Masao; Iseki, Masachika; Yonehara, Shuji; Hayashi, Tomayoshi; Soda, Midori; Tokuoka, Shoji; Shimizu, Yukiko; Sakata, Ritsu; Grant, Eric J; Kasagi, Fumiyoshi; Mabuchi, Kiyohiko; Suyama, Akihiko; Ozasa, Kotaro
2014-05-01
The radiation risk of skin cancer by histological types has been evaluated in the atomic bomb survivors. We examined 80,158 of the 120,321 cohort members who had their radiation dose estimated by the latest dosimetry system (DS02). Potential skin tumors diagnosed from 1958 to 1996 were reviewed by a panel of pathologists, and radiation risk of the first primary skin cancer was analyzed by histological types using a Poisson regression model. A significant excess relative risk (ERR) of basal cell carcinoma (BCC) (n = 123) was estimated at 1 Gy (0.74, 95% confidence interval (CI): 0.26, 1.6) for those age 30 at exposure and age 70 at observation based on a linear-threshold model with a threshold dose of 0.63 Gy (95% CI: 0.32, 0.89) and a slope of 2.0 (95% CI: 0.69, 4.3). The estimated risks were 15, 5.7, 1.3 and 0.9 for age at exposure of 0-9, 10-19, 20-39, over 40 years, respectively, and the risk increased 11% with each one-year decrease in age at exposure. The ERR for squamous cell carcinoma (SCC) in situ (n = 64) using a linear model was estimated as 0.71 (95% CI: 0.063, 1.9). However, there were no significant dose responses for malignant melanoma (n = 10), SCC (n = 114), Paget disease (n = 10) or other skin cancers (n = 15). The significant linear radiation risk for BCC with a threshold at 0.63 Gy suggested that the basal cells of the epidermis had a threshold sensitivity to ionizing radiation, especially for young persons at the time of exposure.
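A linear-threshold dose response of this kind can be fit as a Poisson regression on the dose excess above the threshold d0. A sketch with statsmodels on simulated data (here d0 is fixed at the reported 0.63 Gy rather than profiled, and the person-year offset is omitted):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical records: individual dose (Gy) and BCC case counts.
rng = np.random.default_rng(12)
dose = rng.uniform(0, 3, 500)
d0 = 0.63                                             # threshold dose (Gy)
lam = 0.5 * (1 + 2.0 * np.clip(dose - d0, 0, None))   # ERR slope 2.0 above d0
cases = rng.poisson(lam)

# Linear-threshold dose response: excess risk only above d0.
X = sm.add_constant(np.clip(dose - d0, 0, None))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(fit.params)
```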
Effect of postprandial thermogenesis on the cutaneous vasodilatory response during exercise.
Hayashi, Keiji; Ito, Nozomi; Ichikawa, Yoko; Suzuki, Yuichi
2014-08-01
To examine the effect of postprandial thermogenesis on the cutaneous vasodilatory response, 10 healthy male subjects exercised for 30 min on a cycle ergometer at 50% of peak oxygen uptake, with and without food intake. Mean skin temperature, mean body temperature (Tb), heart rate, oxygen uptake, carbon dioxide elimination, and respiratory quotient were all significantly higher at baseline in the session with food intake than in the session without food intake. To evaluate the cutaneous vasodilatory response, relative laser Doppler flowmetry values were plotted against esophageal temperature (Tes) and Tb. Regression analysis revealed that the Tes threshold for cutaneous vasodilation tended to be higher with food intake than without it, but there were no significant differences in sensitivity. To clarify the effect of postprandial thermogenesis on the threshold for cutaneous vasodilation, the between-session differences in the Tes threshold and the Tb threshold were plotted against the between-session differences in baseline Tes and baseline Tb, respectively. Linear regression analysis of the resultant plots showed significant positive linear relationships (Tes: r = 0.85, P < 0.01; Tb: r = 0.67, P < 0.05). These results suggest that postprandial thermogenesis increases baseline body temperature, which raises the body temperature threshold for cutaneous vasodilation during exercise.
Topsakal, Vedat; Fransen, Erik; Schmerber, Sébastien; Declau, Frank; Yung, Matthew; Gordts, Frans; Van Camp, Guy; Van de Heyning, Paul
2006-09-01
To report the preoperative audiometric profile of surgically confirmed otosclerosis. Retrospective, multicenter study. Four tertiary referral centers. One thousand sixty-four surgically confirmed patients with otosclerosis. Therapeutic ear surgery for hearing improvement. Preoperative audiometric air conduction (AC) and bone conduction (BC) hearing thresholds were obtained retrospectively for 1064 patients with otosclerosis. A cross-sectional multiple linear regression analysis was performed on audiometric data of affected ears. Influences of age and sex were analyzed and age-related typical audiograms were created. Bone conduction thresholds were corrected for the Carhart effect and presbyacusis; in addition, we tested whether a separate cochlear otosclerosis component existed. Corrected thresholds were then analyzed separately for progression of cochlear otosclerosis. The study population consisted of 35% men and 65% women (mean age, 44 yr). The mean pure-tone average at 0.5, 1, and 2 kHz was 57 dB hearing level. Multiple linear regression analysis showed significant progression for all measured AC and BC thresholds. The average annual threshold deterioration for AC was 0.45 dB/yr and the annual threshold deterioration for BC was 0.37 dB/yr. The average annual gap expansion was 0.08 dB/yr. The corrected BC thresholds for the Carhart effect and presbyacusis remained significantly different from zero, but only showed progression at 2 kHz. The preoperative audiological profile of otosclerosis is described. There is a significant sensorineural component in patients with otosclerosis planned for stapedotomy, which is worse than age-related hearing loss by itself. Deterioration rates of AC and BC thresholds have been reported, which can be helpful in clinical practice and might also guide the characterization of allegedly different phenotypes for familial and sporadic otosclerosis.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models
Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.
2016-01-01
IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS A three-dimensional (3-D) anatomic nasal model was created from a computed tomographic scan using computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct silicone nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa). Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE NA. PMID:27124818
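A minimal sketch of how such a threshold can be read off a fitted logistic model: the 50% point is -intercept/slope. The data and values below are invented, not the study's.

```python
# Estimate the stiffness at which half of raters judge tip support adequate.
# Hypothetical data; the 50% point of a logistic fit is -b0/b1.
import numpy as np
import statsmodels.api as sm

young_modulus = np.array([0.042, 0.086, 0.098, 0.252, 0.302] * 6)  # MPa
noise = np.random.default_rng(1).normal(0, 0.08, 30)
adequate = (young_modulus + noise > 0.1).astype(int)   # simulated judgments

X = sm.add_constant(young_modulus)
fit = sm.Logit(adequate, X).fit(disp=0)
b0, b1 = fit.params
print(f"estimated 50% threshold: {-b0 / b1:.3f} MPa")
```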
Foster, Guy M.; Graham, Jennifer L.
2016-04-06
The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete water-quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River. The models documented in this report are useful for characterizing changes in water-quality conditions through time, characterizing potentially harmful cyanobacterial events, and indicating changes in water-quality conditions that may affect drinking-water treatment processes.
NASA Astrophysics Data System (ADS)
Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.
2005-05-01
Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
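Regression trees expose their split points, which can be read as candidate response thresholds of the kind sought above; a small sketch with invented predictor names and synthetic data:

```python
# Fit a regression tree to a biological index and print the learned splits,
# which act as empirical response thresholds. Names are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(300, 3))     # e.g., % fines, peak-flow index, slope
ept = np.where(X[:, 0] > 0.4, 10, 30) + 5 * X[:, 2] + rng.normal(0, 2, 300)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, ept)
print(export_text(tree, feature_names=["fines", "flow", "slope"]))
```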
NASA Astrophysics Data System (ADS)
Jiang, Y.; Liu, J.-R.; Luo, Y.; Yang, Y.; Tian, F.; Lei, K.-C.
2015-11-01
Groundwater in Beijing has been overexploited for a long time, causing the groundwater level to decline continuously and land subsidence areas to expand, which has constrained sustainable economic and social development. Many years of study show a good spatiotemporal correspondence between groundwater levels and land subsidence. To provide a scientific basis for subsequent land subsidence prevention and treatment, quantitative research on the relationship between groundwater level and settlement is necessary. Multiple linear regression models were set up using long series of monitoring data on layered water tables and settlement at the Tianzhu monitoring station. The results show that layered settlement is closely related to the water table, water level variation, and amplitude, especially the water table. Finally, according to the threshold values in the land subsidence prevention and control plan of China (45, 30, 25 mm), the minimum allowable layered water levels in this region when settlement reaches the threshold values are calculated to lie between -18.448 and -10.082 m. The results provide a reasonable and operable groundwater level control target for rational adjustment of groundwater exploitation horizons in the future.
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir
2017-12-01
The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with a conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of the Black Sea, Mediterranean Sea, and Red Sea as potential predictors, as well as classified MMR events at two locations in Iran as the predictand. Skill of the model was measured with regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of the surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.
Feldthusen, Caroline; Grimby-Ekman, Anna; Forsblad-d'Elia, Helena; Jacobsson, Lennart; Mannerkorpi, Kaisa
2016-04-28
To investigate the impact of disease-related aspects on long-term variations in fatigue in persons with rheumatoid arthritis. Observational longitudinal study. Sixty-five persons with rheumatoid arthritis, age range 20-65 years, were invited to a clinical examination at 4 time-points during the 4 seasons. Outcome measures were: general fatigue rated on a visual analogue scale (0-100) and aspects of fatigue assessed by the Bristol Rheumatoid Arthritis Fatigue Multidimensional Questionnaire. Disease-related variables were: disease activity (erythrocyte sedimentation rate), pain threshold (pressure algometer), physical capacity (six-minute walk test), pain (visual analogue scale (0-100)), depressive mood (Hospital Anxiety and Depression scale, depression subscale), personal factors (age, sex, body mass index), and season. Multivariable regression analysis with linear mixed-effects models was applied. The strongest explanatory factors for all fatigue outcomes, when recorded at the same time-point as fatigue, were pain threshold and depressive mood. Self-reported pain was an explanatory factor for physical aspects of fatigue, and body mass index contributed to explaining the consequences of fatigue on everyday living. For predicting later fatigue, pain threshold and depressive mood were the strongest predictors. Pain threshold and depressive mood were the most important factors for fatigue in persons with rheumatoid arthritis.
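A sketch of the kind of linear mixed-effects model described (random intercepts for repeated measurements within persons); all data, column names, and coefficients below are synthetic:

```python
# Random-intercept model: repeated fatigue ratings nested within persons.
# Synthetic data; statsmodels' MixedLM fits this directly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_person, n_visit = 65, 4
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_person), n_visit),
    "pain_threshold": rng.normal(250, 60, n_person * n_visit),  # kPa, invented
    "depressive_mood": rng.normal(5, 3, n_person * n_visit),
})
person_effect = rng.normal(0, 8, n_person)[df["person"]]       # random intercepts
df["fatigue"] = (60 - 0.05 * df["pain_threshold"]
                 + 2.0 * df["depressive_mood"] + person_effect
                 + rng.normal(0, 6, len(df)))

fit = smf.mixedlm("fatigue ~ pain_threshold + depressive_mood",
                  data=df, groups=df["person"]).fit()
print(fit.summary())
```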
Changing mothers' perception of infant emotion: a pilot study.
Carnegie, Rebecca; Shepherd, C; Pearson, R M; Button, K S; Munafò, M R; Evans, J; Penton-Voak, I S
2016-02-01
Cognitive bias modification (CBM) techniques, which experimentally retrain abnormal processing of affective stimuli, are becoming established for various psychiatric disorders. Such techniques have not yet been applied to maternal processing of infant emotion, which is affected by various psychiatric disorders. In a pilot study, mothers of children under 3 years old (n = 2) were recruited and randomly allocated to one of three training exercises, aiming either to increase or decrease their threshold of perceiving distress in a morphed continuum of 15 infant facial images. Differences between pre- and post-training threshold were analysed between and within subjects. Compared to baseline thresholds, the threshold for perceiving infant distress decreased in the lowered threshold group (mean difference -1.7 frames, 95 % confidence intervals (CI) -3.1 to -0.3, p = 0.02), increased in the raised threshold group (1.3 frames, 95 % CI 0.6 to 2.1, p < 0.01) and was unchanged in the control group (0.1 frames, 95 % CI -0.8 to 1.1, p = 0.80). Between-group differences were similarly robust in regression models and were not attenuated by potential confounders. The findings suggest that it is possible to change the threshold at which mothers perceive ambiguous infant faces as distressed, either to increase or decrease sensitivity to distress. This small study was intended to provide proof of concept (i.e. that it is possible to alter a mother's perception of infant distress). Questions remain as to whether the effects persist beyond the immediate experimental session, have an impact on maternal behaviour and could be used in clinical samples to improve maternal sensitivity and child outcomes.
Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen
2017-01-01
Many downscaling algorithms have been proposed to address the issue of the coarse resolution of land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and is applicable in all seasons. The best (worst) applicability occurred in regions of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions. PMID:28368301
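A compact sketch of the adaptive-factor step: within each moving window, keep only the scale factors whose correlation with LST clears a cutoff, then fit the local linear model. The arrays, factor names, and cutoff are illustrative, not the paper's values.

```python
# Per-window predictor screening by a correlation threshold, then local OLS.
import numpy as np

def local_fit(lst_win, factors_win, cc_min=0.3):
    """lst_win: (n,) LST values in the window; factors_win: (n, k) factors."""
    cc = np.array([np.corrcoef(f, lst_win)[0, 1] for f in factors_win.T])
    keep = np.abs(cc) >= cc_min                 # adaptive factor selection
    A = np.column_stack([np.ones(len(lst_win)), factors_win[:, keep]])
    coef, *_ = np.linalg.lstsq(A, lst_win, rcond=None)
    return coef, keep                           # intercept falls back if none kept

rng = np.random.default_rng(10)
factors = rng.normal(size=(121, 4))             # e.g., NDVI, NDBI, albedo, MNDWI
lst = 30 + 2.5 * factors[:, 0] - 1.5 * factors[:, 1] + rng.normal(0, 0.5, 121)
coef, keep = local_fit(lst, factors)
print(keep, coef)
```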
Assessment of the forecast skill of spring onset in the NMME experiment
NASA Astrophysics Data System (ADS)
Carrillo, C. M.; Ault, T.
2017-12-01
This study assesses the predictability of spring onset using an index of its interannual variability. We use the North American Multi-Model Ensemble (NMME) experiment to assess this predictability. The input datasets used to compute the spring onset index (SI-x) were treated with a daily joint bias correction (JBC) approach, and the SI-x outputs were post-processed using three ensemble model output statistics (EMOS) approaches: logistic regression, Gaussian ensemble dressing, and non-homogeneous Gaussian regression. These EMOS approaches quantify the effect of training period length and ensemble size on forecast skill. The highest range of predictability for the timing of spring onset is from 10 to 60 days, located along a narrow band between 35° and 45°N in the US. Using rank probability scores based on quantiles (q), a forecast threshold of q = 0.5 provides a range of predictability that falls into two categories, 10-40 and 40-60 days, which seems to represent the effect of the intra-seasonal scale. Using higher thresholds (q = 0.6 and 0.7), predictability shows a lower range, with values around 10-30 days. The post-processing work using JBC improves the predictability skill by 13% over uncorrected results. Using EMOS, a significant positive change in the skill score is noted in regions where the skill with JBC shows evidence of improvement. The consensus of these techniques shows that regions of better predictability can be expanded.
Zhou, Ping; Schechter, Clyde; Cai, Ziyong; Markowitz, Morri
2011-06-01
To highlight complexities in defining vitamin D sufficiency in children. Serum 25-(OH) vitamin D [25(OH)D] levels from 140 healthy obese children aged 6 to 21 years living in the inner city were compared with multiple health outcome measures, including bone biomarkers and cardiovascular risk factors. Several statistical analytic approaches were used, including Pearson correlation, analysis of covariance (ANCOVA), and "hockey stick" regression modeling. Potential threshold levels for vitamin D sufficiency varied by outcome variable and analytic approach. Only systolic blood pressure (SBP) was significantly correlated with 25(OH)D (r = -0.261; P = .038). ANCOVA revealed that SBP and triglyceride levels were statistically significant in the test groups [25(OH)D <10, <15 and <20 ng/mL] compared with the reference group [25(OH)D >25 ng/mL]. ANCOVA also showed that only children with severe vitamin D deficiency [25(OH)D <10 ng/mL] had significantly higher parathyroid hormone levels (Δ = 15; P = .0334). Hockey stick regression analyses found evidence of a threshold in SBP at a 25(OH)D breakpoint of 27 ng/mL, and a breakpoint of 18 ng/mL for triglycerides, but no relationship between 25(OH)D and parathyroid hormone. Defining vitamin D sufficiency should take into account different vitamin D-related health outcome measures and analytic methodologies. Copyright © 2011 Mosby, Inc. All rights reserved.
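A minimal "hockey stick" (segmented, two-slope) fit with the breakpoint chosen by grid search; the data are synthetic, with a true breakpoint planted at 27 ng/mL for illustration:

```python
# Hockey-stick regression: y = a + b*x + c*(x - bp)_+ with the breakpoint bp
# chosen by grid search to minimize the residual sum of squares.
import numpy as np

def hockey_stick(x, y, candidates):
    best = (np.inf, None, None)
    for bp in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coef) ** 2))
        if sse < best[0]:
            best = (sse, bp, coef)
    return best  # (sse, breakpoint, [intercept, slope, slope change])

rng = np.random.default_rng(11)
d25 = rng.uniform(5, 40, 140)                       # 25(OH)D, ng/mL (synthetic)
sbp = 125 - 0.9 * d25 + 0.9 * np.maximum(d25 - 27, 0) + rng.normal(0, 3, 140)
sse, bp, coef = hockey_stick(d25, sbp, np.linspace(10, 35, 101))
print(f"estimated breakpoint ≈ {bp:.1f} ng/mL")
```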
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the consistency assumption required for hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence, based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable correlation coefficient thresholds, the method divides the significance of dependence into five degrees: no, weak, mid, strong, and drastic variability. By deriving the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. Taking first-order and second-order auto-regression models as examples, the reasonability of the derived formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was then used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
Cockburn, Neil; Kovacs, Michael
2016-01-01
CT perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or by DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at a 3-h ischemia time by comparing contemporaneously acquired CTP-derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155-180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min⁻¹·100g⁻¹ for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
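A sketch of reading a CBF cutoff off the ROC optimal operating point (here taken as Youden's J); the pixel values are synthetic, not the study's:

```python
# Pick the CBF threshold maximizing Youden's J = sensitivity + specificity - 1.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
cbf_infarct = rng.normal(10, 4, 2000).clip(0)      # mL·min⁻¹·100g⁻¹, synthetic
cbf_viable = rng.normal(35, 10, 2000).clip(0)
cbf = np.concatenate([cbf_infarct, cbf_viable])
infarcted = np.concatenate([np.ones(2000), np.zeros(2000)])

# Infarcted tissue has LOW flow, so score = -CBF makes higher = more positive.
fpr, tpr, thr = roc_curve(infarcted, -cbf)
best = np.argmax(tpr - fpr)
print(f"optimal CBF threshold ≈ {-thr[best]:.1f} mL/min/100g")
```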
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level), and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, season, treatment type, and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by season, treatment type, precipitation, and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability in the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone, since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Cadence (steps/min) and intensity during ambulation in 6-20 year olds: the CADENCE-kids study.
Tudor-Locke, Catrine; Schuna, John M; Han, Ho; Aguiar, Elroy J; Larrivee, Sandra; Hsia, Daniel S; Ducharme, Scott W; Barreira, Tiago V; Johnson, William D
2018-02-26
Steps/day is widely utilized to estimate the total volume of ambulatory activity, but it does not directly reflect intensity, a central tenet of public health guidelines. Cadence (steps/min) represents an overlooked opportunity to describe the intensity of ambulatory activity. We sought to establish thresholds linking directly observed cadence with objectively measured intensity in 6-20 year olds. One hundred twenty participants completed multiple 5-min bouts on a treadmill, from 13.4 m/min (0.80 km/h) to 134.0 m/min (8.04 km/h). The protocol was terminated when participants naturally transitioned to running, or if they chose to not continue. Steps were visually counted and intensity was objectively measured using a portable metabolic system. Youth metabolic equivalents (METy) were calculated for 6-17 year olds, with moderate intensity defined as ≥4 and < 6 METy, and vigorous intensity as ≥6 METy. Traditional METs were calculated for 18-20 year olds, with moderate intensity defined as ≥3 and < 6 METs, and vigorous intensity defined as ≥6 METs. Optimal cadence thresholds for moderate and vigorous intensity were identified using segmented random coefficients models and receiver operating characteristic (ROC) curves. Participants were on average (± SD) aged 13.1 ± 4.3 years, weighed 55.8 ± 22.3 kg, and had a BMI z-score of 0.58 ± 1.21. Moderate intensity thresholds (from regression and ROC analyses) ranged from 128.4 steps/min among 6-8 year olds to 87.3 steps/min among 18-20 year olds. Comparable values for vigorous intensity ranged from 157.7 steps/min among 6-8 year olds to 119.3 steps/min among 18-20 year olds. Considering both regression and ROC approaches, heuristic cadence thresholds (i.e., evidence-based, practical, rounded) ranged from 125 to 90 steps/min for moderate intensity, and 155 to 125 steps/min for vigorous intensity, with higher cadences for younger age groups. Sensitivities and specificities for these heuristic thresholds ranged from 77.8 to 99.0%, indicating fair to excellent classification accuracy. These heuristic cadence thresholds may be used to prescribe physical activity intensity in public health recommendations. In the research and clinical context, these heuristic cadence thresholds have apparent value for accelerometer-based analytical approaches to determine the intensity of ambulatory activity.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The method is demonstrated on sampled functions widely used in the wavelet regression literature, and its results are contrasted with other published methods.
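One way to realize a cross-validated basis-and-threshold selection, assuming PyWavelets and a simple even/odd two-fold split (this specific CV scheme is our illustration, not necessarily the paper's):

```python
# Grid search over wavelet bases and soft thresholds; score each candidate by
# denoising the even samples and predicting the held-out odd samples.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
y = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda s: np.sin(8 * np.pi * s), 1.0])
y += rng.normal(0, 0.2, t.size)

def denoise(sig, wavelet, thr):
    coeffs = pywt.wavedec(sig, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: sig.size]

def cv_error(wavelet, thr):
    fe = denoise(y[0::2], wavelet, thr)
    pred_odd = 0.5 * (fe[:-1] + fe[1:])      # interpolate to odd positions
    return np.mean((pred_odd - y[1::2][: pred_odd.size]) ** 2)

grid = [(w, thr) for w in ("db2", "db4", "sym8")
        for thr in np.linspace(0.05, 1.0, 20)]
w_best, t_best = min(grid, key=lambda p: cv_error(*p))
smooth = denoise(y, w_best, t_best)          # final fit with selected pair
print(w_best, round(t_best, 2))
```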
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Veazey, Lindsay M; Franklin, Erik C; Kelley, Christopher; Rooney, John; Frazer, L Neil; Toonen, Robert J
2016-01-01
Predictive habitat suitability models are powerful tools for cost-effective, statistically robust assessment of the environmental drivers of species distributions. The aim of this study was to develop predictive habitat suitability models for two genera of scleractinian corals (Leptoseris and Montipora) found within the mesophotic zone across the main Hawaiian Islands. The mesophotic zone (30-180 m) is challenging to reach, and therefore historically understudied, because it falls between the maximum limit of SCUBA divers and the minimum typical working depth of submersible vehicles. Here, we implement a logistic regression with rare events corrections to account for the scarcity of presence observations within the dataset. These corrections reduced the coefficient error and improved overall prediction success (73.6% and 74.3%) for both original regression models. The final models included depth, rugosity, slope, mean current velocity, and wave height as the best environmental covariates for predicting the occurrence of the two genera in the mesophotic zone. Using an objectively selected theta ("presence") threshold, the predicted presence probability values (average of 0.051 for Leptoseris and 0.040 for Montipora) were translated to spatially-explicit habitat suitability maps of the main Hawaiian Islands at 25 m grid cell resolution. Our maps are the first of their kind to use extant presence and absence data to examine the habitat preferences of these two dominant mesophotic coral genera across Hawai'i.
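One standard ingredient of rare-events logistic regression is a prior correction of the intercept for the over-representation (or scarcity) of presences in the sample, after King and Zeng; a minimal sketch, where the true prevalence tau is an assumption the analyst must supply:

```python
# Prior correction of the logistic intercept for rare events / case-control
# sampling (one ingredient of King & Zeng's rare-events logistic regression).
import numpy as np

def corrected_intercept(b0, ybar, tau):
    """b0: fitted intercept; ybar: sample prevalence; tau: true prevalence."""
    return b0 - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# Example: presences are 20% of the sample but ~4% of the seafloor grid
# (both numbers invented for illustration).
print(corrected_intercept(b0=-1.2, ybar=0.20, tau=0.04))
```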
Ivanescu, Andrada E; Martin, Corby K; Heymsfield, Steven B; Marshall, Kaitlyn; Bodrato, Victoria E; Williamson, Donald A; Anton, Stephen D; Sacks, Frank M; Ryan, Donna; Bray, George A
2015-01-01
Background: Currently, early weight-loss predictions of long-term weight-loss success rely on fixed percent-weight-loss thresholds. Objective: The objective was to develop thresholds during the first 3 mo of intervention that include the influence of age, sex, baseline weight, percent weight loss, and deviations from expected weight to predict whether a participant is likely to lose 5% or more body weight by year 1. Design: Data consisting of month 1, 2, 3, and 12 treatment weights were obtained from the 2-y Preventing Obesity Using Novel Dietary Strategies (POUNDS Lost) intervention. Logistic regression models that included covariates of age, height, sex, baseline weight, target energy intake, percent weight loss, and deviation of actual weight from expected were developed for months 1, 2, and 3 that predicted the probability of losing <5% of body weight in 1 y. Receiver operating characteristic (ROC) curves, area under the curve (AUC), and thresholds were calculated for each model. The AUC statistic quantified the ROC curve’s capacity to classify participants likely to lose <5% of their body weight at the end of 1 y. The models yielding the highest AUC were retained as optimal. For comparison with current practice, ROC curves relying solely on percent weight loss were also calculated. Results: Optimal models for months 1, 2, and 3 yielded ROC curves with AUCs of 0.68 (95% CI: 0.63, 0.74), 0.75 (95% CI: 0.71, 0.81), and 0.79 (95% CI: 0.74, 0.84), respectively. Percent weight loss alone was not better at identifying true positives than random chance (AUC ≤0.50). Conclusions: The newly derived models provide a personalized prediction of long-term success from early weight-loss variables. The predictions improve on existing fixed percent-weight-loss thresholds. Future research is needed to explore model application for informing treatment approaches during early intervention. The POUNDS Lost study was registered at clinicaltrials.gov as NCT00072995. PMID:25733628
Betts, M.G.; Hagar, J.C.; Rivers, J.W.; Alexander, J.D.; McGarigal, K.; McComb, B.C.
2010-01-01
Recent declines in broadleaf-dominated, early-seral forest globally as a function of intensive forest management and/or fire suppression have raised concern about the viability of populations dependent on such forest types. However, quantitative information about the strength and direction of species associations with broadleaf cover at landscape scales is rare. Uncovering such habitat relationships is essential for understanding the demography of species and in developing sound conservation strategies. It is particularly important to detect points in habitat reduction where rates of population decline may accelerate or the likelihood of species occurrence drops rapidly (i.e., thresholds). Here, we use a large avian point-count data set (N = 4375) from southwestern and northwestern Oregon along with segmented logistic regression to test for thresholds in forest bird occurrence as a function of broadleaf forest and early-seral broadleaf forest at local (150-m radius) and landscape (500–2000-m radius) scales. All 12 bird species examined showed positive responses to either broadleaf forest in general and/or early-seral broadleaf forest. However, regional variation in species response to these conditions was high. We found considerable evidence for landscape thresholds in bird species occurrence as a function of broadleaf cover; threshold models received substantially greater support than linear models for eight of 12 species. Landscape thresholds in broadleaf forest ranged broadly from 1.35% to 24.55% mean canopy cover. Early-seral broadleaf thresholds tended to be much lower (0.22–1.87%). We found a strong negative relationship between the strength of species association with early-seral broadleaf forest and 42-year bird population trends; species most associated with this forest type have declined at the greatest rates. Taken together, these results provide the first support for the hypothesis that reductions in broadleaf-dominated early-seral forest due to succession and intensive forest management have led to population declines of constituent species in the Pacific northwestern United States. Forest management treatments that maintain or restore even small amounts of broadleaf vegetation could mitigate further declines.
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm² showed the largest mean distance between healthy-liver and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
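A sketch of a segmented IVIM fit of this kind: Dslow is estimated from a log-linear fit of the tail above the threshold b-value, then Dfast is fitted with the tissue parameters fixed. The exact fitting variant in the paper may differ; the signal below is synthetic.

```python
# Segmented IVIM fit on a synthetic biexponential signal (S0 normalized to 1).
import numpy as np
from scipy.optimize import curve_fit

b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800.0])  # s/mm^2
pf_true, d_true, dstar_true = 0.12, 1.1e-3, 40e-3
s = (1 - pf_true) * np.exp(-b * d_true) + pf_true * np.exp(-b * dstar_true)

thr = 60.0
tail = b >= thr
# Step 1: log-linear fit of the tail gives Dslow and the tissue intercept.
slope, intercept = np.polyfit(b[tail], np.log(s[tail]), 1)
d_slow = -slope
pf = 1.0 - np.exp(intercept)                  # perfusion fraction
# Step 2: fit Dfast with Dslow and PF held fixed.
model = lambda bb, dfast: (1 - pf) * np.exp(-bb * d_slow) + pf * np.exp(-bb * dfast)
(d_fast,), _ = curve_fit(model, b, s, p0=[10e-3])
print(pf, d_slow, d_fast)
```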
Supra-threshold epidermis injury from near-infrared laser radiation prior to ablation onset
NASA Astrophysics Data System (ADS)
DeLisi, Michael P.; Peterson, Amanda M.; Lile, Lily A.; Noojin, Gary D.; Shingledecker, Aurora D.; Stolarski, David J.; Zohner, Justin J.; Kumru, Semih S.; Thomas, Robert J.
2017-02-01
With continued advancement of solid-state laser technology, high-energy lasers operating in the near-infrared (NIR) band are being applied in an increasing number of manufacturing techniques and medical treatments. Safety-related investigations of potentially harmful laser interaction with skin are commonplace, consisting of establishing the maximum permissible exposure (MPE) thresholds under various conditions, often utilizing the minimally-visible lesion (MVL) metric as an indication of damage. Likewise, characterization of ablation onset and velocity is of interest for therapeutic and surgical use, and concerns exceptionally high irradiance levels. However, skin injury response between these two exposure ranges is not well understood. This study utilized a 1070-nm Yb-doped, diode-pumped fiber laser to explore the response of excised porcine skin tissue to high-energy exposures within the supra-threshold injury region without inducing ablation. Concurrent high-speed videography was employed to assess the effect on the epidermis, with a dichotomous response determination given for three progressive damage event categories: observable permanent distortion on the surface, formation of an epidermal bubble due to bounded intra-cutaneous water vaporization, and rupture of said bubble during laser exposure. ED50 values were calculated for these categories under various pulse configurations and beam diameters, and logistic regression models predicted injury events with approximately 90% accuracy. The distinction of skin response into categories of increasing degrees of damage expands the current understanding of high-energy laser safety while also underlining the unique biophysical effects during induced water phase change in tissue. These observations could prove useful in augmenting biothermomechanical models of laser exposure in the supra-threshold region.
Linear regression models for solvent accessibility prediction in proteins.
Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław
2005-04-01
The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple and computationally much more efficient linear SVR performs comparably to nonlinear models and thus can be used in order to facilitate further attempts to design more accurate RSA prediction methods, with applications to fold recognition and de novo protein structure prediction methods.
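A minimal sketch of a linear epsilon-insensitive SVR for real-valued RSA prediction with scikit-learn; the feature matrix and targets are placeholders, not a real encoding of protein sequences:

```python
# Linear epsilon-insensitive SVR as a cheap alternative to neural networks.
# The feature matrix stands in for, e.g., a one-hot encoded residue window.
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 21 * 11))     # hypothetical 11-residue window
rsa = rng.uniform(0, 1, 2000)            # placeholder targets in [0, 1]

svr = LinearSVR(epsilon=0.1, C=1.0, loss="epsilon_insensitive", max_iter=10000)
scores = cross_val_score(svr, X, rsa, cv=5, scoring="neg_mean_absolute_error")
print(scores.mean())
```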
Anderson, Kevin L; Thomas, Samantha M; Adam, Mohamed A; Pontius, Lauren N; Stang, Michael T; Scheri, Randall P; Roman, Sanziana A; Sosa, Julie A
2018-01-01
An association has been suggested between increasing surgeon volume and improved patient outcomes, but a threshold has not been defined for what constitutes a "high-volume" adrenal surgeon. Adult patients who underwent adrenalectomy by an identifiable surgeon between 1998-2009 were selected from the Healthcare Cost and Utilization Project National Inpatient Sample. Logistic regression modeling with restricted cubic splines was utilized to estimate the association between annual surgeon volume and complication rates in order to identify a volume threshold. A total of 3,496 surgeons performed adrenalectomies on 6,712 patients; median annual surgeon volume was 1 case. After adjustment, the likelihood of experiencing a complication decreased with increasing annual surgeon volume up to 5.6 cases (95% confidence interval, 3.27-5.96). After adjustment, patients undergoing resection by low-volume surgeons (<6 cases/year) were more likely to experience complications (odds ratio 1.71, 95% confidence interval, 1.27-2.31, P = .005), had longer hospital stays (relative risk 1.46, 95% confidence interval, 1.25-1.70, P = .003), and incurred higher costs (+26.2%, 95% confidence interval, 12.6-39.9, P = .02). This study suggests an annual surgeon volume threshold (≥6 cases/year) that is associated with improved patient outcomes and decreased hospital cost. This volume threshold has implications for quality improvement, surgical referral and reimbursement, and surgical training. Copyright © 2017 Elsevier Inc. All rights reserved.
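A sketch of a logistic fit with a spline term in surgeon volume, from which a flattening point like the one reported can be read off the fitted curve. The data are simulated, and patsy's cr() (natural cubic splines) stands in for restricted cubic splines:

```python
# Logistic regression with a flexible spline term in annual surgeon volume.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n = 3000
vol = rng.integers(1, 30, n)                       # annual surgeon volume
# Simulated risk that declines with volume and flattens around 6 cases/year.
p = 1 / (1 + np.exp(-(-1.0 - 0.12 * np.minimum(vol, 6))))
df = pd.DataFrame({"volume": vol,
                   "complication": (rng.random(n) < p).astype(int)})

fit = smf.logit("complication ~ cr(volume, df=4)", data=df).fit(disp=0)
print(fit.summary())
```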
Hargrave, Catriona; Deegan, Timothy; Poulsen, Michael; Bednarz, Tomasz; Harden, Fiona; Mengersen, Kerrie
2018-05-17
To develop a method for scoring online cone-beam CT (CBCT)-to-planning CT image feature alignment to inform prostate image-guided radiotherapy (IGRT) decision-making. The feasibility of incorporating volume variation metric thresholds predictive of delivering planned dose into weighted functions was investigated. Radiation therapists and radiation oncologists participated in workshops where they reviewed prostate CBCT-IGRT case examples and completed a paper-based survey of image feature matching practices. For 36 prostate cancer patients, one daily CBCT was retrospectively contoured then registered with their plan to simulate delivered dose if (a) no online setup corrections and (b) online image alignment and setup corrections were performed. Survey results were used to select variables for inclusion in classification and regression tree (CART) and boosted regression tree (BRT) modeling of volume variation metric thresholds predictive of delivering planned dose to the prostate, proximal seminal vesicles (PSV), bladder, and rectum. Weighted functions incorporating the CART and BRT results were used to calculate a score of individual tumor and organ-at-risk image feature alignment (FAS_TV_OAR). Scaled and weighted FAS_TV_OAR scores were then used to calculate a score of overall treatment compliance (FAS_global) for a given CBCT-planning CT registration. The FAS_TV_OAR scores were assessed for sensitivity, specificity, and predictive power. FAS_global thresholds indicative of high, medium, or low overall treatment plan compliance were determined using coefficients from multiple linear regression analysis. Thirty-two participants completed the prostate CBCT-IGRT survey. While responses demonstrated consensus of practice for preferential ranking of planning CT and CBCT match features in the presence of deformation and rotation, variation existed in the specified thresholds for observed volume differences requiring patient repositioning or repeat bladder and bowel preparation. The CART and BRT modeling indicated that for a given registration, a Dice similarity coefficient >0.80 and >0.60 for the prostate and PSV, respectively, and a maximum Hausdorff distance <8.0 mm for both structures were predictive of delivered dose within ±5% of planned dose. A normalized volume difference <1.0 and a CBCT anterior rectum wall >1.0 mm anterior to the planning CT anterior rectum wall were predictive of delivered dose >5% of planned rectum dose. A normalized volume difference <0.88, and a CBCT bladder wall >13.5 mm inferior and >5.0 mm posterior to the planning CT bladder, were predictive of delivered dose >5% of planned bladder dose. A FAS_TV_OAR >0 is indicative of delivery of planned dose. For the FAS_TV_OAR scores calculated for the prostate, PSV, bladder, and rectum using test data, sensitivity was 0.56, 0.75, 0.89, and 1.00, respectively; specificity 0.90, 0.94, 0.59, and 1.00, respectively; positive predictive power 0.90, 0.86, 0.53, and 1.00, respectively; and negative predictive power 0.56, 0.89, 0.91, and 1.00, respectively. Thresholds for the calculated FAS_global were low (<60), medium (60-80), and high (>80), with a 27% misclassification rate for the test data. A FAS_global incorporating nested FAS_TV_OAR scores and volume variation metric thresholds predictive of treatment plan compliance was developed, offering an alternative to pretreatment dose calculations for assessing treatment delivery accuracy. © 2018 American Association of Physicists in Medicine.
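The two agreement metrics in which the thresholds above are expressed can be computed directly; a minimal sketch (the masks and contours below are toy examples):

```python
# Dice similarity on binary masks and the maximum (symmetric) Hausdorff
# distance on contour coordinates.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(points_a, points_b):
    """points_*: (n, 2|3) arrays of contour coordinates, e.g., in mm."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

a = np.zeros((50, 50), bool); a[10:30, 10:30] = True
b = np.zeros((50, 50), bool); b[12:32, 12:32] = True
pts = lambda m: np.argwhere(m).astype(float)
print(dice(a, b), hausdorff(pts(a), pts(b)))
```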
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric temperature GISS and HadCRUT3 time series were analysed for structural changes. We postulate continuity of the temperature as a function of time. The slopes are calculated for a sequence of segments delimited by time thresholds. We used a standard method, restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds. The thresholds are searched for continuously within specified time intervals. The F-statistic is used to determine the time points of the structural changes.
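A minimal sketch of the one-threshold case under the stated continuity constraint: compare a single line against a continuous two-segment fit while scanning the break time, with a Chow-type F-statistic (the series below is synthetic):

```python
# F-test for one structural break with continuity: single line (null) versus
# y = a + b*t + c*(t - t0)_+ (alternative), scanning the break time t0.
import numpy as np

def f_stat_for_break(t, y, t0):
    X0 = np.column_stack([np.ones_like(t), t])
    X1 = np.column_stack([X0, np.maximum(t - t0, 0.0)])
    sse = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    sse0, sse1 = sse(X0), sse(X1)
    dfree = len(t) - X1.shape[1]
    return (sse0 - sse1) / (sse1 / dfree)   # ~ F(1, dfree) under the null

years = np.arange(1950.0, 2012.0)
temp = 0.01 * (years - 1950) + np.random.default_rng(7).normal(0, 0.1, years.size)
print(max((f_stat_for_break(years, temp, t0), t0) for t0 in range(1960, 2005)))
```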
A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.
Eikenberry, Steffen E; Marmarelis, Vasilis Z
2013-02-01
We propose a new variant of Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
Probabilistic forecasting for extreme NO2 pollution episodes.
Aznarte, José L
2017-10-01
In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. In contrast to the usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows prediction of the full probability distribution, which in turn allows models to be built that are specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable, and sharp. Besides, we study the relative importance of the independent variables involved, and show how the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible manner of presenting probabilistic forecasts that maximizes their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
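A sketch of the quantile-regression setup with statsmodels, on synthetic data with a single meteorological predictor; bracketing exceedance probabilities falls out of the predicted quantiles:

```python
# Fit several conditional quantiles of NO2 and read exceedance brackets off them.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 2000
wind = rng.gamma(2.0, 1.5, n)
no2 = 120 - 15 * wind + rng.gamma(2.0, 10 / (1 + wind))   # skewed residuals
df = pd.DataFrame({"wind": wind, "no2": no2})

fits = {q: smf.quantreg("no2 ~ wind", df).fit(q=q) for q in (0.5, 0.9, 0.99)}
new = pd.DataFrame({"wind": [0.5]})                       # a calm episode
for q, f in fits.items():
    print(q, float(np.asarray(f.predict(new))[0]))
# If an alert threshold sits between the predicted 0.9 and 0.99 quantiles,
# the exceedance probability is bracketed between 1% and 10%.
```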
Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee
2008-07-01
In a false killer whale (Pseudorca crassidens), echo perception thresholds were measured using a go/no-go psychophysical paradigm and a one-up-one-down staircase procedure. Computer-controlled echoes were electronically synthesized pulses that were played back through a transducer, triggered by the whale's emitted biosonar pulses. The echo amplitudes were proportional to the biosonar pulse amplitudes; echo levels were specified in terms of the attenuation of the echo sound pressure level near the animal's head relative to the source level of the biosonar pulses. With increasing echo delay, the thresholds (echo attenuation factor) decreased from -49.3 dB at 2 ms to -79.5 dB at 16 ms, with a regression slope of -9.5 dB per delay doubling (-31.5 dB per delay decade). At longer delays, the threshold remained nearly constant around -80.4 dB. Levels of emitted pulses increased slightly with delay prolongation (threshold decrease), with a regression slope of 3.2 dB per delay doubling (10.7 dB per delay decade). The echo threshold dependence on delay is interpreted as a release from forward masking by the preceding emitted pulse. This release may compensate for the echo level decrease with distance, thus keeping the echo sensation level near constant for the animal within a certain distance range.
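A small simulation of the one-up-one-down staircase, which converges on the 50% point of the psychometric function; the underlying psychometric function, step size, and stopping rule below are invented:

```python
# One-up-one-down staircase: attenuation steps down (harder) after a detection
# and up (easier) after a miss; the mean of the reversal levels estimates the
# 50% threshold.
import numpy as np

rng = np.random.default_rng(8)
true_thr, step = -79.5, 2.0        # dB attenuation; more negative = fainter
level, prev_dir, reversals = -60.0, None, []
while len(reversals) < 12:
    # Assumed logistic psychometric function around the true threshold.
    p_detect = 1.0 / (1.0 + np.exp(-(level - true_thr) / 2.0))
    direction = -step if rng.random() < p_detect else +step
    if prev_dir is not None and direction != prev_dir:
        reversals.append(level)    # record the level at each reversal
    prev_dir = direction
    level += direction
print(f"estimated threshold ≈ {np.mean(reversals[2:]):.1f} dB")
```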
Silva, Roberto Ribeiro da; Reis, Michel Silva; Pereira, Basílio de Bragança; Nascimento, Emilia Matos do; Pedrosa, Roberto Coury
2017-12-01
Anaerobic threshold (AT) is recognized as an objective and direct measurement that reflects variations in the metabolism of skeletal muscles during exercise. Its prognostic value in heart diseases of non-Chagasic etiology is well established. The assessment of risk of death in Chagas heart disease is relatively well established by the Rassi score, but the added value that AT can bring to the Rassi score has not yet been studied. To assess whether AT has an additional effect beyond the Rassi score in patients with chronic Chagas' heart disease, we conducted prospective research on a dynamic cohort by reviewing 150 patient medical records. Forty-five records of patients who underwent cardiopulmonary exercise testing between 1996 and 1997, followed until September 2015, were selected for the cohort. Associations between the studied variables were analyzed using a logistic regression model, and the suitability of the models was verified using ROC curves and the coefficient of determination R². Eight patients (17.78%) had died by September 2015, 7 of them (87.5%) from cardiovascular causes, of whom 4 (57.14%) were considered at high risk by the Rassi score. With the Rassi score as the independent variable and death as the outcome, we obtained an area under the curve (AUC) = 0.711, with R² = 0.214. Using AT as the independent variable, we found AUC = 0.706, with R² = 0.078. When we defined both the Rassi score and AT as independent variables, we obtained AUC = 0.800 and R² = 0.263. Thus, including AT in the logistic regression increases the explained variation (R²) for estimating death by 5 percentage points. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
Ademi, Zanfina; Pfeil, Alena M; Hancock, Elizabeth; Trueman, David; Haroun, Rola Haroun; Deschaseaux, Celine; Schwenkglenks, Matthias
2017-11-29
We aimed to assess the cost effectiveness of sacubitril/valsartan compared with angiotensin-converting enzyme inhibitors (ACEIs) for the treatment of individuals with chronic heart failure and reduced ejection fraction (HFrEF) from the perspective of the Swiss health care system. The cost-effectiveness analysis was implemented as a lifelong regression-based cohort model. We compared sacubitril/valsartan with enalapril in chronic heart failure patients with HFrEF and New York Heart Association Functional Classification II-IV symptoms. Regression models based on the randomised phase III PARADIGM-HF clinical trial were used to predict events (all-cause mortality, hospitalisations, adverse events and quality of life) for each treatment strategy modelled over the lifetime horizon, with adjustments for patient characteristics. Unit costs were obtained from Swiss public sources for the year 2014, and costs and effects were discounted by 3%. The main outcome of interest was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life year (QALY) gained. Deterministic sensitivity analyses (DSA), scenario analyses, and probabilistic sensitivity analyses (PSA) were performed. In the base-case analysis, the sacubitril/valsartan strategy showed a decrease in the number of hospitalisations (6.0% per year absolute reduction) and in lifetime hospital costs by 8.0% (discounted) when compared with enalapril. Sacubitril/valsartan was predicted to improve overall and quality-adjusted survival by 0.50 years and 0.42 QALYs, respectively. Additional net total costs were CHF 10 926. This led to an ICER of CHF 25 684. In PSA, the probability of sacubitril/valsartan being cost-effective at a threshold of CHF 50 000 was 99.0%. The treatment of HFrEF patients with sacubitril/valsartan versus enalapril is cost effective if a willingness-to-pay threshold of CHF 50 000 per QALY gained is assumed.
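For reference, the ICER reported above follows the standard definition; with the rounded incremental values quoted, the arithmetic is approximately

```latex
\mathrm{ICER} \;=\; \frac{\Delta \text{Cost}}{\Delta \text{QALY}} \;\approx\; \frac{\text{CHF } 10\,926}{0.42\ \text{QALYs}} \;\approx\; \text{CHF } 26\,000 \text{ per QALY gained,}
```

with the small gap to the reported CHF 25 684 presumably reflecting rounding of the incremental QALYs.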
Establishing endangered species recovery criteria using predictive simulation modeling
McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol
2014-01-01
Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires the U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
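A sketch of this recovery-criteria logic, on simulated replicates with invented coefficients (not the study's fitted values), might look like:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
log_n0 = np.log(rng.uniform(100, 5000, n))        # log initial abundance
lam = rng.normal(1.0, 0.05, n)                    # mean population growth rate
eta = 6 - 1.2 * log_n0 - 20 * (lam - 1.0)         # invented logit of extinction risk
extinct = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

X = sm.add_constant(np.column_stack([log_n0, lam]))
fit = sm.GLM(extinct, X, family=sm.families.Binomial()).fit()

# Invert the fitted model: abundance needed so that P(extinction) <= 0.05
# at an expected growth rate of 1.0.
b0, b1, b2 = fit.params
target_logit = np.log(0.05 / 0.95)
n_required = np.exp((target_logit - b0 - b2 * 1.0) / b1)
print(f"required initial abundance ~ {n_required:.0f} females")
```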
Bai, Xiaohui; Zhi, Xinghua; Zhu, Huifeng; Meng, Mingqun; Zhang, Mingde
2015-01-01
This study investigates the effect of chloramine residual on bacterial growth and regrowth and the relationship between heterotrophic plate counts (HPCs) and the concentration of chloramine residual in the Shanghai drinking water distribution system (DWDS). In this study, models to control HPCs in the water distribution system and at consumer taps are also developed. Real-time ArcGIS was applied to display the modeled distribution of, and changes in, the chloramine residual concentration in the pipe system. Residual regression analysis was used to derive a reasonable range of threshold values at which the chloramine residual efficiently inhibits bacterial growth in the Shanghai DWDS: between 0.45 and 0.5 mg/L in pipe water and between 0.2 and 0.25 mg/L in tap water. The low residual chloramine value (0.05 mg/L) in the Chinese drinking water quality standard may pose a potential microbial health risk and should be revised upward. Disinfection by-products (DBPs) were detected, but no health risk was identified.
Fridman, M; Hodgkins, P S; Kahle, J S; Erder, M H
2015-06-01
There are few approved therapies for adults with attention-deficit/hyperactivity disorder (ADHD) in Europe. Lisdexamfetamine (LDX) is an effective treatment for ADHD; however, no clinical trials examining the efficacy of LDX specifically in European adults have been conducted. Therefore, to estimate the efficacy of LDX in European adults, we performed a meta-regression of existing clinical data. A systematic review identified US- and Europe-based randomized efficacy trials of LDX, atomoxetine (ATX), or osmotic-release oral system methylphenidate (OROS-MPH) in children/adolescents and adults. A meta-regression model was then fitted to the published/calculated effect sizes (Cohen's d) using medication, geographical location, and age group as predictors. The LDX effect size in European adults was extrapolated from the fitted model. Sensitivity analyses included restricting the model to adult-only studies and adding studies with placebo designs other than a standard pill-placebo design. Twenty-two of 2832 identified articles met the inclusion criteria. The model-estimated effect size of LDX for European adults was 1.070 (95% confidence interval: 0.738, 1.401), larger than the 0.8 threshold for large effect sizes. The overall model fit was adequate (80%) and stable in the sensitivity analyses. This model predicts that LDX may have a large treatment effect size in European adults with ADHD. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh Pradhan, Ananta Man; Kang, Hyo-Sub; Kim, Yun-Tae
2016-04-01
This study uses a physically based approach to evaluate the factor of safety of hillslopes under different hydrological conditions in Mt. Umyeon, south of Seoul. The hydrological conditions were determined from the rainfall intensity and duration associated with a nationwide Korean landslide inventory. A quantile regression statistical method was used to ascertain different probability warning levels on the basis of rainfall thresholds. Physically based models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical probabilistic methods can include other causative factors that influence slope stability, such as forest, soil, and geology, but rely on good landslide inventories of the site. This study therefore describes a hybrid approach that combines physically based landslide susceptibility estimates for different hydrological conditions with a presence-only maximum entropy model, used to analyze the relationship of landslides with the conditioning factors. About 80% of the landslides were listed among the unstable sites identified by the proposed model, demonstrating its effectiveness and accuracy in determining unstable areas and areas that require evacuation. These cumulative rainfall thresholds provide a valuable reference to guide disaster prevention authorities in the issuance of warning levels, with the potential to reduce losses and save lives.
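As a hedged sketch of how rainfall thresholds of the common power-law form I = a·D^b can be obtained by quantile regression in log-log space (the data and quantile level below are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
D = rng.uniform(1, 72, 200)                            # rainfall duration (h)
I = 20 * D ** -0.6 * np.exp(rng.normal(0, 0.3, 200))   # triggering intensity (mm/h)

# fit a low quantile of log(I) on log(D): the lower envelope of triggering rain
X = sm.add_constant(np.log(D))
fit = sm.QuantReg(np.log(I), X).fit(q=0.05)
log_a, b = fit.params
print(f"threshold: I = {np.exp(log_a):.1f} * D^{b:.2f}")
```

Higher quantiles of the same fit give progressively more conservative warning levels.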
Using Baidu Search Index to Predict Dengue Outbreak in China
NASA Astrophysics Data System (ADS)
Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai
2016-12-01
This study identified possible thresholds for predicting dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicates that low-cost, internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
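A minimal sketch of the regression-tree step, with invented BSI values and outbreak sizes (the real thresholds are those quoted above), might be:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
weeks = 300
bsi = rng.gamma(2.0, 60, weeks)                   # weekly BSI for "dengue fever"
# lagged moving average of the preceding 3 weeks, aligned with current incidence
bsi_ma = np.convolve(bsi, np.ones(3) / 3, mode="valid")[:-1]
incidence = np.where(bsi_ma > 380, 30.0, 1.0) * rng.lognormal(0, 0.3, bsi_ma.size)

tree = DecisionTreeRegressor(max_depth=2).fit(bsi_ma[:, None], incidence)
# the first split learned by the tree approximates the outbreak threshold
print(tree.tree_.threshold[0])
```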
Ferrari, Giulia; Agnew-Davies, Roxane; Bailey, Jayne; Howard, Louise; Howarth, Emma; Peters, Tim J; Sardinha, Lynnmarie; Feder, Gene
2014-01-01
Domestic violence and abuse (DVA) are associated with an increased risk of mental illness, but we know little about the mental health of female DVA survivors seeking support from domestic violence services. Baseline data on 260 women enrolled in a randomized controlled trial of a psychological intervention for DVA survivors were analyzed. We report the prevalence of and associations between mental health status and severity of abuse at the time of recruitment. We used logistic and normal regression models for binary and continuous outcomes, respectively. Mental health measures used were: Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM), Patient Health Questionnaire, Generalized Anxiety Disorder Assessment, and the Posttraumatic Diagnostic Scale (PDS) to measure posttraumatic stress disorder. The Composite Abuse Scale (CAS) measured abuse. Exposure to DVA was high, with a mean CAS score of 56 (SD 34). The mean CORE-OM score was 18 (SD 8), with 76% above the clinical threshold (95% confidence interval: 70-81%). Depression and anxiety levels were high, with means close to clinical thresholds, and all respondents recorded PTSD scores above the clinical threshold. Symptoms of mental illness increased stepwise with increasing severity of DVA.
Seizure threshold increases can be predicted by EEG quality in right unilateral ultrabrief ECT.
Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Waite, Susan; Loo, Colleen K
2017-12-01
Increases in seizure threshold (ST) over a course of brief-pulse ECT can be predicted by decreases in EEG quality, informing ECT dose adjustment to maintain adequate supra-threshold dosing. ST increases also occur over a course of right unilateral ultrabrief (RUL UB) ECT, but no data exist on the relationship between ST increases and EEG indices. This study (n = 35) investigated whether increases in ST over RUL UB ECT treatments could be predicted by a decline in seizure quality. ST titration was performed at ECT sessions one and seven, with treatment dosing held stable (at 6-8 times ST) in the intervening sessions. Seizure quality indices (slow-wave onset, mid-ictal amplitude, regularity, stereotypy, and post-ictal suppression) were manually rated at the first supra-threshold treatment and at the last supra-threshold treatment before re-titration, using a structured rating scale, by a single trained rater blinded to the ECT session being rated. Twenty-one subjects (60%) had an ST increase. The association between ST changes and EEG quality indices was analysed by logistic regression, yielding a significant model (p < 0.001). Initial ST (p < 0.05) and percentage change in mid-ictal amplitude (p < 0.05) were significant predictors of change in ST. Percentage change in post-ictal suppression reached trend-level significance (p = 0.065). Increases in ST over a RUL UB ECT course may be predicted by decreases in seizure quality, specifically a decline in mid-ictal amplitude and potentially in post-ictal suppression. Such EEG indices may be able to inform when dose adjustments are necessary to maintain adequate supra-threshold dosing in RUL UB ECT.
Low Oxygen Delivery as a Predictor of Acute Kidney Injury during Cardiopulmonary Bypass.
Newland, Richard F; Baker, Robert A
2017-12-01
Low indexed oxygen delivery (DO2i) during cardiopulmonary bypass (CPB) has been associated with an increased likelihood of acute kidney injury (AKI), with critical thresholds for oxygen delivery reported to be 260-270 mL/min/m². This study explores whether a relationship exists in which the integral of the amount and duration of oxygen delivery below a critical threshold during CPB is associated with the incidence of postoperative AKI. The area under the curve (AUC) of DO2i during CPB above or below 270 mL/min/m² was calculated as a metric of oxygen delivery in 210 patients undergoing CPB. To determine the influence of low oxygen delivery on AKI, a multivariate logistic regression model was developed including AUC < 0, EuroSCORE II to provide preoperative risk-factor adjustment, and the incidence of red blood cell transfusion to adjust for the influence of transfusion. An AUC < 0 for an oxygen delivery threshold of 270 mL/min/m² during CPB was an independent predictor of AKI after adjustment for EuroSCORE II and transfusion (OR 2.74, 95% CI 1.01-7.41, p = .047). These results support the existence of a relationship in which the integral of the amount and duration of oxygen delivery below a critical threshold during CPB is associated with the incidence of postoperative AKI.
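A sketch of the AUC metric described above, with an invented DO2i time course, could be computed as follows:

```python
import numpy as np

t = np.arange(0.0, 90.0, 1.0)                       # minutes on bypass (invented)
do2i = 300 - 60 * np.exp(-((t - 45.0) ** 2) / 200)  # DO2i dips below 270 mid-case

deficit = np.minimum(do2i - 270.0, 0.0)             # keep only the below-threshold part
# trapezoidal integration: negative values capture both depth and duration
auc_below = ((deficit[1:] + deficit[:-1]) / 2 * np.diff(t)).sum()
print(f"AUC below threshold: {auc_below:.0f} (mL/min/m2 x min)")
```

A patient with AUC < 0 has spent time below the critical threshold; the magnitude integrates how far below and for how long.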
An increase in visceral fat is associated with a decrease in the taste and olfactory capacity
Fernandez-Garcia, Jose Carlos; Alcaide, Juan; Santiago-Fernandez, Concepcion; Roca-Rodriguez, MM.; Aguera, Zaida; Baños, Rosa; Botella, Cristina; de la Torre, Rafael; Fernandez-Real, Jose M.; Fruhbeck, Gema; Gomez-Ambrosi, Javier; Jimenez-Murcia, Susana; Menchon, Jose M.; Casanueva, Felipe F.; Fernandez-Aranda, Fernando; Tinahones, Francisco J.; Garrido-Sanchez, Lourdes
2017-01-01
Introduction Sensory factors may play an important role in determining appetite and food choices. Also, some adipokines may alter or predict the perception and pleasantness of specific odors. We aimed to analyze differences in smell-taste capacity between females of different weights and to relate them to fat mass, fat-free mass, visceral fat, and several adipokines. Materials and methods 179 females of different weights (from low weight to morbid obesity) were studied. We analyzed the relationship of smell and taste capacity with fat mass, fat-free mass, visceral fat (indirectly estimated by bioelectrical impedance analysis with visceral fat rating (VFR)), leptin, adiponectin, and visfatin. The smell and taste assessments were performed with the "Sniffin' Sticks" and "Taste Strips", respectively. Results We found a lower smell score (TDI-score (Threshold, Discrimination and Identification)) in obese subjects. All the olfactory functions measured (threshold, discrimination, identification, and the TDI-score) correlated negatively with age, body mass index (BMI), leptin, fat mass, fat-free mass, and VFR. In a multiple linear regression model, VFR mainly predicted the TDI-score. With regard to the taste function measurements, the normal-weight subjects showed higher taste function scores, whereas a tendency to decrease was observed in the groups with greater or lesser BMI. In a multiple linear regression model, VFR and age mainly predicted the total taste scores. Discussion We show for the first time that a reverse relationship exists between visceral fat and sensory signals such as smell and taste across a population with different body weight conditions. PMID:28158237
ERIC Educational Resources Information Center
Shobo, Yetty; Wong, Jen D.; Bell, Angie
2014-01-01
Regression discontinuity (RD), an "as good as randomized" research design, has become increasingly prominent in education research in recent years; the design brings eligible quasi-experimental designs as close as possible to experimental designs by using a stated threshold on a continuous baseline variable to assign individuals to a…
Regression Discontinuity Designs: A Guide to Practice. NBER Working Paper No. 13039
ERIC Educational Resources Information Center
Imbens, Guido; Lemieux, Thomas
2007-01-01
In Regression Discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlewaite and Campbell (1960). With the exception of…
ERIC Educational Resources Information Center
Lauen, Douglas Lee
2011-01-01
This study examines the incentive effects of North Carolina's practice of awarding performance bonuses on test score achievement on the state tests. Bonuses were awarded based solely on whether a school exceeds a threshold on a continuous performance metric. The study uses a sharp regression discontinuity design, an approach with strong internal…
Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping
2013-01-01
Pathway alterations, reflected as changes in gene expression regulation and gene interaction, can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control and 10, 100, and 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of the reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying high sensitivity of our DNs approach. Findings from this proof-of-concept study suggest that our approach has great potential to provide a novel and sensitive tool for threshold setting in chemical risk assessment. In future work, we plan to analyze more time-series datasets with a full spectrum of concentrations and sufficient replications per treatment. The pathway alteration-derived thresholds will also be compared with those derived from apical endpoints such as cell growth rate.
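As a greatly simplified stand-in for the differential-edge step (the paper uses a state space model; plain correlation networks are substituted here, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
genes, timepoints = 20, 18
control = rng.normal(size=(timepoints, genes))
treated = control + rng.normal(scale=0.5, size=control.shape)
treated[:, 1] = treated[:, 0] * 0.9        # induce a new interaction under exposure

net_c = np.corrcoef(control, rowvar=False)          # gene-gene network, control
net_t = np.corrcoef(treated, rowvar=False)          # gene-gene network, treated
diff_edges = np.argwhere(np.abs(net_t - net_c) > 0.6)
diff_edges = diff_edges[diff_edges[:, 0] < diff_edges[:, 1]]  # keep upper triangle
print(diff_edges)   # gene pairs whose interaction strength changed
```

Mapping the flagged gene pairs onto pathway membership then gives the pathway-level perturbation signal used for threshold setting.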
Uddin, Zakir; MacDermid, Joy C.; Moro, Jaydeep; Galea, Victoria; Gross, Anita R.
2016-01-01
Objective: To estimate the extent to which psychophysical quantitative sensory tests (QST) and patient factors (gender, age, and comorbidity) predict pain, function, and health status in people with shoulder disorders, and to determine whether there are gender differences in the QST measures of current perception threshold (CPT), vibration threshold (VT), and pressure pain (PP) threshold and tolerance. Design: A cross-sectional study design. Setting: MacHAND Clinical Research Lab at McMaster University. Subjects: 34 surgical and 10 nonsurgical participants with shoulder pain were recruited. Method: Participants completed the following patient-reported outcomes: pain (Numeric Pain Rating, Pain Catastrophizing Scale, Shoulder Pain and Disability Index) and health status (Short Form-12). Participants completed QST at 4 standardized locations and then an upper extremity performance-based endurance test (FIT-HaNSA). Pearson r values were computed to determine the relationships of the QST variables and patient factors with pain, function, and health status. Eight regression models were built to analyze QSTs and patient factors separately as predictors of pain, function, or health status. An independent-samples t-test was performed to evaluate the gender effect on QST. Results: Greater PP threshold and PP tolerance were significantly correlated with higher shoulder functional performance on the FIT-HaNSA (r = 0.31-0.44) and lower self-reported shoulder disability (r = -0.32 to -0.36). Higher comorbidity was consistently correlated (r = 0.31-0.46) with more pain and poorer function and health status. Older age was correlated with more pain intensity and less function (r = 0.31-0.57). In multivariate models, patient factors contributed significantly to the pain, function, and health status models (r² = 0.19-0.36), whereas QST did not. QST differed significantly between males and females [PP threshold (3.9 vs. 6.2, p < .001), PP tolerance (7.6 vs. 2.6, p < .001), and CPT (1.6 vs. 2.3, p = .02)]. Conclusion: Psychophysical dimensions and patient factors (gender, age, and comorbidity) affect self-reported and performance-based outcome measures in people with shoulder disorders. PMID:29399220
Liang, C Jason; Budoff, Matthew J; Kaufman, Joel D; Kronmal, Richard A; Brown, Elizabeth R
2012-07-02
The extent of atherosclerosis, measured by the amount of coronary artery calcium (CAC) on computed tomography (CT), has traditionally been assessed using thresholded scoring methods such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information might exist below the threshold, which would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated, threshold-free scoring method, the spatially weighted calcium score (SWCS), for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Chest CT scans were obtained from 6814 MESA participants. The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.
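A hedged sketch of how a "per doubling" hazard ratio is obtained, assuming the lifelines package is available and using entirely synthetic data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 500
swcs = rng.lognormal(4, 1, n)                       # hypothetical SWCS values
hazard = 0.02 * (swcs / swcs.mean()) ** 0.3         # invented risk relationship
time = rng.exponential(1 / hazard)
event = time < 10                                   # administrative censoring at 10 y
df = pd.DataFrame({"log2_swcs": np.log2(swcs),      # log2 covariate: HR per doubling
                   "T": np.minimum(time, 10),
                   "E": event.astype(int)})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(np.exp(cph.params_["log2_swcs"]))             # hazard ratio per doubling of SWCS
```

Entering the score on a log2 scale is what makes exp(coefficient) read directly as a hazard ratio per doubling, matching how the associations above are reported.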
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
Mathers, Jonathan; Sitch, Alice; Parry, Jayne
2016-10-01
Medical schools are increasingly using novel tools to select applicants. The UK Clinical Aptitude Test (UKCAT) is one such tool; it measures mental abilities, attitudes, and professional behaviour conducive to being a doctor using constructs likely to be less affected by socio-demographic factors than traditional measures of potential. Universities are free to use the UKCAT as they see fit, but three broad modalities have been observed: 'borderline', 'factor' and 'threshold'. This paper provides the first longitudinal analyses assessing the impact of the different uses of the UKCAT on making offers to applicants with different socio-demographic characteristics. Multilevel regression was used to model the outcome of applications to UK medical schools during the period 2004-2011 (data obtained from UCAS), adjusted for sex, ethnicity, schooling, parental occupation, educational attainment, year of application, and UKCAT use (borderline, factor, and threshold). The three ways of using the UKCAT did not differ in their impact on making the selection process more equitable, other than a marked reversal of the female advantage when the test was applied in a 'threshold' manner. Our attempt to model the longitudinal impact of the use of the UKCAT in its threshold format again found the reversal of the female advantage but did not demonstrate similar statistically significant reductions in the advantages associated with White ethnicity, higher social class, and selective schooling. Our findings demonstrate attenuation of the advantage of being female but no changes in admission rates based on White ethnicity, higher social class, and selective schooling. In view of this, the utility of the UKCAT as a means to widen access to medical schools among non-White and less advantaged applicants remains unproven. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Zhang, Zhen; Xie, Xu; Chen, Xiliang; Li, Yuan; Lu, Yan; Mei, Shujiang; Liao, Yuxue; Lin, Hualiang
2016-01-01
Various meteorological factors have been associated with hand, foot and mouth disease (HFMD) among children; however, few studies have examined the non-linearity of, and interactions among, the meteorological factors. A generalized additive model with a log link, allowing Poisson auto-regression and over-dispersion, was applied to investigate the short-term effects of daily meteorological factors on childhood HFMD, with adjustment for potential confounding factors. We found positive effects of mean temperature and wind speed: the excess relative risk (ERR) was 2.75% (95% CI: 1.98%, 3.53%) for a one-degree increase in daily mean temperature on lag day 6, and 3.93% (95% CI: 2.16%, 5.73%) for a 1 m/s increase in wind speed on lag day 3. We found a non-linear effect of relative humidity, with a low threshold at 45% and a high threshold at 85%, between which the effect was positive; the ERR was 1.06% (95% CI: 0.85%, 1.27%) for a 1% increase in relative humidity on lag day 5. No significant effect was observed for rainfall or sunshine duration. Regarding interactive effects, we found a weak additive interaction between mean temperature and relative humidity, and slightly antagonistic interactions between mean temperature and wind speed and between relative humidity and wind speed in the additive models, but the interactions were not statistically significant. This study suggests that mean temperature, relative humidity, and wind speed might be risk factors for childhood HFMD in Shenzhen, and the interaction analysis indicates that these meteorological factors might have played their roles individually. Copyright © 2015 Elsevier B.V. All rights reserved.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
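A minimal sketch of the censoring analysis for a single progressing test location (invented values; 20 dB floor as in the study):

```python
import numpy as np

rng = np.random.default_rng(7)
visits = np.arange(8) * 0.5                      # 8 six-monthly tests (years)
true = 24 - 1.5 * visits                         # a progressing location (dB)
observed = true + rng.normal(0, 2, 8)            # test-retest variability
censored = np.maximum(observed, 20.0)            # set estimates < 20 dB to 20 dB

slope_raw = np.polyfit(visits, observed, 1)[0]   # pointwise linear regression slope
slope_cen = np.polyfit(visits, censored, 1)[0]
print(f"PLR slope: raw {slope_raw:.2f} dB/y, censored {slope_cen:.2f} dB/y")
```

Comparing the two slopes across many locations and patients is essentially how the impact of censoring on progression detection is quantified.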
Brügemann, K; Gernand, E; von Borstel, U U; König, S
2011-08-01
Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
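As an illustrative sketch of the covariate construction (the THI formula shown is one common NRC-type formulation and may differ from the paper's exact formula; the scaling bounds are invented):

```python
import numpy as np
from numpy.polynomial import legendre

def thi(temp_c, rh_pct):
    # one widely used temperature-humidity index; an assumption here
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_pct) * (1.8 * temp_c - 26)

def legendre_covariates(x, x_min, x_max, order=3):
    # scale x to [-1, 1], then evaluate Legendre polynomials 0..order
    z = 2 * (x - x_min) / (x_max - x_min) - 1
    return np.column_stack([legendre.legval(z, np.eye(order + 1)[k])
                            for k in range(order + 1)])

thi_vals = thi(np.array([15.0, 25.0, 32.0]), np.array([60.0, 70.0, 80.0]))
Z = legendre_covariates(thi_vals, 35, 90)   # regressors for the random regression
print(Z.shape)                              # (3 records, 4 Legendre covariates)
```

The same construction applied to days in milk gives the second covariate set of the random regression model.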
Häberle, Lothar; Hack, Carolin C; Heusinger, Katharina; Wagner, Florian; Jud, Sebastian M; Uder, Michael; Beckmann, Matthias W; Schulz-Wendtland, Rüdiger; Wittenberg, Thomas; Fasching, Peter A
2017-08-30
Tumors in radiologically dense breasts are overlooked on mammograms more often than tumors in low-density breasts. A fast, reproducible, and automated method of assessing percentage mammographic density (PMD) would be desirable to support decisions on whether ultrasonography should be provided in addition to mammography for women in diagnostic mammography units. PMD assessment has still not been included in clinical routine work, as there are issues of interobserver variability and the procedure is quite time-consuming. This study investigated whether fully automatically generated texture features of mammograms can replace time-consuming semi-automatic PMD assessment in predicting a patient's risk of having an invasive breast tumor that is visible on ultrasound but masked on mammography (mammography failure). This observational study included 1334 women with invasive breast cancer treated at a hospital-based diagnostic mammography unit. Ultrasound was available for the entire cohort as part of routine diagnosis. Computer-based threshold PMD assessments ("observed PMD") were carried out, and 363 texture features were obtained from each mammogram. Several variable selection and regression techniques (univariate selection, lasso, boosting, random forest) were applied to predict PMD from the texture features. The predicted PMD values were each used as a new predictor for masking in logistic regression models together with clinical predictors. These four logistic regression models with predicted PMD were compared among themselves and with a logistic regression model with observed PMD. The most accurate masking prediction was determined by cross-validation. About 120 of the 363 texture features were selected for predicting PMD. Density predictions with boosting were the best substitute for observed PMD in predicting masking. Overall, the corresponding logistic regression model performed better (cross-validated AUC, 0.747) than one without mammographic density (0.734), but less well than the one with the observed PMD (0.753). However, in patients with an assigned mammography failure risk >10%, covering about half of all masked tumors, the boosting-based model performed at least as accurately as the original PMD model. Automatically generated texture features can therefore replace semi-automatically determined PMD in a prediction model for mammography failure, allowing more than 50% of masked tumors to be discovered.
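A compact sketch of the two-stage idea, with synthetic data standing in for the texture features and masking outcomes (in practice both stages would be cross-validated, as in the study):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n, p = 1334, 363
texture = rng.normal(size=(n, p))                 # automated texture features
pmd = 30 + 2 * texture[:, :5].sum(axis=1) + rng.normal(0, 5, n)   # invented link
masked = (rng.random(n) < 1 / (1 + np.exp(-(pmd - 35) / 5))).astype(int)

stage1 = GradientBoostingRegressor().fit(texture, pmd)   # boosting predicts PMD
pmd_hat = stage1.predict(texture)                        # stands in for observed PMD
stage2 = LogisticRegression().fit(pmd_hat[:, None], masked)  # masking model
print(stage2.coef_)
```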
Factors related to the joint probability of flooding on paired streams
Koltun, G.F.; Sherwood, J.M.
1998-01-01
The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflow at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed that can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less-than-2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, S. A., E-mail: tsa@miee.ru; Savelyev, M. S.; Podgaetsky, V. M.
A threshold model is described which permits one to determine the properties of limiters for high-powered laser light. It takes into account the threshold characteristics of the nonlinear optical interaction between the laser beam and the limiter working material. The traditional non-threshold model is a particular case of the threshold model when the limiting threshold is zero. The nonlinear characteristics of carbon nanotubes in liquid and solid media are obtained from experimental Z-scan data. Specifically, the nonlinear threshold effect was observed for aqueous dispersions of nanotubes, but not for nanotubes in solid polymethylmethacrylate. The threshold model fits the experimental Z-scan data better than the non-threshold model. Output characteristics were obtained that integrally describe the nonlinear properties of the optical limiters.
Świąder, Mariusz J; Paruszewski, Ryszard; Łuszczki, Jarogniew J
2016-04-01
The aim of this study was to assess the anticonvulsant potency of 6 benzylamide derivatives [i.e., nicotinic acid benzylamide (Nic-BZA), picolinic acid 2-fluoro-benzylamide (2F-Pic-BZA), picolinic acid benzylamide (Pic-BZA), (RS)-methyl-alanine-benzylamide (Me-Ala-BZA), isonicotinic acid benzylamide (Iso-Nic-BZA), and (R)-N-methyl-proline-benzylamide (Me-Pro-BZA)] in the threshold for maximal electroshock (MEST)-induced seizures in mice. Electroconvulsions (seizure activity) were produced in mice by means of a current (sine-wave, 50 Hz, 500 V, strength from 4 to 18 mA, ear-clip electrodes, 0.2-s stimulus duration, tonic hindlimb extension taken as the endpoint). Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA administered systemically (ip) increased the threshold for maximal electroconvulsions in mice in a dose-dependent manner. Linear regression analysis of the doses of each compound and their corresponding threshold increases allowed determination of the threshold-increasing doses by 20% (TID20 values), i.e., the doses elevating the threshold in drug-treated animals by 20% over that in control animals. The experimentally derived TID20 values in the MEST test for Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA were 7.45 mg/kg, 7.72 mg/kg, 8.74 mg/kg, 15.11 mg/kg, 21.95 mg/kg, and 28.06 mg/kg, respectively. The studied benzylamide derivatives can thus be ranked by anticonvulsant potency in the MEST test as follows: Nic-BZA > 2F-Pic-BZA > Pic-BZA > Me-Ala-BZA > Iso-Nic-BZA > Me-Pro-BZA. Copyright © 2015 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
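The TID20 computation reduces to inverting a fitted dose-response line; a sketch with invented dose-response values:

```python
import numpy as np

dose = np.array([2.5, 5.0, 10.0, 20.0])          # mg/kg, hypothetical design
increase = np.array([6.0, 13.0, 27.0, 55.0])     # % threshold increase over control

slope, intercept = np.polyfit(dose, increase, 1) # linear regression of increase on dose
tid20 = (20.0 - intercept) / slope               # dose producing a 20% increase
print(f"TID20 ~ {tid20:.2f} mg/kg")
```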
NASA Astrophysics Data System (ADS)
Pleijel, H.; Danielsson, H.; Emberson, L.; Ashmore, M. R.; Mills, G.
Applications of a parameterised Jarvis-type multiplicative stomatal conductance model, with data collated from open-top chamber experiments on field-grown wheat and potato, were used to derive relationships between relative yield and stomatal ozone uptake. The relationships were based on thirteen experiments from four European countries for wheat and seven experiments from four European countries for potato. The parameterisation of the conductance model was based on both an extensive literature review and primary data. Application of the stomatal conductance models to the open-top chamber experiments resulted in improved linear regressions between relative yield and ozone uptake compared with earlier stomatal conductance models, both for wheat (r² = 0.83) and potato (r² = 0.76); the improvement was largest for potato. The relationships with the highest correlation were obtained using a stomatal ozone flux threshold. For both wheat and potato, the best-performing exposure index was AFst6 (accumulated stomatal flux of ozone above a flux rate threshold of 6 nmol O3 m⁻² projected sunlit leaf area s⁻¹, based on hourly values of ozone flux). The results demonstrate that flux-based models are now sufficiently well calibrated to be used with confidence to predict the effects of ozone on yield loss of major arable crops across Europe. Further studies, using innovations in stomatal conductance modelling and plant exposure experimentation, are needed if these models are to be further improved.
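A minimal sketch of a Jarvis-type multiplicative conductance model and the AFst6 accumulation rule (the limiting functions and constants below are generic placeholders, not the paper's wheat or potato parameterisation):

```python
import numpy as np

def g_sto(par, temp_c, vpd_kpa, g_max=450.0, f_min=0.1):
    # each limiting function scales maximal conductance into [0, 1]
    f_light = 1.0 - np.exp(-0.006 * par)                      # PAR response
    f_temp = np.clip(temp_c * (40.0 - temp_c) / 400.0, 0.0, 1.0)
    f_vpd = np.clip((3.5 - vpd_kpa) / (3.5 - 1.0), 0.0, 1.0)  # dry air closes stomata
    return g_max * np.maximum(f_min, f_light * f_temp * f_vpd)

print(g_sto(par=1200, temp_c=22, vpd_kpa=1.4))    # conductance for one hour's weather

# AFst6: accumulate hourly stomatal flux only above the 6 nmol m-2 s-1 rate threshold
hourly_flux = np.array([2.0, 5.5, 7.2, 9.0, 6.4])             # nmol O3 m-2 PLA s-1
afst6 = np.sum(np.maximum(hourly_flux - 6.0, 0.0)) * 3600 / 1e6   # mmol m-2
print(f"AFst6 = {afst6:.3f} mmol m-2")
```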
Narotam, Pradeep K; Morrison, John F; Schmidt, Michael D; Nathoo, Narendra
2014-04-01
Predictive modeling of emergent behavior, inherent to complex physiological systems, requires the analysis of large, complex clinical data streams currently being generated in the intensive care unit. Brain tissue oxygen protocols have yielded outcome benefits in traumatic brain injury (TBI), but the critical physiological thresholds for low brain oxygen have not been established for a dynamical pathophysiological system. High-frequency, multi-modal clinical data sets from 29 patients with severe TBI who underwent multi-modality neuro-clinical care monitoring and treatment with a brain oxygen protocol were analyzed. The inter-relationships between acute physiological parameters were determined using symbolic regression (SR) as the computational framework. The mean patient age was 44.4 ± 15 years, with a mean admission GCS of 6.6 ± 3.9. Sixty-three percent were injured in motor vehicle accidents, and the most common pathology was intra-cerebral hemorrhage (50%). Hospital discharge mortality was 21%, poor outcome occurred in 24% of patients, and good outcome occurred in 56% of patients. The critical point for low brain oxygen was intracranial pressure (ICP) ≥ 22.8 mm Hg, and for mortality, ICP ≥ 37.1 mm Hg. The upper therapeutic threshold for cerebral perfusion pressure (CPP) was 75 mm Hg. Eubaric hyperoxia significantly impacted partial pressure of oxygen in brain tissue (PbtO2) at all ICP levels. Optimal brain temperature (Tbr) was 34-35°C, with an adverse effect when Tbr ≥ 38°C. Survivors clustered at [Formula: see text] Hg vs. non-survivors [Formula: see text] 18 mm Hg. There were two mortality clusters for ICP: high ICP/low PbtO2 and low ICP/low PbtO2. Survivors maintained PbtO2 across all ranges of mean arterial pressure, in contrast to non-survivors. The final SR equation for cerebral oxygenation is: [Formula: see text]. The SR model of acute TBI advances new physiological thresholds or boundary conditions for acute TBI management: PbtO2 ≥ 25 mm Hg; ICP ≤ 22 mm Hg; CPP ≈ 60-75 mm Hg; and Tbr ≈ 34-37°C. SR is congruous with the emerging field of complexity science in the modeling of dynamical physiological systems, especially during pathophysiological states. The SR model of TBI is generalizable to known physical laws. This increase in entropy reduces uncertainty and improves predictive capacity. SR is an appropriate computational framework to enable future smart monitoring devices.
Stock, Matt S; Mota, Jacob A
2017-12-01
Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrate that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem to be groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. 
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
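A brief sketch of the WTD-prediction comparison described above, with synthetic stand-ins for the geospatial predictors (the feature set and data-generating process are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 2000
X = rng.normal(size=(n, 6))       # e.g., elevation, slope, NDVI, precipitation, ...
wtd = 5 + 3 * X[:, 0] - 2 * X[:, 1] + np.sin(X[:, 2]) + rng.normal(0, 1, n)

for model in (DecisionTreeRegressor(), RandomForestRegressor(n_estimators=200)):
    r2 = cross_val_score(model, X, wtd, cv=5, scoring="r2").mean()
    print(type(model).__name__, f"CV R^2 = {r2:.2f}")   # forest vs. single tree
```

Thresholding a downstream probability prediction, as the final paragraph describes, then converts the continuous map into the binary GDE/NON-GDE classification.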
Motor Unit Interpulse Intervals During High Force Contractions.
Stock, Matt S; Thompson, Brennan J
2016-01-01
We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.
Injury tolerance and moment response of the knee joint to combined valgus bending and shear loading.
Bose, Dipan; Bhalla, Kavi S; Untaroiu, Costin D; Ivarsson, B Johan; Crandall, Jeff R; Hurwitz, Shepard
2008-06-01
Valgus bending and shearing of the knee have been identified as primary mechanisms of injuries in a lateral loading environment applicable to pedestrian-car collisions. Previous studies have reported on the structural response of the knee joint to pure valgus bending and lateral shearing, as well as the estimated injury thresholds for the knee bending angle and shear displacement based on experimental tests. However, epidemiological studies indicate that most knee injuries are due to the combined effects of bending and shear loading. Therefore, characterization of knee stiffness for combined loading and the associated injury tolerances is necessary for developing vehicle countermeasures to mitigate pedestrian injuries. Isolated knee joint specimens (n=40) from postmortem human subjects were tested in valgus bending at a loading rate representative of a pedestrian-car impact. The effect of lateral shear force combined with the bending moment on the stiffness response and the injury tolerances of the knee was concurrently evaluated. In addition to the knee moment-angle response, the bending angle and shear displacement corresponding to the first instance of primary ligament failure were determined in each test. The failure displacements were subsequently used to estimate an injury threshold function based on a simplified analytical model of the knee. The validity of the determined injury threshold function was subsequently verified using a finite element model. Post-test necropsy of the knees indicated medial collateral ligament injury consistent with the clinical injuries observed in pedestrian victims. The moment-angle response in valgus bending was determined at quasistatic and dynamic loading rates and compared to previously published test data. The peak bending moment values scaled to an average adult male showed no significant change with variation in the superimposed shear load. An injury threshold function for the knee in terms of bending angle and shear displacement was determined by performing regression analysis on the experimental data. The threshold values of the bending angle (16.2 deg) and shear displacement (25.2 mm) estimated from the injury threshold function were in agreement with previously published knee injury threshold data. The continuous knee injury function expressed in terms of bending angle and shear displacement enabled injury prediction for combined loading conditions such as those observed in pedestrian-car collisions.
Boyne, Pierce; Buhr, Sarah; Rockwell, Bradley; Khoury, Jane; Carl, Daniel; Gerson, Myron; Kissela, Brett; Dunning, Kari
2015-10-01
Treadmill aerobic exercise improves gait, aerobic capacity, and cardiovascular health after stroke, but a lack of specificity in current guidelines could lead to underdosing or overdosing of aerobic intensity. The ventilatory threshold (VT) has been recommended as an optimal, specific starting point for continuous aerobic exercise. However, VT measurement is not available in clinical stroke settings. Therefore, the purpose of this study was to identify an accurate method to predict heart rate at the VT (HRVT) for use as a surrogate for VT. A cross-sectional design was employed. Using symptom-limited graded exercise test (GXT) data from 17 subjects more than 6 months poststroke, prediction methods for HRVT were derived by traditional target HR calculations (percentage of HRpeak achieved during GXT, percentage of peak HR reserve [HRRpeak], percentage of age-predicted maximal HR, and percentage of age-predicted maximal HR reserve) and by regression analysis. The validity of the prediction methods was then tested among 8 additional subjects. All prediction methods were validated by the second sample, so data were pooled to calculate refined prediction equations. HRVT was accurately predicted by 80% HRpeak (R, 0.62; standard deviation of error [SDerror], 7 bpm), 62% HRRpeak (R, 0.66; SDerror, 7 bpm), and regression models that included HRpeak (R, 0.62-0.75; SDerror, 5-6 bpm). Derived regression equations, 80% HRpeak and 62% HRRpeak, provide a specific target intensity for initial aerobic exercise prescription that should minimize underdosing and overdosing for persons with chronic stroke. The specificity of these methods may lead to more efficient and effective treatment for poststroke deconditioning. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A114).
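As a minimal illustration of the target-heart-rate arithmetic reported above, the following Python sketch implements the 80% HRpeak and 62% HRRpeak (Karvonen-style) rules; the heart-rate values are hypothetical, not study data.

```python
def hrvt_from_peak(hr_peak: float) -> float:
    """Predict HR at the ventilatory threshold as 80% of GXT peak HR."""
    return 0.80 * hr_peak

def hrvt_from_reserve(hr_peak: float, hr_rest: float) -> float:
    """Karvonen-style prediction: resting HR plus 62% of peak HR reserve."""
    return hr_rest + 0.62 * (hr_peak - hr_rest)

# Made-up values for a single subject
hr_peak, hr_rest = 140.0, 72.0
print(hrvt_from_peak(hr_peak))              # 112.0 bpm
print(hrvt_from_reserve(hr_peak, hr_rest))  # about 114.2 bpm
```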
Poly I:C-induced fever elevates threshold for shivering but reduces thermosensitivity in rabbits.
Tøien, Ø; Mercer, J B
1995-05-01
Shivering threshold and thermosensitivity were determined in six conscious rabbits at ambient temperatures (Ta) of 20 and 10°C before and at six different times after saline injection (0.15 ml iv) and polyriboinosinic-polyribocytidylic acid (poly I:C)-induced fever (5 μg/kg iv). Thermosensitivity was calculated by regression of metabolic heat production (M) on hypothalamic temperature (Thypo) during short periods (5-10 min) of square-wave cooling. Heat was extracted with a chronically implanted intravascular heat exchanger. Shivering threshold was calculated as the Thypo at which the thermosensitivity line crossed resting M as measured in afebrile animals at Ta 20°C. There were negligible changes in shivering threshold and thermosensitivity in saline-injected rabbits. In the febrile animals, shivering threshold generally followed the shape of the biphasic fever response. At Ta 20°C, shivering threshold was higher than regulated Thypo during the initial rising phase of fever and was lower during recovery. At Ta 10°C the shivering thresholds were always higher than regulated Thypo except during recovery. Thermosensitivity was reduced by 30-41% during fever.
On the degrees of freedom of reduced-rank estimators in multivariate regression
Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.
2015-01-01
We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155
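For readers unfamiliar with the naive estimator mentioned above, the sketch below computes the parameter-count degrees of freedom r(p + q - r) of a rank-r coefficient matrix alongside the singular values of the least-squares fit; the paper's exact unbiased estimator has a different closed form that is not reproduced here.

```python
import numpy as np

def naive_df(p: int, q: int, r: int) -> int:
    """Free-parameter count of a rank-r p x q coefficient matrix."""
    return r * (p + q - r)

rng = np.random.default_rng(0)
n, p, q, r = 100, 10, 8, 3
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, q))

B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)           # least-squares solution
U, d, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
fit_r = (U[:, :r] * d[:r]) @ Vt[:r]                      # rank-r fitted values
# The paper's exact unbiased estimator is a function of the thresholded
# singular values d; only the naive count is computed here.
print("naive df:", naive_df(p, q, r))                    # 3 * (10 + 8 - 3) = 45
```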
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui
2015-12-01
This paper presents the automatic segmentation and classification of Mycobacterium tuberculosis in conventional light microscopy images. First, candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shapes of the bacillus objects are characterized by Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching, and non-bacillus objects. We evaluated logistic regression, random forest, and intersection kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method achieves high robustness and accuracy; the logistic regression classifier performs best, with an accuracy of 91.68%.
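A hedged sketch of a marker-based watershed pipeline with Hu-moment shape features is given below; it substitutes a fixed-scale Gaussian filter and Otsu threshold for the paper's adaptive-scale filter and color model, so it illustrates the general approach rather than the authors' method.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation

def segment_and_describe(gray):
    """Marker-based watershed followed by per-object shape features."""
    smoothed = filters.gaussian(gray, sigma=2.0)       # fixed-scale stand-in
    mask = smoothed > filters.threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    labels = segmentation.watershed(-distance, markers, mask=mask)
    feats = []
    for region in measure.regionprops(labels):
        mu = measure.moments_central(region.image.astype(float))
        hu = measure.moments_hu(measure.moments_normalized(mu))
        feats.append({"label": region.label,
                      "eccentricity": region.eccentricity,
                      "hu_moments": hu})
    return labels, feats

rng = np.random.default_rng(0)
toy = filters.gaussian(rng.random((128, 128)), sigma=4)  # stand-in image
labels, feats = segment_and_describe(toy)
print(len(feats), "candidate objects")
```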
Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim
2015-08-01
Prediction of hearing loss in noisy workplaces is considered an important aspect of a hearing conservation program. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for predicting the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. Personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, data on five variables that can influence hearing loss were used to develop the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer containing between 5 and 15 neurons. The best network, with one hidden layer of ten neurons, accurately predicted the hearing loss threshold with RMSE = 2.6 dB and R2 = 0.89. The results also confirmed that neural networks provide more accurate predictions than multiple regression. Since occupational hearing loss is frequently non-curable, accurate predictions can be used by occupational health experts to modify and improve noise exposure conditions.
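Since the original model was built in MATLAB, the following is only an analogous sketch in Python of a one-hidden-layer feed-forward network with about ten hidden neurons; the five predictor variables and all values are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(210, 5))          # stand-ins for the five exposure variables
y = 25 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=2.5, size=210)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} dB, "
      f"R2 = {r2_score(y_te, pred):.2f}")
```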
Medicaid payment rates, case-mix reimbursement, and nursing home staffing, 1996-2004.
Feng, Zhanlian; Grabowski, David C; Intrator, Orna; Zinn, Jacqueline; Mor, Vincent
2008-01-01
We examined the impact of state Medicaid payment rates and case-mix reimbursement on direct care staffing levels in US nursing homes. We used a recent time series of national nursing home data from the Online Survey Certification and Reporting system for 1996-2004, merged with annual state Medicaid payment rates and case-mix reimbursement information. A 5-category response measure of total staffing levels was defined according to expert recommended thresholds, and examined in a multinomial logistic regression model. Facility fixed-effects models were estimated separately for Registered Nurse (RN), Licensed Practical Nurse (LPN), and Certified Nurse Aide (CNA) staffing levels measured as average hours per resident day. Higher Medicaid payment rates were associated with increases in total staffing levels to meet a higher recommended threshold. However, these gains in overall staffing were accompanied by a reduction of RN staffing and an increase in both LPN and CNA staffing levels. Under case-mix reimbursement, the likelihood of nursing homes achieving higher recommended staffing thresholds decreased, as did levels of professional staffing. Independent of the effects of state, market, and facility characteristics, there was a significant downward trend in RN staffing and an upward trend in both LPN and CNA staffing. Although overall staffing may increase in response to more generous Medicaid reimbursement, it may not translate into improvements in the skill mix of staff. Adjusting for reimbursement levels and resident acuity, total staffing has not increased after the implementation of case-mix reimbursement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
This paper investigates the quantitative relationship of ionizing radiation to the occurrence of posterior lenticular opacities among the survivors of the atomic bombings of Hiroshima and Nagasaki suggested by the DS86 dosimetry system. DS86 doses are available for 1983 (93.4%) of the 2124 atomic bomb survivors analyzed in 1982. The DS86 kerma neutron component for Hiroshima survivors is much smaller than its comparable T65DR component, but still 4.2-fold higher (0.38 Gy at 6 Gy) than that in Nagasaki (0.09 Gy at 6 Gy). Thus, if the eye is especially sensitive to neutrons, there may yet be some useful information on their effects, particularly in Hiroshima. The dose-response relationship has been evaluated as a function of the separately estimated gamma-ray and neutron doses. Among several different dose-response models without and with two thresholds, we have selected as the best model the one with the smallest χ2 or the largest log-likelihood value associated with the goodness of fit. The best fit is a linear gamma-linear neutron relationship which assumes different thresholds for the two types of radiation. Both gamma and neutron regression coefficients for the best-fitting model are positive and highly significant for the estimated DS86 eye organ dose.
Gabbett, Tim J
2010-10-01
Limited information exists on the training dose-response relationship in elite collision sport athletes. In addition, no study has developed an injury prediction model for collision sport athletes. The purpose of this study was to develop an injury prediction model for noncontact, soft-tissue injuries in elite collision sport athletes. Ninety-one professional rugby league players participated in this 4-year prospective study. This study was conducted in 2 phases. Firstly, training load and injury data were prospectively recorded over 2 competitive seasons in elite collision sport athletes. Training load and injury data were modeled using a logistic regression model with a binomial distribution (injury vs. no injury) and logit link function. Secondly, training load and injury data were prospectively recorded over a further 2 competitive seasons in the same cohort of elite collision sport athletes. An injury prediction model based on planned and actual training loads was developed and implemented to determine if noncontact, soft-tissue injuries could be predicted and therefore prevented in elite collision sport athletes. Players were 50-80% likely to sustain a preseason injury within the training load range of 3,000-5,000 units. These training load 'thresholds' were considerably reduced (1,700-3,000 units) in the late-competition phase of the season. A total of 159 noncontact, soft-tissue injuries were sustained over the latter 2 seasons. The percentage of true positive predictions was 62.3% (n = 121), whereas the total number of false positive and false negative predictions was 20 and 18, respectively. Players that exceeded the training load threshold were 70 times more likely to test positive for noncontact, soft-tissue injury, whereas players that did not exceed the training load threshold were injured 1/10 as often. These findings provide information on the training dose-response relationship and a scientific method of monitoring and regulating training load in elite collision sport athletes.
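The following sketch illustrates the kind of binomial logistic model described above, with simulated training-load and injury data; the dose-response parameters are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
load = rng.uniform(1000, 6000, size=400)            # weekly load, arbitrary units
p_true = 1 / (1 + np.exp(-(load - 4000) / 600))     # assumed dose-response
injury = rng.binomial(1, p_true)                    # injury vs. no injury

X = sm.add_constant(load)
fit = sm.GLM(injury, X, family=sm.families.Binomial()).fit()  # logit link
b0, b1 = fit.params
# Training load at which the predicted injury probability crosses 50%:
print("estimated load threshold:", -b0 / b1)
```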
A biometeorological model of an encephalitis vector
NASA Astrophysics Data System (ADS)
Raddatz, R. L.
1986-01-01
Multiple linear regression techniques and seven years of data were used to build a biometeorological model of Winnipeg's mean daily levels of Culex tarsalis Coquillett. An eighth year of data was used to test the model. Hydrologic accounting of precipitation, evapotranspiration and runoff provided estimates of wetness, while the warmness of the season was gauged in terms of the average temperature difference from normal and a threshold antecedent temperature regime. These factors were found to be highly correlated with the time series of Cx. tarsalis counts. The impact of mosquito adulticiding measures was included in the model via a control effectiveness parameter. An activity-level adjustment, based on mean daily temperatures, was also made to the counts. This model can, by monitoring the weather, provide forecasts of Cx. tarsalis populations for Winnipeg with a lead-time of three weeks, thereby contributing to an early warning of an impending Western Equine Encephalitis outbreak.
Directionality volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
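A minimal sketch of the volatility side of this analysis is shown below: an AR model is fitted to a synthetic EEG-like series and a GARCH(1,1) model is then fitted to the residuals, whose skewness and kurtosis are also inspected. It assumes the Python arch and statsmodels packages and is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from statsmodels.tsa.ar_model import AutoReg
from arch import arch_model

rng = np.random.default_rng(3)
eeg = np.cumsum(rng.standard_t(df=5, size=2000)) * 0.01  # synthetic stand-in

ar_fit = AutoReg(eeg, lags=8).fit()                  # linear AR baseline
resid = ar_fit.resid

garch = arch_model(resid, vol="GARCH", p=1, q=1, rescale=True).fit(disp="off")
print(garch.params)                                  # omega, alpha[1], beta[1]
print("skewness:", skew(resid), "excess kurtosis:", kurtosis(resid))
```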
NASA Astrophysics Data System (ADS)
Hoffman, A.; Forest, C. E.; Kemanian, A.
2016-12-01
A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g., temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to advance our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and by modifying local temperature and precipitation, while dust events (i.e., dust storms) affect crop yields by depleting the soil of nutrients or by defoliation via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response on the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g., random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields. This trend is often removed before regression with traditional crop models, but likely at the cost of removing climate information. Our random forest models consistently discover the positive trend without removing any additional data. The application of random forests as a statistical crop model provides insight into the impact of dust on yields in marginal food-producing regions.
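A toy version of the linear-versus-random-forest comparison is sketched below, with a synthetic yield series that contains a technology-like trend and a temperature threshold; the predictor names and coefficients are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300
year = np.arange(n) / n                       # technology-like trend variable
temp, precip, dust = rng.normal(size=(3, n))
# Threshold-like response: heat damage only above a temperature cutoff
yield_ = (2.0 + 1.5 * year - 0.8 * np.clip(temp - 0.5, 0, None)
          + 0.4 * precip - 0.3 * dust + rng.normal(scale=0.2, size=n))
X = np.column_stack([year, temp, precip, dust])

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=300,
                                                            random_state=0))]:
    r2 = cross_val_score(model, X, yield_, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R2 = {r2:.2f}")
```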
Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan
2016-03-01
Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation, which can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRI scans of rat brains acquired after electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r2 = 0.79, p < 0.008; r2 = 0.91, p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete-cell-death and no-cell-death thresholds was relatively narrow (0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup.
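For orientation, the sketch below implements the sigmoidal Peleg-Fermi survival curve with pulse-number-dependent parameters; the parameter values are illustrative and are not the fitted values from this study.

```python
import numpy as np

def peleg_fermi(E, n, Ec0=800.0, k1=0.03, A0=120.0, k2=0.02):
    """Surviving cell fraction at field E (V/cm) after n pulses."""
    Ec = Ec0 * np.exp(-k1 * n)   # critical field decreases with pulse number
    A = A0 * np.exp(-k2 * n)     # transition width also narrows
    return 1.0 / (1.0 + np.exp((E - Ec) / A))

E = np.linspace(0, 1500, 7)
for n in (10, 50, 90):
    print(n, np.round(peleg_fermi(E, n), 3))
# The field at which survival crosses a chosen cutoff (e.g. 1%) gives the
# IRE boundary; a looser cutoff plays the same role for BBB disruption.
```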
Carbonell, F; Bellec, P; Shmuel, A
2014-02-01
The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their impact on functional connectivity in the resting state. © 2013.
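The IGAFC index as defined above is straightforward to compute; the following sketch does so for hypothetical time-by-voxel data, with the seed time course chosen arbitrarily.

```python
import numpy as np

def igafc(voxels: np.ndarray, seed: np.ndarray) -> np.ndarray:
    """Voxel-wise IGAFC: corr(voxel, GAS) times corr(seed, GAS)."""
    gas = voxels.mean(axis=1)                      # global average signal
    z = lambda x: (x - x.mean(0)) / x.std(0)       # standardize over time
    gas_z, seed_z = z(gas), z(seed)
    r_voxel_gas = (z(voxels) * gas_z[:, None]).mean(axis=0)
    r_seed_gas = (seed_z * gas_z).mean()
    return r_voxel_gas * r_seed_gas

rng = np.random.default_rng(5)
data = rng.normal(size=(200, 1000))                # 200 time points, 1000 voxels
seed = data[:, :10].mean(axis=1)                   # toy seed region
print(igafc(data, seed)[:5])
```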
Seasonal variation in sports participation.
Schüttoff, Ute; Pawlowski, Tim
2018-02-01
This study explores indicators describing socio-demographics, sports participation characteristics, and motives which are associated with variation in sports participation across seasons. Data were drawn from the German Socio-Economic Panel, which contains detailed information on the sports behaviour of adults in Germany. Overall, two different measures of seasonal variation are developed and used as dependent variables in our regression models. The first variable measures the coefficient of (seasonal) variation in sport-related energy expenditure per week. The second variable measures whether activity drops below the threshold defined by the World Health Organization (WHO). Results suggest that the organisational setting, the intensity and number of sports practised, and the motive for participation are strongly correlated with the variation measures used. For example, participation in a sports club and participation in a commercial facility are both associated with reduced seasonal variation and a significantly higher probability of participating at a volume above the WHO threshold across all seasons. These findings give some impetus for policymaking and the planning of sports programmes as well as future research directions.
Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.
Houpt, Joseph W; Bittner, Jennifer L
2018-07-01
Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
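A compact sketch of the general approach, a hierarchical Bayesian logistic regression linking stimulus intensity to accuracy with subject-level psychometric parameters, is given below using PyMC; the priors and simulated data are assumptions, not the authors' model.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
n_subj, n_trials = 8, 120
subj = np.repeat(np.arange(n_subj), n_trials)
x = rng.uniform(-2, 2, size=n_subj * n_trials)        # stimulus intensity
true_a = rng.normal(0.0, 0.5, n_subj)
true_b = rng.normal(2.0, 0.3, n_subj)
y = rng.binomial(1, 1 / (1 + np.exp(-(true_a[subj] + true_b[subj] * x))))

with pm.Model():
    mu_a, mu_b = pm.Normal("mu_a", 0, 1), pm.Normal("mu_b", 0, 1)
    sd_a, sd_b = pm.HalfNormal("sd_a", 1), pm.HalfNormal("sd_b", 1)
    a = pm.Normal("a", mu_a, sd_a, shape=n_subj)      # subject intercepts
    b = pm.Normal("b", mu_b, sd_b, shape=n_subj)      # subject slopes
    p = pm.math.invlogit(a[subj] + b[subj] * x)
    pm.Bernoulli("obs", p=p, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
# Each subject's 50%-accuracy threshold on the intensity scale is -a/b.
```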
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
Mandelbaum, Tal; Lee, Joon; Scott, Daniel J; Mark, Roger G; Malhotra, Atul; Howell, Michael D; Talmor, Daniel
2013-03-01
The observation periods and thresholds of serum creatinine and urine output defined in the Acute Kidney Injury Network (AKIN) criteria were not empirically derived. By continuously varying creatinine/urine output thresholds as well as the observation period, we sought to investigate the empirical relationships among creatinine, oliguria, in-hospital mortality, and receipt of renal replacement therapy (RRT). Using a high-resolution database (Multiparameter Intelligent Monitoring in Intensive Care II), we extracted data from 17,227 critically ill patients with an in-hospital mortality rate of 10.9 %. The 14,526 patients had urine output measurements. Various combinations of creatinine/urine output thresholds and observation periods were investigated by building multivariate logistic regression models for in-hospital mortality and RRT predictions. For creatinine, both absolute and percentage increases were analyzed. To visualize the dependence of adjusted mortality and RRT rate on creatinine, the urine output, and the observation period, we generated contour plots. Mortality risk was high when absolute creatinine increase was high regardless of the observation period, when percentage creatinine increase was high and the observation period was long, and when oliguria was sustained for a long period of time. Similar contour patterns emerged for RRT. The variability in predictive accuracy was small across different combinations of thresholds and observation periods. The contour plots presented in this article complement the AKIN definition. A multi-center study should confirm the universal validity of the results presented in this article.
Identifying a rainfall event threshold triggering herbicide leaching by preferential flow
NASA Astrophysics Data System (ADS)
McGrath, G. S.; Hinz, C.; Sivapalan, M.; Dressel, J.; Pütz, T.; Vereecken, H.
2010-02-01
How can leaching risk be assessed if the chemical flux and/or the toxicity is highly uncertain? For many strongly sorbing pesticides it is known that their transport through the unsaturated zone occurs intermittently through preferential flow, triggered by significant rainfall events. In these circumstances the timing and frequency of these rainfall events may allow quantification of leaching risk to overcome the limitations of flux prediction. In this paper we analyze the leaching behavior of bromide and two herbicides, methabenzthiazuron and ethidimuron, using data from twelve uncropped lysimeters, with high-resolution climate data, in order to identify the rainfall controls on rapid solute leaching. A regression tree analysis suggested that a coarse-scale fortnightly to monthly water balance was a good predictor of short-term increases in drainage and bromide transport. Significant short-term herbicide leaching, however, was better predicted by the occurrence of a single storm with a depth greater than a 19 mm threshold. Sampling periods where rain events exceeded this threshold accounted for between 38% and 56% of the total mass of herbicides leached during the experiment. The same threshold only accounted for between 1% and 10% of the total mass of bromide leached. On the basis of these results, we conclude that in this system, the leaching risks of strongly sorbing chemicals can be quantified by the timing and frequency of these large rainfall events. Empirical and modeling approaches are suggested to apply this frequentist approach to leaching risk assessment to other soil-climate systems.
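A regression tree naturally exposes this kind of storm-depth threshold as its first split, as the sketch below shows on synthetic leaching data built around a 19 mm cutoff.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
max_storm = rng.gamma(shape=2.0, scale=8.0, size=250)   # largest storm (mm)
leached = np.where(max_storm > 19,
                   rng.exponential(5.0, 250),           # rapid leaching regime
                   rng.exponential(0.3, 250))           # background regime

tree = DecisionTreeRegressor(max_depth=1, random_state=0)
tree.fit(max_storm.reshape(-1, 1), leached)
print("learned storm-depth split:", tree.tree_.threshold[0], "mm")
```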
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
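The censoring operation and its effect on a pointwise regression slope can be illustrated in a few lines; the series below is synthetic and the 20 dB criterion follows the abstract.

```python
import numpy as np
from scipy import stats

criterion = 20.0                                   # dB censoring criterion
years = np.arange(0, 4.5, 0.5)                     # 6-monthly visits
rng = np.random.default_rng(8)
sensitivity = 28 - 2.0 * years + rng.normal(0, 2, years.size)

censored = np.maximum(sensitivity, criterion)      # floor values at criterion
for label, series in [("raw", sensitivity), ("censored", censored)]:
    slope, _, _, p, _ = stats.linregress(years, series)
    print(f"{label}: slope = {slope:.2f} dB/yr, p = {p:.3f}")
```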
Economic Injury Level of the Neotropical Brown Stink Bug Euschistus heros (F.) on Cotton Plants.
Soria, M F; Degrande, P E; Panizzi, A R; Toews, M D
2017-06-01
In Brazil, the Neotropical brown stink bug, Euschistus heros (F.) (Hemiptera: Pentatomidae), commonly disperses from soybeans to cotton fields. The establishment of an economic treatment threshold for this pest on cotton crops is required. Infestation levels of adults of E. heros were evaluated on cotton plants at preflowering, early flowering, boll filling, and full maturity by assessing external and internal symptoms of injury on bolls, seed cotton/lint production, and fiber quality parameters. A completely randomized experiment was designed to infest cotton plants in a greenhouse with 0, 2, 4, 6, and 8 bugs/plant, except at the full-maturity stage, in which only infestation with 8 bugs/plant and uninfested plants were evaluated. Results indicated that the preflowering, early-flowering, and full-maturity stages were not affected by E. heros. A linear regression model showed a significant increase in the number of internal punctures and warts in the boll-filling stage as the population of bugs increased. The average number of loci with mottled immature fibers was significantly higher at 4, 6, and 8 bugs compared with uninfested plants, with the data following a quadratic regression model. Seed cotton and lint yields were reduced by 18% and 25%, respectively, at the maximum level of infestation (ca. 8 bugs/plant) in the boll-filling stage. The micronaire index decreased and the yellowing index increased as infestation levels rose. The economic injury level of E. heros on cotton plants at the boll-filling stage was determined to be 0.5 adult/plant. Based on that, a treatment threshold of 0.1 adult/plant can be recommended to avoid economic losses.
Urquhart, Andrew G.; Hassett, Afton L.; Tsodikov, Alex; Hallstrom, Brian R.; Wood, Nathan I.; Williams, David A.; Clauw, Daniel J.
2015-01-01
Objective While psychosocial factors have been associated with poorer outcomes after knee and hip arthroplasty, we hypothesized that augmented pain perception, as occurs in conditions such as fibromyalgia, may account for decreased responsiveness to primary knee and hip arthroplasty. Methods A prospective, observational cohort study was conducted. Preoperative phenotyping was conducted using validated questionnaires to assess pain, function, depression, anxiety, and catastrophizing. Participants also completed the 2011 fibromyalgia survey questionnaire, which addresses the widespread body pain and comorbid symptoms associated with characteristics of fibromyalgia. Results Of the 665 participants, 464 were retained 6 months after surgery. Since individuals who met criteria for being classified as having fibromyalgia were expected to respond less favorably, all primary analyses excluded these individuals (6% of the cohort). In the multivariate linear regression model predicting change in knee/hip pain (primary outcome), a higher fibromyalgia survey score was independently predictive of less improvement in pain (estimate −0.25, SE 0.044; P < 0.00001). Lower baseline joint pain scores and knee (versus hip) arthroplasty were also predictive of less improvement (R2 = 0.58). The same covariates were predictive in the multivariate logistic regression model for change in knee/hip pain, with a 17.8% increase in the odds of failure to meet the threshold of 50% improvement for every 1‐point increase in fibromyalgia survey score (P = 0.00032). The fibromyalgia survey score was also independently predictive of change in overall pain and patient global impression of change. Conclusion Our findings indicate that the fibromyalgia survey score is a robust predictor of poorer arthroplasty outcomes, even among individuals whose score falls well below the threshold for the categorical diagnosis of fibromyalgia. PMID:25772388
Predicting arsenic in drinking water wells of the Central Valley, California
Ayotte, Joseph; Nolan, Bernard T.; Gronberg, JoAnn M.
2016-01-01
Probabilities of arsenic in groundwater at depths used for domestic and public supply in the Central Valley of California are predicted using weak-learner ensemble models (boosted regression trees, BRT) and more traditional linear models (logistic regression, LR). Both methods captured major processes that affect arsenic concentrations, such as the chemical evolution of groundwater, redox differences, and the influence of aquifer geochemistry. Inferred flow-path length was the most important variable, but near-surface aquifer geochemical data also were significant. A unique feature of this study was that previously predicted nitrate concentrations in three dimensions were themselves predictive of arsenic, revealing an important redox effect at >10 μg/L: arsenic was low where nitrate was high. Additionally, a variable representing three-dimensional aquifer texture from the Central Valley Hydrologic Model was an important predictor, with high arsenic associated with fine-grained aquifer sediment. BRT outperformed LR at the 5 μg/L threshold in all five predictive performance measures and at 10 μg/L in four out of five measures. BRT yielded higher prediction sensitivity (39%) than LR (18%) at the 10 μg/L threshold, a useful outcome because a major objective of the modeling was to improve the ability to predict high-arsenic areas.
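A schematic version of the BRT-versus-LR comparison is sketched below on synthetic data containing an interaction, which is the kind of structure boosted trees capture and a main-effects logistic regression misses; the features stand in for the hydrogeochemical predictors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(9)
X = rng.normal(size=(2000, 6))                      # e.g. flow path, nitrate, ...
logit = 1.2 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] * X[:, 3]  # interaction
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # exceeds arsenic threshold?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("BRT", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    sens = recall_score(y_te, clf.predict(X_te))    # prediction sensitivity
    print(f"{name}: sensitivity = {sens:.2f}")
```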
Hsu, Ruey-Fen; Ho, Chi-Kung; Lu, Sheng-Nan; Chen, Shun-Sheng
2010-10-01
An objective investigation is needed to verify the existence and severity of hearing impairments resulting from work-related, noise-induced hearing loss in arbitration of medicolegal aspects. We investigated the accuracy of multiple-frequency auditory steady-state responses (Mf-ASSRs) between subjects with sensorineural hearing loss (SNHL) with and without occupational noise exposure. Cross-sectional study. Tertiary referral medical centre. Pure-tone audiometry and Mf-ASSRs were recorded in 88 subjects (34 patients had occupational noise-induced hearing loss [NIHL], 36 patients had SNHL without noise exposure, and 18 volunteers were normal controls). Inter- and intragroup comparisons were made. A predictive equation was derived using multiple linear regression analysis. ASSRs and pure-tone thresholds (PTTs) showed a strong correlation for all subjects (r = .77-.94), and the relationship was captured by the derived regression equation. The differences between the ASSR and PTT were significantly higher for the NIHL group than for the subjects with non-noise-induced SNHL (p < .001). Mf-ASSR is a promising tool for objectively evaluating hearing thresholds. Predictive value may be lower in subjects with occupational hearing loss. Regardless of carrier frequencies, the severity of hearing loss affects the steady-state response. Moreover, the ASSR may assist in detecting noise-induced injury of the auditory pathway. A multiple linear regression equation that takes all effect factors into consideration was derived to predict thresholds accurately.
Ye, Xin; Beck, Travis W; DeFreitas, Jason M; Wages, Nathan P
2015-04-01
The aim of this study was to compare the acute effects of concentric versus eccentric exercise on motor control strategies. Fifteen men performed six sets of 10 repetitions of maximal concentric exercises or eccentric isokinetic exercises with their dominant elbow flexors on separate experimental visits. Before and after the exercise, maximal strength testing and submaximal trapezoid isometric contractions (40% of the maximal force) were performed. Both exercise conditions caused significant strength loss in the elbow flexors, but the loss was greater following the eccentric exercise (t=2.401, P=.031). The surface electromyographic signals obtained from the submaximal trapezoid isometric contractions were decomposed into individual motor unit action potential trains. For each submaximal trapezoid isometric contraction, the relationship between the average motor unit firing rate and the recruitment threshold was examined using linear regression analysis. In contrast to the concentric exercise, which did not cause significant changes in the mean linear slope coefficient and y-intercept of the linear regression line, the eccentric exercise resulted in a lower mean linear slope and an increased mean y-intercept, thereby indicating that increasing the firing rates of low-threshold motor units may be more important than recruiting high-threshold motor units to compensate for eccentric exercise-induced strength loss. Copyright © 2014 Elsevier B.V. All rights reserved.
Qin, Gang; Bian, Zhao-Lian; Shen, Yi; Zhang, Lei; Zhu, Xiao-Hong; Liu, Yan-Mei; Shao, Jian-Guo
2016-06-04
Several models have been proposed to predict the short-term outcome of acute-on-chronic liver failure (ACLF) after treatment. We aimed to determine whether better decisions about artificial liver support system (ALSS) treatment could be made with a model than without, through decision curve analysis (DCA). The medical profiles of a cohort of 232 patients with hepatitis B virus (HBV)-associated ACLF were retrospectively analyzed to explore the role of plasma prothrombin activity (PTA), the model for end-stage liver disease (MELD) and a logistic regression model (LRM) in identifying patients who could benefit from ALSS. The accuracy and reliability of PTA, MELD and LRM were evaluated with previously reported cutoffs. DCA was performed to evaluate the clinical role of these models in predicting the treatment outcome. With a cut-off value of 0.2, LRM had a sensitivity of 92.6%, a specificity of 42.3% and an area under the receiver operating characteristic curve (AUC) of 0.68, showing superior discrimination over PTA and MELD. DCA revealed that LRM-guided ALSS treatment was superior to other strategies, including "treating all" and MELD-guided therapy, for midrange threshold probabilities of 16% to 64%. The use of LRM-guided ALSS treatment could increase both the accuracy and efficiency of this procedure, allowing the avoidance of unnecessary ALSS.
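Decision curve analysis rests on a simple net-benefit calculation, sketched below for hypothetical predicted probabilities over the 16-64% threshold range discussed above.

```python
import numpy as np

def net_benefit(y: np.ndarray, p: np.ndarray, pt: float) -> float:
    """Net benefit of treating when predicted probability >= pt."""
    treat = p >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(10)
y = rng.binomial(1, 0.3, size=500)                  # hypothetical outcomes
p = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)

for pt in np.arange(0.16, 0.65, 0.12):
    nb_model = net_benefit(y, p, pt)
    nb_all = net_benefit(y, np.ones_like(p), pt)    # "treat all" strategy
    print(f"pt={pt:.2f}: model={nb_model:.3f}, treat-all={nb_all:.3f}")
```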
Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T
2018-05-01
We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs), using data from groundwater- and surface-water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low, at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
Predicting the susceptibility to gully initiation in data-poor regions
NASA Astrophysics Data System (ADS)
Dewitte, Olivier; Daoudi, Mohamed; Bosco, Claudio; Van Den Eeckhaut, Miet
2015-01-01
Permanent gullies are common features in many landscapes and quite often they represent the dominant soil erosion process. Once a gully has initiated, field evidence shows that gully channel formation and headcut migration rapidly occur. In order to prevent the undesired effects of gullying, there is a need to predict the places where new gullies might initiate. From detailed field measurements, studies have demonstrated strong inverse relationships between slope gradient of the soil surface (S) and drainage area (A) at the point of channel initiation across catchments in different climatic and morphological environments. Such slope-area thresholds (S-A) can be used to predict locations in the landscape where gullies might initiate. However, acquiring S-A requires detailed field investigations and accurate high resolution digital elevation data, which are usually difficult to acquire. To circumvent this issue, we propose a two-step method that uses published S-A thresholds and a logistic regression analysis (LR). S-A thresholds from the literature are used as proxies of field measurement. The method is calibrated and validated on a watershed, close to the town of Algiers, northern Algeria, where gully erosion affects most of the slopes. The gullies extend up to several kilometres in length and cover 16% of the study area. First we reconstruct the initiation areas of the existing gullies by applying S-A thresholds for similar environments. Then, using the initiation area map as the dependent variable with combinations of topographic and lithological predictor variables, we calibrate several LR models. It provides relevant results in terms of statistical reliability, prediction performance, and geomorphological significance. This method using S-A thresholds with data-driven assessment methods like LR proves to be efficient when applied to common spatial data and establishes a methodology that will allow similar studies to be undertaken elsewhere.
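The two-step logic can be sketched as follows: a published slope-area threshold generates proxy initiation labels, and a logistic regression is then fitted to those labels; the threshold coefficients and terrain data here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 5000
area = rng.lognormal(mean=8, sigma=1.5, size=n)       # drainage area A (m2)
slope = rng.lognormal(mean=-2, sigma=0.5, size=n)     # slope gradient S (m/m)

a, b = 0.05, 0.4                                      # hypothetical S-A curve
initiation = (slope >= a * area ** -b).astype(int)    # step 1: proxy labels

X = np.column_stack([np.log(area), np.log(slope),
                     rng.integers(0, 3, n)])          # plus a lithology class
lr = LogisticRegression(max_iter=1000).fit(X, initiation)  # step 2
print("susceptibility, first 5 cells:", lr.predict_proba(X)[:5, 1].round(2))
```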
Quantitative somatosensory testing of the penis: optimizing the clinical neurological examination.
Bleustein, Clifford B; Eckholdt, Haftan; Arezzo, Joseph C; Melman, Arnold
2003-06-01
Quantitative somatosensory testing, including vibration, pressure, spatial perception and thermal thresholds of the penis, has demonstrated neuropathy in patients with a history of erectile dysfunction of all etiologies. We evaluated which measurement of neurological function of the penis was best at predicting erectile dysfunction and examined the impact of location on the penis for quantitative somatosensory testing measurements. A total of 107 patients were evaluated. All patients were required to complete the erectile function domain of the International Index of Erectile Function (IIEF) questionnaire, of whom 24 had no complaints of erectile dysfunction and scored within the "normal" range on the IIEF. Patients were subsequently tested on the ventral middle penile shaft, proximal dorsal midline penile shaft and glans penis (with foreskin retracted) for vibration, pressure, spatial perception, and warm and cold thermal thresholds. Mixed-model repeated-measures analysis of variance controlling for age, diabetes and hypertension revealed that the method of measurement (quantitative somatosensory testing) was predictive of IIEF score (F = 209, df = 4,1315, p < 0.001), while the site of measurement on the penis was not. To determine the best method of measurement, we used hierarchical regression, which revealed that warm temperature was the best predictor of erectile dysfunction, with pseudo-R2 = 0.19, p < 0.0007. There was no significant improvement in predicting erectile dysfunction when another test was added. Using a warm thermal threshold of 37°C and greater yielded a sensitivity of 88.5%, specificity of 70.0% and positive predictive value of 85.5%. Quantitative somatosensory testing using warm thermal threshold measurements taken at the glans penis can be used alone to assess the neurological status of the penis. Warm thermal thresholds alone offer a quick, noninvasive, accurate method of evaluating penile neuropathy in an office setting.
Magnesium Sulfate Only Slightly Reduces the Shivering Threshold in Humans
Wadhwa, Anupama; Sengupta, Papiya; Durrani, Jaleel; Akça, Ozan; Lenhardt, Rainer; Sessler, Daniel I.
2005-01-01
Background: Hypothermia may be an effective treatment for stroke or acute myocardial infarction; however, it provokes vigorous shivering, which causes potentially dangerous hemodynamic responses and prevents further hypothermia. Magnesium is an attractive antishivering agent because it is used for treatment of postoperative shivering and provides protection against ischemic injury in animal models. We tested the hypothesis that magnesium reduces the threshold (triggering core temperature) and gain of shivering without substantial sedation or muscle weakness. Methods: We studied nine healthy male volunteers (18-40 yr) on two randomly assigned treatment days: 1) Control and 2) Magnesium (80 mg·kg-1 followed by infusion at 2 g·h-1). Lactated Ringer's solution (4°C) was infused via a central venous catheter over a period of approximately 2 hours to decrease tympanic membrane temperature ≈1.5°C·h-1. A significant and persistent increase in oxygen consumption identified the threshold. The gain of shivering was determined by the slope of oxygen consumption vs. core temperature regression. Sedation was evaluated using verbal rating score (VRS, 0-10) and bispectral index of the EEG (BIS). Peripheral muscle strength was evaluated using dynamometry and spirometry. Data were analyzed using repeated-measures ANOVA; P<0.05 was statistically significant. Results: Magnesium reduced the shivering threshold (36.3±0.4 [mean±SD] vs. 36.6±0.3°C, P=0.040). It did not affect the gain of shivering (Control: 437±289, Magnesium: 573±370 ml·min-1·°C-1, P=0.344). The magnesium bolus did not produce significant sedation or appreciably reduce muscle strength. Conclusions: Magnesium significantly reduced the shivering threshold; however, due to the modest absolute reduction, this finding is considered to be clinically unimportant for induction of therapeutic hypothermia. PMID:15749735
Iler, Amy M; Høye, Toke T; Inouye, David W; Schmidt, Niels M
2013-08-19
Many alpine and subalpine plant species exhibit phenological advancements in association with earlier snowmelt. While the phenology of some plant species does not advance beyond a threshold snowmelt date, the prevalence of such threshold phenological responses within plant communities is largely unknown. We therefore examined the shape of flowering phenology responses (linear versus nonlinear) to climate using two long-term datasets from plant communities in snow-dominated environments: Gothic, CO, USA (1974-2011) and Zackenberg, Greenland (1996-2011). For a total of 64 species, we determined whether a linear or nonlinear regression model best explained interannual variation in flowering phenology in response to increasing temperatures and advancing snowmelt dates. The most common nonlinear trend was for species to flower earlier as snowmelt advanced, with either no change or a slower rate of change when snowmelt was early (average 20% of cases). By contrast, some species advanced their flowering at a faster rate over the warmest temperatures relative to cooler temperatures (average 5% of cases). Thus, some species seem to be approaching their limits of phenological change in response to snowmelt but not temperature. Such phenological thresholds could either be a result of minimum springtime photoperiod cues for flowering or a slower rate of adaptive change in flowering time relative to changing climatic conditions.
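The linear-versus-nonlinear comparison can be illustrated with a two-segment (threshold) model compared against a straight line by AIC, as in the sketch below on synthetic snowmelt and flowering dates.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, b0, b1):
    return b0 + b1 * x

def segmented(x, b0, b1, brk):
    """Slope b1 above the breakpoint, flat below it."""
    return b0 + b1 * np.maximum(x - brk, 0.0)

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(12)
snowmelt = rng.uniform(100, 180, 80)                  # day of year
flower = segmented(snowmelt, 150, 0.8, 130) + rng.normal(0, 3, 80)

p_lin, _ = curve_fit(linear, snowmelt, flower)
p_seg, _ = curve_fit(segmented, snowmelt, flower, p0=[150, 1, 140])
print("AIC linear:   ", aic(flower, linear(snowmelt, *p_lin), 2))
print("AIC segmented:", aic(flower, segmented(snowmelt, *p_seg), 3))
```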
Rao, Leela E.; Matchett, John R.; Brooks, Matthew L.; Johns, Robert; Minnich, Richard A.; Allen, Edith B.
2014-01-01
Although precipitation is correlated with fire size in desert ecosystems and is typically used as an indirect surrogate for fine fuel load, a direct link between fine fuel biomass and fire size has not been established. In addition, nitrogen (N) deposition can affect fire risk through its fertilisation effect on fine fuel production. In this study, we examine the relationships between fire size and precipitation, N deposition and biomass with emphasis on identifying biomass and N deposition thresholds associated with fire spreading across the landscape. We used a 28-year fire record of 582 burns from low-elevation desert scrub to evaluate the relationship of precipitation, N deposition and biomass with the distribution of fire sizes using quantile regression. We found that models using annual biomass have similar predictive ability to those using precipitation and N deposition at the lower to intermediate portions of the fire size distribution. No distinct biomass threshold was found, although within the 99th percentile of the distribution fire size increased with greater than 125 g m–2 of winter fine fuel production. The study did not produce an N deposition threshold, but did validate the value of 125 g m–2 of fine fuel for spread of fires.
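Quantile regression of fire size on fine-fuel biomass can be sketched with statsmodels, as below; the synthetic data simply illustrate that coefficients may differ between the median and the 99th percentile.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
biomass = rng.uniform(20, 250, 400)                   # g m^-2 winter fuels
fire = rng.lognormal(mean=np.log(biomass) * 0.8 - 2, sigma=0.9)
df = pd.DataFrame({"fire_size": fire, "biomass": biomass})

for q in (0.50, 0.90, 0.99):
    res = smf.quantreg("fire_size ~ biomass", df).fit(q=q)
    print(f"q={q:.2f}: biomass coef = {res.params['biomass']:.3f}")
```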
Kim, D.G.; Ferris, H.
2002-01-01
To determine the economic threshold level, oriental melon (Cucumis melo L. cv. Geumssaragi-euncheon) grafted on Shintozoa (Cucurbita maxima × Cu. moschata) was planted in plots (2 × 3 m) under a plastic film in February with a range of initial population densities (Pi) of Meloidogyne arenaria. The relationships of early, late, and total yield to Pi measured in September and January were adequately described by both linear regression and the Seinhorst damage model. Initial nematode densities in September in excess of 14 second-stage juveniles (J2)/100 cm³ soil caused losses in total yields that exceeded the economic threshold and indicate the need for fosthiazate nematicide treatment at current costs. Differences in yield-loss relationships to Pi between early- and late-season harvests enhance the resolution of the management decision and suggest approaches for optimizing returns. Determination of population levels for advisory purposes can be based on assay samples taken several months before planting, which allows time for implementation of management procedures. We introduce (i) an amendment of the economic threshold definition to reflect efficacy of the nematode management procedure under consideration, and (ii) the concept of profit limit as the nematode population at which net returns from the system will become negative. PMID:19265907
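For reference, the Seinhorst damage model used above takes the form y = 1 for P <= T and y = m + (1 - m) z^(P - T) for P > T, where P is the initial nematode density, T the tolerance limit, m the minimum relative yield, and 0 < z < 1; the sketch below fits it to synthetic relative-yield data.

```python
import numpy as np
from scipy.optimize import curve_fit

def seinhorst(P, m, z, T):
    """Relative yield under the Seinhorst damage model."""
    return np.where(P <= T, 1.0, m + (1.0 - m) * z ** (P - T))

rng = np.random.default_rng(14)
Pi = np.concatenate([[0, 2, 5], np.geomspace(10, 500, 12)])  # J2/100 cm3 soil
y_rel = seinhorst(Pi, 0.35, 0.99, 14) + rng.normal(0, 0.03, Pi.size)

(m, z, T), _ = curve_fit(seinhorst, Pi, y_rel, p0=[0.4, 0.98, 10],
                         bounds=([0, 0.9, 0], [1, 1, 50]))
print(f"m={m:.2f}, z={z:.3f}, tolerance limit T={T:.1f} J2/100 cm3")
```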
QSAR Modeling of Rat Acute Toxicity by Oral Exposure
Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander
2009-01-01
Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT’s training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371
Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.
Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander
2009-12-01
Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
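The consensus step is essentially an applicability-domain-masked average of per-model predictions, as in the sketch below; the predictions and domain flags are placeholders.

```python
import numpy as np

rng = np.random.default_rng(15)
n_models, n_compounds = 5, 12
preds = rng.normal(loc=2.5, scale=0.4, size=(n_models, n_compounds))
in_domain = rng.random((n_models, n_compounds)) > 0.3   # AD membership flags

masked = np.where(in_domain, preds, np.nan)             # drop out-of-domain
consensus = np.nanmean(masked, axis=0)                  # average over in-domain
coverage = in_domain.any(axis=0)                        # compound covered at all?
print(np.round(consensus, 2))
print(coverage.sum(), "of", n_compounds, "compounds covered")
```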
Defining a Cancer Dependency Map.
Tsherniak, Aviad; Vazquez, Francisca; Montgomery, Phil G; Weir, Barbara A; Kryukov, Gregory; Cowley, Glenn S; Gill, Stanley; Harrington, William F; Pantel, Sasha; Krill-Burger, John M; Meyers, Robin M; Ali, Levi; Goodale, Amy; Lee, Yenarae; Jiang, Guozhi; Hsiao, Jessica; Gerath, William F J; Howell, Sara; Merkel, Erin; Ghandi, Mahmoud; Garraway, Levi A; Root, David E; Golub, Todd R; Boehm, Jesse S; Hahn, William C
2017-07-27
Most human epithelial tumors harbor numerous alterations, making it difficult to predict which genes are required for tumor survival. To systematically identify cancer dependencies, we analyzed 501 genome-scale loss-of-function screens performed in diverse human cancer cell lines. We developed DEMETER, an analytical framework that segregates on- from off-target effects of RNAi. 769 genes were differentially required in subsets of these cell lines at a threshold of six SDs from the mean. We found predictive models for 426 dependencies (55%) by nonlinear regression modeling considering 66,646 molecular features. Many dependencies fall into a limited number of classes, and unexpectedly, in 82% of models, the top biomarkers were expression based. We demonstrated the basis behind one such predictive model linking hypermethylation of the UBB ubiquitin gene to a dependency on UBC. Together, these observations provide a foundation for a cancer dependency map that facilitates the prioritization of therapeutic targets. Copyright © 2017 Elsevier Inc. All rights reserved.
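The "six SDs from the mean" rule above is a simple thresholding step once dependency scores are in hand. A minimal sketch on a synthetic gene-by-cell-line score matrix (this is the selection rule only, not DEMETER):

```python
# Minimal sketch of a six-standard-deviation selection rule on a synthetic
# dependency score matrix (genes x cell lines). Not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(0.0, 1.0, size=(1000, 50))   # synthetic genes x cell lines

mu, sd = scores.mean(), scores.std()
threshold = 6 * sd
# flag a gene as "differentially required" here if any cell line deviates > 6 SD
differential = np.abs(scores - mu).max(axis=1) > threshold
print(int(differential.sum()), "genes flagged at the 6-SD threshold")
```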
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at University Hospital. The different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than that of EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691-0.783) and 0.742 (0.698-0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, had a greater net benefit at every probability threshold. According to ROC and DCA, the machine learning model is more accurate than EuroSCORE II in predicting mortality after elective cardiac surgery. These results support the use of machine learning methods in the field of medical prediction.
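For reference, DCA compares models by their net benefit, net benefit = TP/n - (FP/n) × pt/(1 - pt), evaluated over a range of probability thresholds pt. A hedged sketch with synthetic outcomes and predictions, not the study data:

```python
# Hedged sketch of the net-benefit quantity behind Decision Curve Analysis.
# Outcome rates and predictions below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.063, size=6520)            # ~6.3% in-hospital mortality
p = np.clip(0.063 + 0.3 * (y - 0.063) + rng.normal(0, 0.05, y.size), 0.001, 0.999)

def net_benefit(y, p, pt):
    # net benefit = TP/n - FP/n * pt/(1 - pt) at probability threshold pt
    treat = p >= pt
    tp = np.sum(treat & (y == 1)) / y.size
    fp = np.sum(treat & (y == 0)) / y.size
    return tp - fp * pt / (1.0 - pt)

for pt in (0.05, 0.10, 0.20):
    print(pt, round(net_benefit(y, p, pt), 4))
```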
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best device for modeling and forecasting the raw seismic data of the Hindu Kush region.
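A threshold autoregression of the kind compared above switches AR coefficients according to whether a transition variable is above or below a threshold. A minimal two-regime sketch with a synthetic series and an internal (lagged-value) threshold, not the authors' specification:

```python
# Illustrative two-regime threshold AR(1) (SETAR-style) versus a linear AR(1)
# one-step forecast. Process, threshold, and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n, r = 500, 0.0                                   # r: threshold on the lagged value
x = np.zeros(n)
for t in range(1, n):                             # simulate a two-regime AR(1)
    phi = 0.8 if x[t - 1] <= r else 0.2
    x[t] = phi * x[t - 1] + rng.normal(0, 1)

lag, cur = x[:-1], x[1:]

def ols_ar1(mask):
    # OLS intercept/slope of x[t] on x[t-1], restricted to the masked regime
    X = np.column_stack([np.ones(mask.sum()), lag[mask]])
    beta, *_ = np.linalg.lstsq(X, cur[mask], rcond=None)
    return beta

b_lin = ols_ar1(np.ones(lag.size, dtype=bool))    # linear AR(1), all observations
b_lo, b_hi = ols_ar1(lag <= r), ols_ar1(lag > r)  # regime-wise fits

x_last = x[-1]
f_linear = b_lin[0] + b_lin[1] * x_last
b_reg = b_lo if x_last <= r else b_hi
f_setar = b_reg[0] + b_reg[1] * x_last
print(round(f_linear, 3), round(f_setar, 3))      # competing one-step forecasts
```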
2017-01-01
Objective The aim of this study was to build a model to predict the risk of lymphovascular space invasion (LVSI) in women with endometrial cancer (EC). Methods From December 2010 to June 2013, 211 patients with EC undergoing surgery at Shanghai First Maternity and Infant Hospital were enrolled in this retrospective study. Those patients were divided into a positive LVSI group and a negative LVSI group. The clinical and pathological characteristics were compared between the two groups; logistic regression was used to explore risk factors associated with LVSI occurrence. The threshold values of significant factors were calculated to build a risk model and predict LVSI. Results Of the 211 patients with EC, 190 were negative for LVSI and 21 were positive. Tumor grade, depth of myometrial invasion, number of pelvic lymph nodes, and International Federation of Gynecology and Obstetrics (FIGO) stage (p<0.05) were associated with LVSI occurrence. However, cervical involvement and age (p>0.05) were not associated with LVSI. Receiver operating characteristic (ROC) curves revealed that the following threshold values were correlated with positive LVSI: 28.1 U/mL for CA19-9, 21.2 U/mL for CA125, 2.58 mg/dL for fibrinogen (Fn), 1.84 U/mL for carcinoembryonic antigen (CEA) and 6.35 × 10⁹/L for white blood cells (WBC). Logistic regression analysis indicated that CA125 ≥21.2 U/mL (p=0.032) and Fn ≥2.58 mg/dL (p=0.014) were significantly associated with LVSI. Conclusion Positive LVSI could be predicted by CA125 ≥21.2 U/mL and Fn ≥2.58 mg/dL in women with EC. This could help gynecologists better tailor surgical staging and adjuvant therapies. PMID:27894164
Bertoli, Simona; Laureati, Monica; Battezzati, Alberto; Bergamaschi, Valentina; Cereda, Emanuele; Spadafranca, Angela; Vignati, Laila; Pagliarini, Ella
2014-01-01
AIM: We investigated the relationship between taste sensitivity, nutritional status and metabolic syndrome and possible implications for a weight loss dietary program. METHODS: Sensitivity for bitter, sweet, salty and sour tastes was assessed by the three-Alternative-Forced-Choice method in 41 overweight (OW), 52 obese (OB) patients and 56 normal-weight matched controls. OW and OB were also assessed for body composition (by impedance), resting energy expenditure (by indirect calorimetry) and presence of metabolic syndrome (MetS), and were prescribed a weight loss diet. Compliance to the weight loss dietary program was defined as adherence to control visits and weight loss ≥ 5% in 3 mo. RESULTS: Sex- and age-adjusted multiple regression models revealed a significant association between body mass index (BMI) and both sour taste (P < 0.05) and global taste acuity score (GTAS) (P < 0.05), with lower sensitivity with increasing BMI. This trend in sensitivity for sour taste was also confirmed by the model refitted on the OW/OB group, while the association with GTAS was marginally significant (P = 0.06). MetS+ subjects presented higher thresholds for salty taste when compared to MetS- patients, while no significant difference was detected for the other tastes and GTAS. As assessed by a multiple regression model, the association between salty taste and MetS appeared to be independent of sex, age and BMI. Patients continuing the program (n = 37) did not show any difference in baseline taste sensitivity when compared to drop-outs (n = 29). Similarly, no significant difference was detected between patients reporting and not reporting a weight loss ≥ 5% of the initial body weight. No significant difference in taste sensitivity was detected even after dividing patients on the basis of nutritional (OW and OB) or metabolic status (MetS+ and MetS-). CONCLUSION: No clear cause-effect relationship links taste sensitivity with overweight or metabolic derangements. Taste threshold assessment is not useful in predicting the outcome of a diet-induced weight loss program. PMID:25317249
Finite mixture modeling for vehicle crash data with application to hotspot identification.
Park, Byung-Jung; Lord, Dominique; Lee, Chungwon
2014-10-01
The application of finite mixture regression models has recently gained interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding the optimal threshold value for identifying hotspots, another simulation analysis indicated a trade-off between the false discovery rate (increasing) and the false negative rate (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
De Carli, L; Gambino, R; Lubrano, C; Rosato, R; Bongiovanni, D; Lanfranco, F; Broglio, F; Ghigo, E; Bo, S
2017-11-28
Few and contradictory data suggest changes in taste perception in type 2 diabetes (T2DM), potentially altering food choices. We, therefore, analyzed taste recognition thresholds in T2DM patients with good metabolic control and free of conditions potentially impacting on taste, compared with age-, body mass index-, and sex-matched normoglycemic controls. An ascending-concentration method was used, employing sucrose (sweet), sodium chloride (salty), citric acid (sour), and quinine hydrochloride (bitter), diluted in increasing concentration solutions. The recognition threshold was the lowest concentration of correct taste identification. The recognition thresholds for the four tastes were higher in T2DM patients. In a multiple regression model, T2DM [β = 0.95; 95% CI 0.32-1.58; p = 0.004 (salty); β = 0.61; 0.19-1.03; p = 0.006 (sweet); β = 0.78; 0.15-1.40; p = 0.016 (sour); β = 0.74; 0.22-1.25; p = 0.006 (bitter)] and waist circumference [β = 0.05; 0.01-0.08; p = 0.012 (salty); β = 0.03; 0.01-0.05; p = 0.020 (sweet); β = 0.04; 0.01-0.08; p = 0.020 (sour); β = 0.04; 0.01-0.07; p = 0.007 (bitter)] were associated with the recognition thresholds. Age was associated with salty (β = 0.06; 0.01-0.12; p = 0.027) and BMI with sweet thresholds (β = 0.06; 0.01-0.11; p = 0.019). Taste recognition thresholds were higher in uncomplicated T2DM, and central obesity was significantly associated with this impairment. Hypogeusia may be an early sign of diabetic neuropathy and be implicated in the poor compliance of these patients to dietary recommendations.
Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Smits, Cas
2017-11-20
This study determined the effect of hearing loss and English-speaking competency on the South African English digits-in-noise hearing test to evaluate its suitability for use across native (N) and non-native (NN) speakers. A prospective cross-sectional cohort study of N and NN English adults with and without sensorineural hearing loss compared pure-tone air conduction thresholds to the speech reception threshold (SRT) recorded with the smartphone digits-in-noise hearing test. A rating scale was used for NN English listeners' self-reported competence in speaking English. This study consisted of 454 adult listeners (164 male, 290 female; range 16 to 90 years), of whom 337 listeners had a best ear four-frequency pure-tone average (4FPTA; 0.5, 1, 2, and 4 kHz) of ≤25 dB HL. A linear regression model identified three predictors of the digits-in-noise SRT, namely, 4FPTA, age, and self-reported English-speaking competence. The NN group with poor self-reported English-speaking competence (≤5/10) performed significantly (p < 0.01) poorer than the N and NN (≥6/10) groups on the digits-in-noise test. Screening characteristics of the test improved with separate cutoff values depending on English-speaking competence for the N and NN groups (≥6/10) and NN group alone (≤5/10). Logistic regression models, which include age in the analysis, showed a further improvement in sensitivity and specificity for both groups (area under the receiver operating characteristic curve, 0.962 and 0.903, respectively). Self-reported English-speaking competence had a significant influence on the SRT obtained with the smartphone digits-in-noise test. A logistic regression approach considering SRT, self-reported English-speaking competence, and age as predictors of best ear 4FPTA >25 dB HL showed that the test can be used as an accurate hearing screening tool for N and NN English speakers. The smartphone digits-in-noise test, therefore, allows testing in a multilingual population familiar with English digits using dynamic cutoff values that can be chosen according to self-reported English-speaking competence and age.
NASA Astrophysics Data System (ADS)
Brideau, J. M.; Ng, M.; Hoover, J. H.; Hale, R. L.; Thomas, B.; Vogel, R. M.; Northeast Consortium Hydrologic Synthesis Summer Institute, 2010--Biogeochemistry
2010-12-01
Title: Inventing Wastewater: The Social and Scientific Construction of Effluent in the Northeastern United States Authors: Jeffrey Brideau, Melissa Ng, Joseph Hoover, Rebecca Hale, Brian Thomas, and Richard Vogel Presented by: Jeffrey Brideau B.A., M.A., PhD Candidate, Department of History, University of Maryland Regulation of pollution is a prevalent part of contemporary American society. Scientists and policy makers have established acceptable effluent thresholds, with the ostensible goal of protecting human and stream health. However, this ubiquity of regulation is a recent phenomenon, and institutional mechanisms for effluent control were virtually non-existent in the early 20th century. Nonetheless, these same decades witnessed the emergence of nascent efforts at water pollution abatement. This project aims to explore social and scientific perceptions of wastewater, and begins with the simple premise that socio-cultural values underlie human decision-making in water management, and that wastewater is imbued with a matrix of human values that are continuously renegotiated. So what were the primary motivations for abatement efforts? Were they aesthetic and olfactory, or scientific concern for public and stream health? This paper proposes that there are social as well as scientific thresholds for pollutant loads. Collaborating with a team of interdisciplinary researchers, we have created and aggregated discrete data sets to model, using export coefficient and linear regression modeling techniques, historic pollutant loading in the Northeastern United States. Concurrently, we have drawn on historical narratives of agitation by abatement advocates, nuisance laws, regulatory regimes, and changing scientific understanding; contrasting the modeling results with these narratives allows this project to quantitatively determine where social thresholds lie in relation to their scientific counterparts. This project’s novelty lies in its use of existing narratives of wastewater and remediation efforts in tandem with the scientific quantification of pollutant loads in affected streams. In essence, the success of this project was predicated on the ability of the associated researchers to contribute their expertise, perform collaborative analysis, and, ultimately, produce a product that transcends traditional disciplinary boundaries. This paper represents one facet of that larger project. Determining the social thresholds of pollution loading, and where they converge with or diverge from their scientific counterparts, provides insight into why, when, and where various pollutants became offensive.
Casanova, I; Diaz, A; Pinto, S; de Carvalho, M
2014-04-01
The technique of threshold tracking to test axonal excitability gives information about nodal and internodal ion channel function. We aimed to investigate the variability of motor excitability measurements in healthy controls, taking into account age, gender, body mass index (BMI) and small changes in skin temperature. We examined the left median nerve of 47 healthy controls using the automated threshold-tracking program QTRAC. Statistical multiple regression analysis was applied to test the relationship between nerve excitability measurements and subject variables. Comparisons between genders did not find any significant difference (P>0.2 for all comparisons). Multiple regression analysis showed that motor amplitude decreases with age and temperature, stimulus-response slope decreases with age and BMI, and accommodation half-time decreases with age and temperature. The changes related to demographic features in TRONDE protocol parameters are small and less important than in conventional nerve conduction studies. Nonetheless, our results underscore the relevance of careful temperature control, and indicate that interpretation of stimulus-response slope and accommodation half-time should take into account age and BMI. In contrast, gender is not of major relevance to axonal threshold findings in motor nerves. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Laboratory test variables useful for distinguishing upper from lower gastrointestinal bleeding.
Tomizawa, Minoru; Shinozaki, Fuminobu; Hasegawa, Rumiko; Shirai, Yoshinori; Motoyoshi, Yasufumi; Sugiyama, Takao; Yamamoto, Shigenori; Ishige, Naoki
2015-05-28
To distinguish upper from lower gastrointestinal (GI) bleeding, patient records between April 2011 and March 2014 were analyzed retrospectively (3296 upper endoscopies and 1520 colonoscopies). Seventy-six patients had upper GI bleeding (Upper group) and 65 had lower GI bleeding (Lower group). Variables were compared between the groups using one-way analysis of variance. Logistic regression was performed to identify variables significantly associated with the diagnosis of upper vs lower GI bleeding. Receiver-operator characteristic (ROC) analysis was performed to determine the threshold value that could distinguish upper from lower GI bleeding. Hemoglobin (P = 0.023), total protein (P = 0.0002), and lactate dehydrogenase (P = 0.009) were significantly lower in the Upper group than in the Lower group. Blood urea nitrogen (BUN) was higher in the Upper group than in the Lower group (P = 0.0065). Logistic regression analysis revealed that BUN was most strongly associated with the diagnosis of upper vs lower GI bleeding. ROC analysis revealed a threshold BUN value of 21.0 mg/dL, with a specificity of 93.0%. The threshold BUN value for distinguishing upper from lower GI bleeding was 21.0 mg/dL.
Laboratory test variables useful for distinguishing upper from lower gastrointestinal bleeding
Tomizawa, Minoru; Shinozaki, Fuminobu; Hasegawa, Rumiko; Shirai, Yoshinori; Motoyoshi, Yasufumi; Sugiyama, Takao; Yamamoto, Shigenori; Ishige, Naoki
2015-01-01
AIM: To distinguish upper from lower gastrointestinal (GI) bleeding. METHODS: Patient records between April 2011 and March 2014 were analyzed retrospectively (3296 upper endoscopies and 1520 colonoscopies). Seventy-six patients had upper GI bleeding (Upper group) and 65 had lower GI bleeding (Lower group). Variables were compared between the groups using one-way analysis of variance. Logistic regression was performed to identify variables significantly associated with the diagnosis of upper vs lower GI bleeding. Receiver-operator characteristic (ROC) analysis was performed to determine the threshold value that could distinguish upper from lower GI bleeding. RESULTS: Hemoglobin (P = 0.023), total protein (P = 0.0002), and lactate dehydrogenase (P = 0.009) were significantly lower in the Upper group than in the Lower group. Blood urea nitrogen (BUN) was higher in the Upper group than in the Lower group (P = 0.0065). Logistic regression analysis revealed that BUN was most strongly associated with the diagnosis of upper vs lower GI bleeding. ROC analysis revealed a threshold BUN value of 21.0 mg/dL, with a specificity of 93.0%. CONCLUSION: The threshold BUN value for distinguishing upper from lower GI bleeding was 21.0 mg/dL. PMID:26034359
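The ROC step in both records above amounts to scanning candidate BUN cutoffs and reading off sensitivity and specificity. A minimal sketch with synthetic group values, not the study data:

```python
# Sketch of threshold selection by scanning cutoffs on a continuous marker.
# Group means/spreads below are invented placeholders, not the study data.
import numpy as np

rng = np.random.default_rng(4)
bun_upper = rng.normal(30, 8, 76)     # hypothetical BUN, upper-GI-bleed group
bun_lower = rng.normal(14, 5, 65)     # hypothetical BUN, lower-GI-bleed group

for cut in (15.0, 18.0, 21.0, 24.0):
    sens = np.mean(bun_upper >= cut)  # upper bleeds correctly flagged
    spec = np.mean(bun_lower < cut)   # lower bleeds correctly excluded
    print(f"cutoff {cut:4.1f} mg/dL  sens {sens:.2f}  spec {spec:.2f}")
```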
Khoshnevis, Sepideh; Craik, Natalie K; Matthew Brothers, R; Diller, Kenneth R
2016-03-01
The goal of this study was to investigate the persistence of cold-induced vasoconstriction following cessation of active skin-surface cooling. This study demonstrates a hysteresis effect that develops between skin temperature and blood perfusion during the cooling and subsequent rewarming period. An Arctic Ice cryotherapy unit (CTU) was applied to the knee region of six healthy subjects for 60 min of active cooling followed by 120 min of passive rewarming. Multiple laser Doppler flowmetry perfusion probes were used to measure skin blood flow (expressed as cutaneous vascular conductance (CVC)). Skin surface cooling produced a significant reduction in CVC (P < 0.001) that persisted throughout the duration of the rewarming period. In addition, there was a hysteresis effect between CVC and skin temperature during the cooling and subsequent rewarming cycle (P < 0.01). Mixed model regression (MMR) showed a significant difference in the slopes of the CVC-skin temperature curves during cooling and rewarming (P < 0.001). Piecewise regression was used to investigate the temperature thresholds for acceleration of CVC during the cooling and rewarming periods. The two thresholds were shown to be significantly different (P = 0.003). The results show that localized cooling causes significant vasoconstriction that continues beyond the active cooling period despite skin temperatures returning toward baseline values. The significant and persistent reduction in skin perfusion may contribute to nonfreezing cold injury (NFCI) associated with cryotherapy.
Khoshnevis, Sepideh; Craik, Natalie K.; Matthew Brothers, R.; Diller, Kenneth R.
2016-01-01
The goal of this study was to investigate the persistence of cold-induced vasoconstriction following cessation of active skin-surface cooling. This study demonstrates a hysteresis effect that develops between skin temperature and blood perfusion during the cooling and subsequent rewarming period. An Arctic Ice cryotherapy unit (CTU) was applied to the knee region of six healthy subjects for 60 min of active cooling followed by 120 min of passive rewarming. Multiple laser Doppler flowmetry perfusion probes were used to measure skin blood flow (expressed as cutaneous vascular conductance (CVC)). Skin surface cooling produced a significant reduction in CVC (P < 0.001) that persisted throughout the duration of the rewarming period. In addition, there was a hysteresis effect between CVC and skin temperature during the cooling and subsequent rewarming cycle (P < 0.01). Mixed model regression (MMR) showed a significant difference in the slopes of the CVC–skin temperature curves during cooling and rewarming (P < 0.001). Piecewise regression was used to investigate the temperature thresholds for acceleration of CVC during the cooling and rewarming periods. The two thresholds were shown to be significantly different (P = 0.003). The results show that localized cooling causes significant vasoconstriction that continues beyond the active cooling period despite skin temperatures returning toward baseline values. The significant and persistent reduction in skin perfusion may contribute to nonfreezing cold injury (NFCI) associated with cryotherapy. PMID:26632263
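The piecewise regression used in both records above can be sketched as a two-segment (broken-stick) fit whose breakpoint is chosen to minimize the residual sum of squares. An illustrative version with synthetic data, not the study's measurements:

```python
# Minimal broken-stick regression: grid-search the breakpoint that minimizes
# residual sum of squares. Temperatures and responses are synthetic.
import numpy as np

rng = np.random.default_rng(5)
temp = np.linspace(18, 34, 120)
true_bp = 27.0
cvc = np.where(temp < true_bp,
               0.2 * (temp - 18),
               0.2 * (true_bp - 18) + 1.0 * (temp - true_bp))
cvc = cvc + rng.normal(0, 0.3, temp.size)

def rss_at(bp):
    # hinge term max(temp - bp, 0) lets the slope change at the breakpoint
    X = np.column_stack([np.ones_like(temp), temp, np.maximum(temp - bp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, cvc, rcond=None)
    resid = cvc - X @ beta
    return np.sum(resid ** 2)

grid = np.linspace(20, 32, 241)
best = grid[np.argmin([rss_at(b) for b in grid])]
print(f"estimated threshold ~ {best:.1f} C (true breakpoint {true_bp})")
```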
Feng, Zhaozhong; Calatayud, Vicent; Zhu, Jianguo; Kobayashi, Kazuhiko
2018-04-01
Five winter wheat cultivars were exposed to ambient (A-O3) and elevated (E-O3, 1.5 × ambient) O3 in a fully open-air fumigation system in China. Ozone exposure- and flux-based response relationships were established for seven physiological variables related to photosynthesis. The performance of the fitting of the regressions in terms of R2 increased when second-order regressions were used instead of first-order ones, suggesting that the effects of O3 were more pronounced towards the last developmental stages of the wheat. The most robust indicators were those related to CO2 assimilation, Rubisco activity and RuBP regeneration capacity (Asat, Jmax and Vcmax), and chlorophyll content (Chl). Flux-based metrics (PODy, Phytotoxic O3 Dose over a threshold of y nmol O3 m-2 s-1) predicted the responses to O3 slightly better than exposure metrics (AOTX, Accumulated O3 exposure over an hourly Threshold of X ppb) for most of the variables. The best performance was observed for the metrics POD1 (Asat, Jmax and Vcmax) and POD3 (Chl). For this crop, the proposed response functions could be used for O3 risk assessment based on physiological effects and also to include the influence of O3 on yield or other variables in models with a photosynthetic component. Copyright © 2017 Elsevier B.V. All rights reserved.
Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Miklavcic, Damijan
2016-01-01
Background Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Material and methods Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated against in vivo experimental data consisting of MRI scans of rat brains following electroporation treatment. Results Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r2 = 0.79; p < 0.008, r2 = 0.91; p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete cell death and no cell death thresholds was relatively narrow (between 0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. Conclusions The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup. PMID:27069447
Threshold effect under nonlinear limitation of the intensity of high-power light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, S A; Podgaetskii, V M; Gerasimenko, A Yu
2015-04-30
A model is proposed to describe the properties of limiters of high-power laser radiation, which takes into account the threshold character of nonlinear interaction of radiation with the working medium of the limiter. The generally accepted non-threshold model is a particular case of the threshold model in which the threshold radiation intensity is zero. Experimental z-scan data are used to determine the nonlinear optical characteristics of media with carbon nanotubes, polymethine and pyran dyes, zinc selenide, porphyrin-graphene and fullerene-graphene. A threshold effect of nonlinear interaction between laser radiation and some of the investigated working media of limiters is revealed. It is shown that the threshold model describes the experimental z-scan data more adequately.
Peripheral neuropathy in military aircraft maintenance workers in Australia.
Guest, Maya; Attia, John R; D'este, Catherine A; Boggess, May M; Brown, Anthony M; Gibson, Richard E; Tavener, Meredith A; Ross, James; Gardner, Ian; Harrex, Warren
2011-04-01
This study aimed to examine possible persisting peripheral neuropathy in a group who undertook fuel tank repairs on F-111 aircraft, relative to two contemporaneous comparison groups. Vibration perception threshold (VPT) was tested using biothesiometry in 614 exposed personnel and two unexposed groups (513 technical trades and 403 nontrades). Regression modeling was used to examine associations, adjusting for possible confounders. We observed that 26% of participants had chronic persistent increased VPT in the great toe. Statistically significantly higher VPT of the great toe was observed in the exposed group relative to the comparison groups; however, the effect was small, about one quarter the magnitude of the effect of diabetes. Age, height, and diabetes were all significant and strong predictors in most models. This study highlights chronic persisting peripheral neuropathy in a population of aircraft maintainers.
Temperature dependence of needle and shoot elongation before bud break in Scots pine.
Schiestl-Aalto, Pauliina; Mäkelä, Annikki
2017-03-01
Knowledge about the early part of needle growth is deficient compared with what is known about shoot growth. It is, however, important to understand the growth of different organs to be able to estimate changes in whole-tree growth in a changing environment. The onset of growth in spring has been observed to occur above a certain threshold value of momentary temperature or temperature accumulation. We measured the length growth of Scots pine (Pinus sylvestris L.) needles and shoots from March until bud break over 3 years. We first compared needle growth with concurrent shoot growth. Then, we quantified the threshold temperature of growth (i) with a logistic regression based on momentary temperatures and (ii) with the temperature sum accumulation method. Temperature sum was calculated with combinations of various time steps, starting dates and threshold temperature values. Needle elongation began almost concurrently with shoot elongation and proceeded linearly in relation to shoot growth until bud break. When studying the threshold temperature for growth, the method based on the momentary temperature effect on growth onset yielded ambiguous results in our conditions. The best fit of an exponential regression between needle growth or length and temperature sum was obtained with threshold temperatures of -1 to +2 °C, with several combinations of starting date and time step. We conclude that although growth onset is a momentary event, the process leading to it is a long-term continuum in which past temperatures have to be accounted for, rather than a sudden switch from quiescence to active growth. Further, our results indicate that lower temperatures than the commonly used +5 °C are sufficient for actuating the growth process. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
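The temperature sum accumulation method referred to above accumulates degrees above a threshold from a chosen starting date. A minimal sketch, with an illustrative threshold, starting date, and synthetic temperatures:

```python
# Hedged sketch of temperature-sum (degree-day) accumulation. Threshold,
# start index, and the temperature series are illustrative only.
import numpy as np

rng = np.random.default_rng(6)
daily_mean_t = rng.normal(3, 4, size=90)      # synthetic spring temperatures, deg C

def temperature_sum(temps, threshold=0.0, start=0):
    # cumulative sum of (T - threshold) over days with T above the threshold
    excess = np.maximum(temps[start:] - threshold, 0.0)
    return np.cumsum(excess)

for thr in (-1.0, 2.0, 5.0):                  # the abstract found -1..+2 C fits best
    print(thr, round(temperature_sum(daily_mean_t, thr)[-1], 1))
```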
McCambridge, Jim; Kypri, Kypros; McElduff, Patrick
2014-02-01
Reductions in drinking among individuals randomised to control groups in brief alcohol intervention trials are common and suggest that asking study participants about their drinking may itself cause them to reduce their consumption. We sought to test the hypothesis that the statistical artefact regression to the mean (RTM) explains part of the reduction in such studies. A total of 967 participants in a cohort study of alcohol consumption in New Zealand provided data at baseline and again six months later. We used graphical methods and applied thresholds of 8, 12, 16 and 20 in AUDIT scores to explore RTM. There was a negative association between baseline AUDIT scores and change in AUDIT scores from baseline to six months, which, in the absence of bias and confounding, is RTM. Students with lower baseline scores tended to have higher follow-up scores and, conversely, those with higher baseline scores tended to have lower follow-up scores. When a threshold score of 8 was used to select a subgroup, the observed mean change was approximately half of that observed without a threshold. The application of higher thresholds produced greater apparent reductions in alcohol consumption. Part of the reduction seen in the control groups of brief alcohol intervention trials is likely to be due to RTM, and the amount of change is likely to be greater as the threshold for entry to the trial increases. Quantification of RTM warrants further study and should assist understanding of assessment and other research participation effects. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
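The mechanism described above is easy to reproduce in simulation: with imperfectly correlated baseline and follow-up scores, subgroups selected at higher baseline cutoffs show larger apparent reductions even with no intervention at all. A sketch with illustrative parameters, not the study data:

```python
# Small simulation of regression to the mean under threshold-based selection.
# Mean, SD, and correlation are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n, mu, sd, rho = 967, 10.0, 5.0, 0.7
base = rng.normal(mu, sd, n)
follow = mu + rho * (base - mu) + rng.normal(0, sd * np.sqrt(1 - rho**2), n)

print("no threshold:", round(np.mean(follow - base), 2))   # ~0 by construction
for cut in (8, 12, 16, 20):
    sel = base >= cut                                      # higher cutoffs select
    print(f"baseline >= {cut}: mean change {np.mean(follow[sel] - base[sel]):.2f}")
```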
Granovsky, Yelena; Matre, Dagfinn; Sokolik, Alexander; Lorenz, Jürgen; Casey, Kenneth L
2005-06-01
The human palm has a lower heat detection threshold and a higher heat pain threshold than hairy skin. Neurophysiological studies of monkeys suggest that glabrous skin has fewer low-threshold heat nociceptors (AMH type 2) than hairy skin. Accordingly, we used a temperature-controlled contact heat evoked potential (CHEP) stimulator to excite selectively heat receptors with C fibers or Aδ-innervated AMH type 2 receptors in humans. On the dorsal hand, 51 °C stimulation produced painful pinprick sensations and 41 °C stimuli evoked warmth. On the glabrous thenar, 41 °C stimulation produced mild warmth and 51 °C evoked strong but painless heat sensations. We used CHEP responses to estimate the conduction velocities (CV) of peripheral fibers mediating these sensations. On hairy skin, 41 °C stimuli evoked an ultra-late potential (mean (SD) N wave latency: 455 (118) ms) mediated by C fibers (CV by regression analysis: 1.28 m/s, N=15), whereas 51 °C stimuli evoked a late potential (N latency: 267 (33) ms) mediated by Aδ afferents (CV by within-subject analysis: 12.9 m/s, N=6). In contrast, thenar responses to 41 and 51 °C were mediated by C fibers (average N wave latencies 485 (100) and 433 (73) ms, respectively; CVs 0.95-1.35 m/s by regression analysis, N=15; average CV = 1.7 (0.41) m/s calculated from distal glabrous and proximal hairy skin stimulation, N=6). The exploratory range of the human and monkey palm is enhanced by the abundance of low-threshold, C-innervated heat receptors and the paucity of low-threshold AMH type 2 heat nociceptors.
Ni, Hsing-Chang; Gau, Susan Shur-Fen
2015-02-01
The extent to which parenting styles can influence secondary psychiatric symptoms among young adults with ADHD symptoms is unknown. This issue was investigated in a sample of 2284 incoming college students (male, 50.6%), who completed standardized questionnaires about adult ADHD symptoms, other DSM-IV symptoms, and their parents' parenting styles before the students were 16 years old. Among them, 2.8% and 22.8% were classified as having ADHD symptoms and sub-threshold ADHD symptoms, respectively. Logistic regression was used to compare the comorbid rates of psychiatric symptoms among the ADHD, sub-threshold ADHD and non-ADHD groups, while multiple linear regressions were used to examine the moderating role of gender and parenting styles over the associations between ADHD and other psychiatric symptoms. Both ADHD groups were significantly more likely than other incoming students to have other DSM-IV symptoms. Parental care was negatively associated, and parental overprotection/control positively associated, with these psychiatric symptoms. Furthermore, significant interactions were found of parenting style with both threshold and sub-threshold ADHD in predicting wide-ranging comorbid symptoms. Specifically, the associations of ADHD with some externalizing symptoms were inversely related to the level of paternal care, while associations of ADHD and sub-threshold ADHD with wide-ranging comorbid symptoms were positively related to the level of maternal and paternal overprotection/control. These results suggest that parenting styles may modify the effects of ADHD on the risk of a wide range of temporally secondary DSM-IV symptoms among incoming college students, although other causal dynamics might be at work that need to be investigated in longitudinal studies. Copyright © 2014 Elsevier Inc. All rights reserved.
The role of NT-proBNP in explaining the variance in anaerobic threshold and VE/VCO2 slope.
Athanasopoulos, Leonidas V; Dritsas, Athanasios; Doll, Helen A; Cokkinos, Dennis V
2011-01-01
We investigated whether anaerobic threshold (AT) and ventilatory efficiency (minute ventilation/carbon dioxide production slope, VE/VCO2 slope), both significantly associated with mortality, can be predicted by questionnaire scores and/or other laboratory measurements. Anaerobic threshold and VE/VCO2 slope, plasma N-terminal pro-brain natriuretic peptide (NT-proBNP), and the echocardiographic markers left ventricular ejection fraction (LVEF) and left atrial (LA) diameter were measured in 62 patients with heart failure (HF), who also completed the Minnesota Living with Heart Failure Questionnaire (MLHF) and the Specific Activity Questionnaire (SAQ). Linear regression models, adjusting for age and gender, were fitted. While the etiology of HF, SAQ score, MLHF score, LVEF, LA diameter, and logNT-proBNP were each significantly predictive of both AT and VE/VCO2 slope on stepwise multiple linear regression, only SAQ score (P < .001) and logNT-proBNP (P = .001) were significantly predictive of AT, explaining 56% of the variability (adjusted R2 = 0.525), while logNT-proBNP (P < .001) and etiology of HF (P = .003) were significantly predictive of VE/VCO2 slope, explaining 49% of the variability (adjusted R2 = 0.45). The area under the ROC curve for NT-proBNP to identify patients with a VE/VCO2 slope greater than 34 and an AT less than 11 mL·kg-1·min-1 was 0.797 (P < .001) and 0.712 (P = .044), respectively. A plasma concentration greater than 429.5 pg/mL (sensitivity: 78%; specificity: 70%) and greater than 674.5 pg/mL (sensitivity: 77.8%; specificity: 65%) identified a VE/VCO2 slope greater than 34 and an AT lower than 11 mL·kg-1·min-1, respectively. NT-proBNP is independently related to both AT and VE/VCO2 slope. Specific Activity Questionnaire score is independently related only to AT, and the etiology of HF only to VE/VCO2 slope.
Kang, Deqiang; Hua, Haiqin; Peng, Nan; Zhao, Jing; Wang, Zhiqun
2017-04-01
We aim to improve the image quality of coronary computed tomography angiography (CCTA) by using a personalized weight- and height-dependent scan trigger threshold. This study was divided into two parts. First, we performed and analyzed 100 scheduled CCTA scans, which were acquired using a body mass index-dependent Smart Prep sequence (trigger threshold ranging from 80 HU to 250 HU based on body mass index). By identifying the cases with high image quality, a linear regression equation was established relating the Smart Prep threshold to height and body weight. From this, a quick-reference table was generated for the weight- and height-dependent Smart Prep threshold in CCTA scans. Second, to evaluate the effectiveness of the new individualized threshold method, an additional 100 consecutive patients were divided into two groups: an individualized group (n = 50) with the weight- and height-dependent threshold and a control group (n = 50) with the conventional constant threshold of 150 HU. Image quality was compared between the two groups by measuring the enhancement in the coronary arteries, aorta, left and right ventricles, and inferior vena cava. Image quality scores from visual inspection were also compared between the two groups. The regression equation relating the Smart Prep threshold (K, HU) to height (H, cm) and body weight (BW, kg) was K = 0.811 × H + 1.917 × BW - 99.341. Compared to the control group, the individualized group showed an average overall increase of 12.30% in enhancement in the left main coronary artery, 12.94% in the proximal right coronary artery, and 10.6% in the aorta. Correspondingly, the contrast-to-noise ratios increased by 26.03%, 27.08%, and 23.17%, respectively, and the contrast between the aorta and left ventricle increased by 633.1%. Meanwhile, the individualized group showed an average overall decrease of 22.7% in enhancement of the right ventricle and 32.7% in the inferior vena cava. There was no significant difference in image noise between the two groups (P > .05). On visual inspection, the image quality score of the individualized group was higher than that of the control group. Using a personalized weight- and height-dependent Smart Prep threshold to adjust scan trigger time can significantly improve the image quality of CCTA. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
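The reported equation can be applied directly; a transcription of K = 0.811 × H + 1.917 × BW - 99.341 with arbitrary example inputs:

```python
# Direct transcription of the reported regression for the trigger threshold:
# K in HU, H in cm, BW in kg. The example patient below is arbitrary.
def smartprep_threshold(height_cm: float, weight_kg: float) -> float:
    return 0.811 * height_cm + 1.917 * weight_kg - 99.341

print(round(smartprep_threshold(170, 70), 1), "HU")  # ~172.7 HU for 170 cm, 70 kg
```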
Tarabichi, Majd; Shohat, Noam; Kheir, Michael M; Adelani, Muyibat; Brigati, David; Kearns, Sean M; Patel, Pankajkumar; Clohisy, John C; Higuera, Carlos A; Levine, Brett R; Schwarzkopf, Ran; Parvizi, Javad; Jiranek, William A
2017-09-01
Although HbA1c is commonly used for assessing glycemic control before surgery, there is no consensus regarding its role and the appropriate threshold in predicting adverse outcomes. This study was designed to evaluate the potential link between HbA1c and subsequent periprosthetic joint infection (PJI), with the intention of determining the optimal threshold for HbA1c. This is a multicenter retrospective study, which identified 1645 diabetic patients who underwent primary total joint arthroplasty (1004 knees and 641 hips) between 2001 and 2015. All patients had an HbA1c measured within 3 months of surgery. The primary outcome of interest was PJI at 1 year based on the Musculoskeletal Infection Society criteria. Secondary outcomes included orthopedic (wound and mechanical complications) and nonorthopedic complications (sepsis, thromboembolism, genitourinary, and cardiovascular complications). A regression analysis was performed to determine the independent influence of HbA1c for predicting PJI. Overall, 22 cases of PJI occurred at 1 year (1.3%). An HbA1c threshold of 7.7% best discriminated patients who developed PJI (area under the curve, 0.65; 95% confidence interval, 0.51-0.78). Using this threshold, PJI rates increased from 0.8% (11 of 1441) to 5.4% (11 of 204). In the stepwise logistic regression analysis, higher HbA1c remained the only variable significantly associated with PJI (odds ratio, 1.5; confidence interval, 1.2-2.0; P = .0001). There was no association between high HbA1c levels and the other complications assessed. High HbA1c levels are associated with an increased risk for PJI. A threshold of 7.7% seems to be more indicative of infection risk than the commonly used 7% and should perhaps be the goal in preoperative patient optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
Golas, Sara Bersche; Shibahara, Takuma; Agboola, Stephen; Otaki, Hiroko; Sato, Jumpei; Nakae, Tatsuya; Hisamitsu, Toru; Kojima, Go; Felsted, Jennifer; Kakarmath, Sujay; Kvedar, Joseph; Jethwani, Kamal
2018-06-22
Heart failure is one of the leading causes of hospitalization in the United States. Advances in big data solutions allow for storage, management, and mining of large volumes of structured and semi-structured data, such as complex healthcare data. Applying these advances to complex healthcare data has led to the development of risk prediction models to help identify patients who would benefit most from disease management programs, in an effort to reduce readmissions and healthcare cost, but the results of these efforts have been varied. The primary aim of this study was to develop a 30-day readmission risk prediction model for heart failure patients discharged from a hospital admission. We used longitudinal electronic medical record data of heart failure patients admitted within a large healthcare system. Feature vectors included structured demographic, utilization, and clinical data, as well as selected extracts of unstructured data from clinician-authored notes. The risk prediction model was developed using deep unified networks (DUNs), a new mesh-like network structure of deep learning designed to avoid over-fitting. The model was validated with 10-fold cross-validation and results were compared to models based on logistic regression, gradient boosting, and maxout networks. Overall model performance was assessed using the concordance statistic. We also selected a discrimination threshold based on maximum projected cost saving to the Partners Healthcare system. Data from 11,510 patients with 27,334 admissions and 6369 30-day readmissions were used to train the model. After data processing, the final model included 3512 variables. The DUNs model had the best performance after 10-fold cross-validation. AUCs for the prediction models were 0.664 ± 0.015, 0.650 ± 0.011, 0.695 ± 0.016 and 0.705 ± 0.015 for logistic regression, gradient boosting, maxout networks, and DUNs, respectively. The DUNs model had an accuracy of 76.4% at the classification threshold that corresponded with maximum cost saving to the hospital. Deep learning techniques performed better than other traditional techniques in developing this EMR-based prediction model for 30-day readmissions in heart failure patients. Such models can be used to identify heart failure patients with impending hospitalization, enabling care teams to target interventions at their most high-risk patients and improve overall clinical outcomes.
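Choosing a discrimination threshold by maximum projected cost saving, as described above, can be sketched by assuming a fixed saving per correctly flagged readmission and a fixed intervention cost per flagged patient; both figures below are invented, as are the predictions:

```python
# Hedged sketch of threshold selection by projected cost saving. The cost
# figures, readmission rate, and predictions are invented for illustration.
import numpy as np

rng = np.random.default_rng(8)
y = rng.binomial(1, 0.23, size=5000)                       # ~23% readmission rate
p = np.clip(0.23 + 0.25 * (y - 0.23) + rng.normal(0, 0.08, y.size), 0.01, 0.99)

SAVING_PER_TP = 9000.0   # assumed saving per averted readmission (USD)
COST_PER_FLAG = 1200.0   # assumed intervention cost per flagged patient (USD)

def projected_saving(th):
    flag = p >= th
    tp = np.sum(flag & (y == 1))                           # readmissions caught
    return SAVING_PER_TP * tp - COST_PER_FLAG * flag.sum()

grid = np.linspace(0.05, 0.95, 91)
best = grid[np.argmax([projected_saving(t) for t in grid])]
print(f"saving-maximizing threshold ~ {best:.2f}")
```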
Risk appraisal of passing zones on two-lane rural highways and policy applications.
Mwesige, Godfrey; Farah, Haneen; Koutsopoulos, Haris N
2016-05-01
Passing on two-lane rural highways is associated with the risk of head-on collision resulting from unsafe completion of passing maneuvers in the opposing traffic lane. In this paper, we explore the use of time-to-collision (TTC) as a surrogate safety measure of the risk associated with passing maneuvers. Logistic regression models were developed to predict the probability of ending a passing maneuver with a TTC below a 2-s or 3-s threshold, with the time gap from initiation of the maneuver to arrival of the opposing vehicle (effective accepted gap) and the passing duration as explanatory variables. The data used for model estimation were collected using stationary tripod-mounted camcorders at 19 passing zones in Uganda. Results showed that passing maneuvers completed with a TTC of less than 3 s are unsafe and often involved sudden speed reduction, flashing headlights, and lateral shift to the shoulders. Model sensitivity analysis was conducted for observed passing durations involving passenger cars or short trucks (2-3 axles) and long trucks (4-7 axles) as the passed vehicles, for the 3-s TTC threshold. Three risk levels were proposed based on the probability of completing a passing maneuver with a TTC of less than 3 s for a range of opposing-direction traffic volumes. Applications of the results for safety improvements of two-lane rural highways are also discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
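The models above take the logistic form p = 1/(1 + exp(-(b0 + b1·gap + b2·duration))). A sketch with hypothetical coefficients, chosen only so that longer gaps reduce risk and longer durations raise it (not the estimated values):

```python
# Illustrative logistic model of the probability that a passing maneuver ends
# with TTC below the 3-s threshold. Coefficients are hypothetical placeholders.
import math

def p_unsafe(gap_s: float, duration_s: float,
             b0: float = 4.0, b_gap: float = -0.5, b_dur: float = 0.4) -> float:
    z = b0 + b_gap * gap_s + b_dur * duration_s
    return 1.0 / (1.0 + math.exp(-z))

print(round(p_unsafe(gap_s=20.0, duration_s=8.0), 3))   # large gap -> low risk
print(round(p_unsafe(gap_s=8.0, duration_s=12.0), 3))   # tight gap -> high risk
```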
Qi, Cong; Gu, Yiyang; Sun, Qing; Gu, Hongliang; Xu, Bo; Gu, Qing; Xiao, Jing; Lian, Yulong
2017-05-01
We assessed the risk of liver injury following low doses of N,N-dimethylformamide (DMF) below the threshold limit value (20 mg/m³) among leather industry workers and comparison groups. A cohort of 429 workers from a leather factory and 466 non-exposed subjects in China were followed for 4 years. Poisson regression and piecewise linear regression were used to examine the relationship between DMF and liver injury. Workers exposed to a cumulative dose of DMF were significantly more likely than non-exposed workers to develop liver injury. A nonlinear relationship between DMF and liver injury was observed, and the threshold of the cumulative DMF dose for liver injury was 7.30 (mg/m³)·years. The findings indicate the importance of taking action to reduce DMF occupational exposure limits for promoting worker health.
A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)
2017-10-01
TECHNICAL REPORT 3079, October 2017, Networks Division. From the executive summary: this report describes the methodology developed to improve radar threshold modeling using the Advanced Propagation Model (APM), covering the phased array radar configuration and the modeling methodology.
Assessing models of arsenic occurrence in drinking water from bedrock aquifers in New Hampshire
Andy, Caroline; Fahnestock, Maria Florencia; Lombard, Melissa; Hayes, Laura; Bryce, Julie; Ayotte, Joseph
2017-01-01
Three existing multivariate logistic regression models were assessed using new data to evaluate the capacity of the models to correctly predict the probability of groundwater arsenic concentrations exceeding the threshold values of 1, 5, and 10 micrograms per liter (µg/L) in New Hampshire, USA. A recently released testing dataset includes arsenic concentrations from groundwater samples collected in 2004–2005 from a mix of 367 public-supply and private domestic wells. The use of this dataset to test three existing logistic regression models demonstrated enhanced overall predictive accuracy for the 5 and 10 μg/L models. Overall accuracies of 54.8, 76.3, and 86.4 percent were reported for the 1, 5, and 10 μg/L models, respectively. The state was divided by counties into northwest and southeast regions. Regional differences in accuracy were identified; models had an average accuracy of 83.1 percent for the counties in the northwest and 63.7 percent in the southeast. This is most likely due to high model specificity in the northwest and regional differences in arsenic occurrence. Though these models have limitations, they allow for arsenic hazard assessment across the region. The introduction of well-type (public or private), well depth, and casing length as explanatory variables may be appropriate measures to improve model performance. Our findings indicate that the original models generalize to the testing dataset, and should continue to serve as an important vehicle of preventative public health that may be applied to other groundwater contaminants in New Hampshire.
Effects of fatigue on motor unit firing rate versus recruitment threshold relationships.
Stock, Matt S; Beck, Travis W; Defreitas, Jason M
2012-01-01
The purpose of this study was to examine the influence of fatigue on the average firing rate versus recruitment threshold relationships for the vastus lateralis (VL) and vastus medialis. Nineteen subjects performed ten maximum voluntary contractions of the dominant leg extensors. Before and after this fatiguing protocol, the subjects performed a trapezoid isometric muscle action of the leg extensors, and bipolar surface electromyographic signals were detected from both muscles. These signals were then decomposed into individual motor unit action potential trains. For each subject and muscle, the relationship between average firing rate and recruitment threshold was examined using linear regression analyses. For the VL, the linear slope coefficients and y-intercepts for these relationships increased and decreased, respectively, after fatigue. For both muscles, many of the motor units decreased their firing rates. With fatigue, recruitment of higher threshold motor units resulted in an increase in slope for the VL. Copyright © 2011 Wiley Periodicals, Inc.
Stefani, Luciana Cadore; Muller, Suzana; Torres, Iraci L. S.; Razzolini, Bruna; Rozisky, Joanna R.; Fregni, Felipe; Markus, Regina; Caumo, Wolnei
2013-01-01
Background Previous studies have suggested that melatonin may produce antinociception through peripheral and central mechanisms. Based on the preliminary encouraging results of studies of the effects of melatonin on pain modulation, the important question has been raised of whether there is a dose relationship in humans of melatonin on pain modulation. Objective The objective was to evaluate the analgesic dose response of the effects of melatonin on pressure and heat pain threshold and tolerance and the sedative effects. Methods Sixty-one healthy subjects aged 19 to 47 y were randomized into one of four groups: placebo, 0.05 mg/kg sublingual melatonin, 0.15 mg/kg sublingual melatonin or 0.25 mg/kg sublingual melatonin. We determine the pressure pain threshold (PPT) and the pressure pain tolerance (PPTo). Quantitative sensory testing (QST) was used to measure the heat pain threshold (HPT) and the heat pain tolerance (HPTo). Sedation was assessed with a visual analogue scale and bispectral analysis. Results Serum plasma melatonin levels were directly proportional to the melatonin doses given to each subject. We observed a significant effect associated with dose group. Post hoc analysis indicated significant differences between the placebo vs. the intermediate (0.15 mg/kg) and the highest (0.25 mg/kg) melatonin doses for all pain threshold and sedation level tests. A linear regression model indicated a significant association between the serum melatonin concentrations and changes in pain threshold and pain tolerance (R2 = 0.492 for HPT, R2 = 0.538 for PPT, R2 = 0.558 for HPTo and R2 = 0.584 for PPTo). Conclusions The present data indicate that sublingual melatonin exerts well-defined dose-dependent antinociceptive activity. There is a correlation between the plasma melatonin drug concentration and acute changes in the pain threshold. These results provide additional support for the investigation of melatonin as an analgesic agent. Brazilian Clinical Trials Registry (ReBec): (U1111-1123-5109). IRB: Research Ethics Committee at the Hospital de Clínicas de Porto Alegre. PMID:25947930
Effects of Frequency Drift on the Quantification of Gamma-Aminobutyric Acid Using MEGA-PRESS
NASA Astrophysics Data System (ADS)
Tsai, Shang-Yueh; Fang, Chun-Hao; Wu, Thai-Yu; Lin, Yi-Ru
2016-04-01
The MEGA-PRESS method is the most common method used to measure γ-aminobutyric acid (GABA) in the brain at 3T. It has been shown that the underestimation of the GABA signal due to B0 drift up to 1.22 Hz/min can be reduced by post-frequency alignment. In this study, we show that the underestimation of GABA can still occur even with post frequency alignment when the B0 drift is up to 3.93 Hz/min. The underestimation can be reduced by applying a frequency shift threshold. A total of 23 subjects were scanned twice to assess the short-term reproducibility, and 14 of them were scanned again after 2-8 weeks to evaluate the long-term reproducibility. A linear regression analysis of the quantified GABA versus the frequency shift showed a negative correlation (P < 0.01). Underestimation of the GABA signal was found. When a frequency shift threshold of 0.125 ppm (15.5 Hz or 1.79 Hz/min) was applied, the linear regression showed no statistically significant difference (P > 0.05). Therefore, a frequency shift threshold at 0.125 ppm (15.5 Hz) can be used to reduce underestimation during GABA quantification. For data with a B0 drift up to 3.93 Hz/min, the coefficients of variance of short-term and long-term reproducibility for the GABA quantification were less than 10% when the frequency threshold was applied.
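The ppm-to-Hz conversion behind the quoted threshold is simple arithmetic: at 3 T the proton frequency is roughly 123.2 MHz (the exact value is scanner-dependent), so 1 ppm corresponds to about 123.2 Hz. A quick check under that assumption:

```python
# Quick ppm-to-Hz check for the quoted frequency-shift threshold. The 3 T
# proton frequency below is an assumed, scanner-dependent approximation.
F0_MHZ = 123.2                        # assumed 3 T proton Larmor frequency

def ppm_to_hz(ppm: float, f0_mhz: float = F0_MHZ) -> float:
    return ppm * f0_mhz               # 1 ppm = f0 (in MHz), expressed in Hz

print(round(ppm_to_hz(0.125), 1), "Hz")   # ~15.4 Hz, close to the ~15.5 Hz quoted
```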
Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.
Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang
2005-09-01
Quantitative sensory testing has become a common approach to evaluate thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 ± 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between these two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. On multivariate regression analysis, age was consistently and significantly correlated with the sensory thresholds of all tested modalities measured by both algorithms, more so than the other factors, including gender, body height, body weight, and body mass index. When thresholds were plotted against age, slopes differed between sensory thresholds of the hand and those of the foot: for each sensory modality, slopes were steeper for the foot than for the hand. Sensory thresholds of both test sites measured by Level were highly correlated with those measured by Limits, and thresholds measured by Limits were higher than those measured by Level; the differences in thresholds between the two algorithms were also correlated with age. Thresholds of the foot were higher than those of the hand for each sensory modality, and this hand-foot difference (measured with both Level and Limits) was also correlated with age. These findings suggest that age is the most significant factor in determining sensory thresholds compared with the other factors of gender and anthropometric parameters, and this provides a foundation for investigating the neurobiologic significance of aging on the processing of sensory stimuli.
Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.
Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo
2018-05-01
This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. Quantitative analysis provided incremental prognostic value to visual assessment and established risk factors, potentially representing an important step forward in the translation of quantitative CMR perfusion analysis to the clinical setting. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Hanauer, D.A.
2014-01-01
Background Patient no-shows in outpatient delivery systems remain problematic. The negative impacts include underutilized medical resources, increased healthcare costs, decreased access to care, and reduced clinic efficiency and provider productivity. Objective To develop an evidence-based predictive model for patient no-shows, and thus improve overbooking approaches in outpatient settings to reduce the negative impact of no-shows. Methods Ten years of retrospective data were extracted from a scheduling system and an electronic health record system from a single general pediatrics clinic, consisting of 7,988 distinct patients and 104,799 visits along with variables regarding appointment characteristics, patient demographics, and insurance information. Descriptive statistics were used to explore the impact of variables on show or no-show status. Logistic regression was used to develop a no-show predictive model, which was then used to construct an algorithm to determine the no-show threshold that calculates a predicted show/no-show status. This approach aims to overbook an appointment where a scheduled patient is predicted to be a no-show. The approach was compared with two commonly used overbooking approaches to demonstrate its effectiveness in terms of patient wait time, physician idle time, overtime, and total cost. Results From the training dataset, the optimal error rate is 10.6% with a no-show threshold of 0.74. This threshold successfully predicts the validation dataset with an error rate of 13.9%. The proposed overbooking approach demonstrated a significant reduction of at least 6% on patient waiting, 27% on overtime, and 3% on total costs compared to other common flat-overbooking methods. Conclusions This paper demonstrates an alternative way to accommodate overbooking, accounting for the prediction of an individual patient's show/no-show status. The predictive no-show model leads to a dynamic overbooking policy that could improve patient waiting, overtime, and total costs in a clinic day while maintaining a full scheduling capacity. PMID:25298821
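The core of the method is a fitted logistic model plus a cutoff (0.74 in the abstract) that converts predicted no-show probabilities into overbooking decisions. A minimal sketch with scikit-learn, using invented features and simulated labels rather than the clinic's data:

```python
# Fit a no-show logistic model, then flag slots whose predicted no-show
# probability meets a cutoff. Features and labels are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))          # e.g., lead time, age, prior no-shows
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1000) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
p_no_show = model.predict_proba(X)[:, 1]

NO_SHOW_THRESHOLD = 0.74                # cutoff reported in the abstract
overbook_slot = p_no_show >= NO_SHOW_THRESHOLD
print(f"slots flagged for overbooking: {overbook_slot.sum()}")
```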
White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia
2015-01-01
Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women's Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and the Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since the last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491
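The Youden index step can be reproduced directly from an ROC curve: the optimal cutpoint maximizes sensitivity + specificity - 1, which equals tpr - fpr. A sketch with simulated triglyceride values, not Women's Health Study data:

```python
# Choose a diagnostic cutpoint by maximizing the Youden index along the ROC
# curve. Event labels and triglyceride values are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
events = rng.integers(0, 2, size=2000)
trig = np.where(events == 1,
                rng.normal(190, 60, 2000),   # cases: higher on average
                rng.normal(140, 50, 2000))   # non-cases

fpr, tpr, cutpoints = roc_curve(events, trig)
youden = tpr - fpr                           # sensitivity + specificity - 1
best = cutpoints[np.argmax(youden)]
print(f"optimal triglyceride cutpoint ~ {best:.0f} mg/dL")
```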
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, and no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methods Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic (ROC) analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using ROC analysis. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for the measurement of both physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
Rueda, Marta; Moreno Saiz, Juan Carlos; Morales-Castilla, Ignacio; Albuquerque, Fabio S; Ferrero, Mila; Rodríguez, Miguel Á
2015-01-01
Ecological theory predicts that fragmentation aggravates the effects of habitat loss, yet empirical results show mixed evidence that often fails to support the theory, instead reinforcing the primary importance of habitat loss. Fragmentation hypotheses have received much attention due to their potential implications for biodiversity conservation; however, animal studies have traditionally been their main focus. Here we assess variation in species sensitivity to forest amount and fragmentation and evaluate whether fragmentation is related to extinction thresholds in forest understory herbs and ferns. Our expectation was that forest herbs would be more sensitive to fragmentation than ferns due to their lower dispersal capabilities. Using forest cover percentage and the proportion of this percentage occurring in the largest patch within UTM cells of 10-km resolution covering Peninsular Spain, we partitioned the effects of forest amount versus fragmentation and applied logistic regression to model occurrences of 16 species. For nine models showing robustness according to a set of quality criteria, we subsequently defined two empirical fragmentation scenarios, minimum and maximum, and quantified species' sensitivity to forest contraction with no fragmentation, and to fragmentation under constant forest cover. We finally assessed how the extinction threshold of each species (the habitat amount below which it cannot persist) varies under no and maximum fragmentation. Consistent with their preference for forest habitats, occurrence probabilities of all species decreased as forest cover contracted. On average, herbs did not show significant sensitivity to fragmentation, whereas ferns were favored. In line with theory, fragmentation yielded higher extinction thresholds for two species. For the remaining species, fragmentation had either positive or non-significant effects. We interpret these differences as reflecting species-specific traits and conclude that although forest amount is of primary importance for the persistence of understory plants, neglecting the impact of fragmentation can drive some species to local extinction.
Effects of Airgun Sounds on Bowhead Whale Calling Rates: Evidence for Two Behavioral Thresholds
Blackwell, Susanna B.; Nations, Christopher S.; McDonald, Trent L.; Thode, Aaron M.; Mathias, Delphine; Kim, Katherine H.; Greene, Charles R.; Macrander, A. Michael
2015-01-01
In proximity to seismic operations, bowhead whales (Balaena mysticetus) decrease their calling rates. Here, we investigate the transition from normal calling behavior to decreased calling and identify two threshold levels of received sound from airgun pulses at which calling behavior changes. Data were collected in August–October 2007–2010, during the westward autumn migration in the Alaskan Beaufort Sea. Up to 40 directional acoustic recorders (DASARs) were deployed at five sites offshore of the Alaskan North Slope. Using triangulation, whale calls localized within 2 km of each DASAR were identified and tallied every 10 minutes each season, so that the detected call rate could be interpreted as the actual call production rate. Moreover, airgun pulses were identified on each DASAR, analyzed, and a cumulative sound exposure level was computed for each 10-min period each season (CSEL10-min). A Poisson regression model was used to examine the relationship between the received CSEL10-min from airguns and the number of detected bowhead calls. Calling rates increased as soon as airgun pulses were detectable, compared to calling rates in the absence of airgun pulses. After the initial increase, calling rates leveled off at a received CSEL10-min of ~94 dB re 1 μPa2-s (the lower threshold). In contrast, once CSEL10-min exceeded ~127 dB re 1 μPa2-s (the upper threshold), whale calling rates began decreasing, and when CSEL10-min values were above ~160 dB re 1 μPa2-s, the whales were virtually silent. PMID:26039218
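The statistical core of this analysis is a Poisson regression of 10-minute call counts on received cumulative sound exposure level. A simplified sketch with statsmodels, using synthetic counts and ignoring the piecewise thresholds for brevity:

```python
# Poisson regression of detected call counts on received CSEL10-min.
# The generating process and values below are synthetic illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
csel = rng.uniform(80, 170, 500)                       # dB re 1 uPa^2-s
true_rate = np.exp(3.0 - 0.025 * np.clip(csel - 127, 0, None))
calls = rng.poisson(true_rate)                         # calls per 10 min

X = sm.add_constant(csel)
fit = sm.GLM(calls, X, family=sm.families.Poisson()).fit()
print(fit.params)    # intercept and slope on the log-rate scale
```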
Chang, Mun Young; Rah, Yoon Chan; Choi, Jun Jae; Woo, Shin Wook; Hwang, Yu-Jung; Eastwood, Hayden; O'Leary, Stephen J; Lee, Jun Ho
2017-08-01
When administered perioperatively, systemic dexamethasone reduces the hearing loss associated with cochlear implantation (CI) performed via the round window approach. The benefits of electroacoustic stimulation have led to interest in pharmacological interventions to preserve hearing after CI. Thirty guinea pigs were randomly divided into three experimental groups: a control group, a 3-day infusion group, and a 7-day infusion group. Dexamethasone was delivered via a mini-osmotic pump for either 3 or 7 days after CI via the round window. Pure tone-evoked auditory brainstem response (ABR) thresholds were monitored for a period of 12 weeks after CI. The cochleae were then collected for histology. At 4 and 12 weeks after CI, ABR threshold shifts were significantly reduced in both the 7-day and 3-day infusion groups compared with the control group. Furthermore, the 7-day infusion group had significantly reduced ABR threshold shifts compared with the 3-day infusion group. The total tissue response, including fibrosis and ossification, was significantly reduced in the 7-day infusion group compared with the control group. On multiple regression, the extent of fibrosis predicted hearing loss across most frequencies, while hair cell counts predicted ABR thresholds at 32 kHz. Hearing protection after systemic administration of steroids is more effective when continued for at least a week after CI. Similarly, this treatment approach was more effective in reducing the fibrosis that encapsulates the CI electrode. Reduced fibrosis seemed to be the most likely explanation for the hearing protection.
Kasztelan-Szczerbinska, Beata; Slomka, Maria; Celinski, Krzysztof; Szczerbinski, Mariusz
2013-01-01
Determination of risk factors relevant to 90-day prognosis in alcoholic hepatitis (AH). Comparison of the conventional prognostic models, such as Maddrey's modified discriminant function (mDF) and the Child-Pugh-Turcotte (CPT) score, with newer ones (the Glasgow Alcoholic Hepatitis Score [GAHS]; the Age, Bilirubin, INR, Creatinine [ABIC] score; the Model for End-Stage Liver Disease [MELD]; and MELD-Na) in the prediction of death. The clinical and laboratory variables obtained at admission were assessed. The areas under the curve (AUCs) and the best threshold values of the mDF, CPT, GAHS, ABIC, MELD, and MELD-Na scores were compared. Logistic regression was used to assess predictors of the 90-day outcome. One hundred sixteen patients fulfilled the inclusion criteria. Twenty (17.4%) patients died and one underwent orthotopic liver transplantation (OLT) within 90 days of follow-up. No statistically significant differences in the models' performances were found. Multivariate logistic regression identified the CPT score, an alkaline phosphatase (AP) level higher than 1.5 times the upper limit of normal (ULN), and corticosteroid (CS) nonresponse as independent predictors of mortality. The CPT score, AP > 1.5 ULN, and CS nonresponse had an independent impact on 90-day survival in AH. The accuracy of all studied scoring systems was comparable.
Validation of a temperature prediction model for heat deaths in undocumented border crossers.
Ruttan, Tim; Stolz, Uwe; Jackson-Vance, Sara; Parks, Bruce; Keim, Samuel M
2013-04-01
Heat exposure is a leading cause of death in undocumented border crossers along the Arizona-Mexico border. We performed a validation study of a weather prediction model that predicts the probability of heat-related deaths among undocumented border crossers. We analyzed a medical examiner registry cohort of undocumented border crosser heat-related deaths from January 1, 2002 to August 31, 2009 and used logistic regression to model the probability of one or more heat deaths on a given day using daily high temperature (DHT) as the predictor. At a critical threshold DHT of 40 °C, the probability of at least one heat death was 50%. The probability of a heat death along the Arizona-Mexico border for suspected undocumented border crossers is strongly associated with ambient temperature. These results can be used in prevention and response efforts to assess the daily risk of deaths among undocumented border crossers in the region.
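Because the model is a logistic regression with daily high temperature as the sole predictor, the 50% threshold falls where the linear predictor is zero, i.e., at DHT = -b0/b1. A sketch with simulated data; the coefficients and the 40 °C crossover are illustrative, not the study's estimates:

```python
# Recover the 50%-probability temperature threshold from a single-predictor
# logistic model. Daily temperatures and outcomes are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
dht = rng.uniform(25, 50, size=1500)              # daily high temperature, C
p = 1 / (1 + np.exp(-(dht - 40.0)))               # simulate a 50% point at 40 C
death_day = rng.binomial(1, p)                    # any heat death that day?

fit = LogisticRegression().fit(dht.reshape(-1, 1), death_day)
b0, b1 = fit.intercept_[0], fit.coef_[0, 0]
print(f"estimated 50% threshold: {-b0 / b1:.1f} C")
```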
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2012-01-01
With the development of computer technology and the increasing demand for mobile communications, next-generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next-generation wireless networks, and during the vertical handoff procedure the handoff decision is crucial for efficient mobility. Based on an autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current and previous received signal strength (RSS) samples, the proposed approach adopts an ARMA model to predict the next RSS; the predicted RSS then determines whether to trigger the link-layer event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
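A minimal version of the prediction step: fit an ARMA model to a window of RSS samples and compare the one-step-ahead forecast against a handoff threshold. The RSS trace, the ARMA(2,1) order, and the -68 dBm threshold are all assumptions for illustration:

```python
# One-step-ahead RSS forecast with an ARMA model; trigger the handoff
# decision when the forecast drops below a threshold. Data are simulated.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
rss_dbm = -60.0 - 0.05 * np.arange(200) + rng.normal(0, 1.0, 200)

model = ARIMA(rss_dbm, order=(2, 0, 1))       # ARMA(2,1), i.e., d = 0
fit = model.fit()
next_rss = fit.forecast(steps=1)[0]

HANDOFF_THRESHOLD_DBM = -68.0                 # assumed trigger level
if next_rss < HANDOFF_THRESHOLD_DBM:
    print(f"predicted RSS {next_rss:.1f} dBm: trigger link-layer event")
```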
Derouin, F.; Garin, Y. J.; Buffard, C.; Berthelot, F.; Petithory, J. C.
1994-01-01
A collaborative study conducted by the French National Agency for Quality Control in Parasitology (CNQP) and various manufacturers of ELISA kits, represented by the Association of Laboratory Reagent Manufacturers (SFRL), compared the toxoplasmosis IgG antibody titres obtained with different ELISA-IgG kits and determined the relationships between the titres obtained by these techniques and the titre defined in international units (IU). Fifty-one serum samples with toxoplasmosis antibody titres ranging from 0 to 900 IU were tested in two successive studies with 16 ELISA-IgG kits. For the negative sera, false-positive reactions were observed with one kit. For the positive sera, the titres observed in ELISA were generally higher than those expressed in IU. Above 250 IU, the very wide variability of the titres found with the different ELISA kits renders any comparative analysis impossible. For titres below 250 IU, the results are sufficiently homogeneous to permit the use of regression analysis to study how the results for each ELISA kit compare with the mean results for the other kits. The slope of the line of regression shows a tendency to over-titration or under-titration compared with the results of the other manufacturers; the ordinate at the origin reflects the positivity threshold of the reaction and can be used to assess the risk of a lack of sensitivity (high threshold) or of specificity (threshold too low). On the whole, the trends revealed for a given manufacturer are constant from one study to the other. Within this range of titres, regression analysis also reveals the general tendency of ELISA kits to overestimate the titres by comparison with immunofluorescence. PMID:8205645
Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics
Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong
2016-01-01
The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds, which belong to the state thresholds, are determined by the 'general separatrix' in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of the separatrix was assumed to be a function of both states and stimuli, and the previously assumed equation for threshold evolution over time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability. PMID:27546614
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
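The a:b model can be sketched directly: for each candidate threshold t, the infection probability is a constant a below t and b above t, each estimated by the observed infection fraction, and t is chosen to maximize the binomial log-likelihood. A toy profile-likelihood implementation on simulated titers, not the trial datasets:

```python
# Profile-likelihood fit of a two-probability (a:b) threshold model.
# Antibody titers and infection outcomes are simulated placeholders.
import numpy as np

rng = np.random.default_rng(6)
titer = rng.lognormal(3.0, 1.0, 400)
infected = rng.binomial(1, np.where(titer < 20, 0.4, 0.1))

def loglik(p, y):
    # Binomial log-likelihood for a constant infection probability p.
    p = min(max(p, 1e-9), 1 - 1e-9)
    return y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p)

best_t, best_ll = None, -np.inf
for t in np.unique(titer):                 # candidate thresholds
    lo, hi = infected[titer < t], infected[titer >= t]
    if len(lo) < 5 or len(hi) < 5:         # require data on both sides
        continue
    ll = loglik(lo.mean(), lo) + loglik(hi.mean(), hi)
    if ll > best_ll:
        best_t, best_ll = t, ll
print(f"estimated protective threshold ~ {best_t:.1f}")
```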
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods
NASA Astrophysics Data System (ADS)
Jakubowski, Jacek
2014-12-01
The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face, and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall that is greater than or equal to the threshold value of 10⁵ J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for a sequential, short-term prediction of mining-induced seismic activity.
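A minimal stand-in for the chosen model class: gradient-boosted trees classifying whether a day's summed tremor energy reaches the 10⁵ J threshold. The features, data, and train/validation split are invented for illustration:

```python
# Gradient-boosted trees for a binary high-energy-day label, with a
# time-ordered train/validation split. All data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))   # e.g., face advance, output, prior energy
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 1, 2000) > 1.2).astype(int)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X[:1400], y[:1400])                  # train on the earlier portion
print(f"validation accuracy: {clf.score(X[1400:], y[1400:]):.2f}")
```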
Revisiting the 'Low BirthWeight paradox' using a model-based definition.
Juárez, Sol; Ploubidis, George B; Clarke, Lynda
2014-01-01
Immigrant mothers in Spain have a lower risk of delivering Low BirthWeight (LBW) babies in comparison to Spaniards (the LBW paradox). This study aimed at revisiting this finding by applying a model-based threshold as an alternative to the conventional definition of LBW. Vital information data from Madrid were used (2005-2006). LBW was defined in two ways (less than 2500 g and Wilcox's proposal). Logistic and linear regression models were run. According to the common definition of LBW (less than 2500 g), there is evidence to support the LBW paradox in Spain. Nevertheless, when an alternative model-based definition of LBW is used, the paradox is only clearly present in mothers from the rest of Southern America, suggesting a possible methodological bias effect. In the future, any examination of the existence of the LBW paradox should incorporate model-based definitions of LBW in order to avoid methodological bias. Copyright © 2013 SESPAS. Published by Elsevier España. All rights reserved.
Predicting the dynamics of ascospore maturation of Venturia pirina based on environmental factors.
Rossi, V; Salinari, F; Pattori, E; Giosuè, S; Bugiani, R
2009-04-01
Airborne ascospores of Venturia pirina were trapped at two sites in northern Italy in 2002 to 2008. The cumulative proportion of ascospores trapped at each discharge was regressed against the physiological time. The best fit (R2 = 0.90, standard error of estimates [SEest] = 0.11) was obtained using a Gompertz equation and the degree-days (>0 °C) accumulated after the day on which the first ascospore of the season was trapped (biofix day), but only for the days with ≥0.2 mm rain or ≤4 hPa vapor pressure deficit (DDwet). This Italian model performed better than the models developed in Oregon, United States (R2 = 0.69, SEest = 0.16) or Victoria, Australia (R2 = 0.74, SEest = 0.18), which consider only the effect of temperature. When the Italian model was evaluated against data not used in its elaboration, it accurately predicted ascospore maturation (R2 = 0.92, SEest = 0.10). A logistic regression model was also developed to estimate the biofix for initiating the accumulation of degree-days (biofix model). The probability of the first ascospore discharge of the season increased as DDwet (calculated from 1 January) increased. Based on this model, there is low probability of the first ascospore discharge when DDwet ≤ 268.5 (P = 0.03) and high probability (P = 0.83) of discharge on the first day with >0.2 mm rain after such a DDwet threshold.
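The Italian model's form can be sketched as a two-parameter Gompertz curve fitted to cumulative ascospore proportions against accumulated wet degree-days. The parameterization, starting values, and data below are illustrative assumptions, not the paper's estimates:

```python
# Fit a Gompertz curve to cumulative ascospore proportion vs. DDwet and
# report a rough standard error of estimates. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(ddwet, b, k):
    # One common Gompertz parameterization (an assumption here).
    return np.exp(-b * np.exp(-k * ddwet))

ddwet = np.array([0, 50, 100, 150, 200, 300, 400, 500], dtype=float)
prop = np.array([0.01, 0.05, 0.15, 0.35, 0.60, 0.85, 0.95, 0.99])

(b, k), _ = curve_fit(gompertz, ddwet, prop, p0=(5.0, 0.01))
resid = prop - gompertz(ddwet, b, k)
print(f"b={b:.2f}, k={k:.4f}, SEest~{resid.std(ddof=2):.3f}")
```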
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
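The turning-point measurement can be sketched numerically: differentiate the membrane potential three times and take the peak of the third derivative on the rising phase as the threshold crossing. The synthetic waveform below stands in for recorded data:

```python
# Locate an AP turning point as the third-derivative peak of the membrane
# potential during the upstroke. The waveform is an idealized synthetic AP.
import numpy as np

dt = 0.01                                       # ms per sample
t = np.arange(0, 10, dt)
v = -65 + 100 / (1 + np.exp(-(t - 5) / 0.2))    # idealized upstroke (mV)

d3v = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)
upstroke = t < 5.0                              # restrict to the rising phase
idx = np.argmax(d3v[upstroke])
print(f"turning point at t={t[idx]:.2f} ms, V={v[idx]:.1f} mV")
```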
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
Bili, Eleni; Bili, Authors Eleni; Dampala, Kaliopi; Iakovou, Ioannis; Tsolakidis, Dimitrios; Giannakou, Anastasia; Tarlatzis, Basil C
2014-08-01
The aim of this study was to determine the performance of prostate-specific antigen (PSA) and ultrasound parameters, such as ovarian volume and outline, in the diagnosis of polycystic ovary syndrome (PCOS). This prospective, observational, case-control study included 43 women with PCOS and 40 controls. Between day 3 and 5 of the menstrual cycle, fasting serum samples were collected and transvaginal ultrasound was performed. The diagnostic performance of each parameter [total PSA (tPSA), total-to-free PSA ratio (tPSA:fPSA), ovarian volume, ovarian outline] was estimated by means of receiver operating characteristic (ROC) analysis, along with the area under the curve (AUC), threshold, sensitivity, and specificity, as well as positive (+) and negative (-) likelihood ratios (LRs). Multivariate logistic regression models, using ovarian volume and ovarian outline, were constructed. The tPSA and tPSA:fPSA ratio resulted in AUCs of 0.74 and 0.70, respectively, with moderate specificity/sensitivity and insufficient LR+/- values. In the multivariate logistic regression model, the combination of ovarian volume and outline had a sensitivity of 97.7% and a specificity of 97.5% in the diagnosis of PCOS, with +LR and -LR values of 39.1 and 0.02, respectively. In women with PCOS, tPSA and the tPSA:fPSA ratio have similar diagnostic performance. The use of a multivariate logistic regression model, incorporating ovarian volume and outline, offers very good diagnostic accuracy in distinguishing women with PCOS from controls. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
LaMotte, A.E.; Greene, E.A.
2007-01-01
Spatial relations between land use and groundwater quality in the watershed adjacent to Assateague Island National Seashore, Maryland and Virginia, USA, were analyzed by the use of two spatial models. One model used a logit analysis and the other was based on geostatistics. The models were developed and compared on the basis of existing concentrations of nitrate as nitrogen in samples from 529 domestic wells. The models were applied to produce spatial probability maps that show areas in the watershed where concentrations of nitrate in groundwater are likely to exceed a predetermined management threshold value. Maps of the watershed generated by logistic regression and probability kriging analysis showing where the probability of nitrate concentrations would exceed 3 mg/L (>0.50) compared favorably. Logistic regression was less dependent on the spatial distribution of sampled wells, and identified an additional high-probability area within the watershed that was missed by probability kriging. The spatial probability maps could be used to determine the natural or anthropogenic factors that best explain the occurrence and distribution of elevated concentrations of nitrate (or other constituents) in shallow groundwater. This information can be used by local land-use planners, ecologists, and managers to protect water supplies and identify land-use planning solutions and monitoring programs in vulnerable areas. © 2006 Springer-Verlag.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices. PMID:28594889
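The decision rule can be sketched as a one-sided two-sample t-test asking whether the comparison error |S-D| is statistically smaller than the margin to the safety limit |Threshold-S|. The values below are synthetic placeholders, and the test construction is a simplification of the ASME V&V 20-based procedure:

```python
# Threshold-based validation sketch: test whether |S - D| is smaller than
# |Threshold - S|. Shear-stress samples (Pa) are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
sim = rng.normal(60.0, 3.0, 12)           # CFD viscous shear stress samples
expt = sim + rng.normal(0.0, 4.0, 12)     # matched experimental values
HEMOLYSIS_THRESHOLD = 150.0               # assumed safety limit

comparison_error = np.abs(sim - expt)     # |S - D|
margin = np.abs(HEMOLYSIS_THRESHOLD - sim)  # |Threshold - S|
t_stat, p = stats.ttest_ind(comparison_error, margin, alternative="less")
print(f"t={t_stat:.2f}, p={p:.4f} (small p: validated relative to threshold)")
```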
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
Background The benefits of cardiac surgery are sometimes difficult to predict, and the decision to operate on a given individual is complex. Machine learning and decision curve analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and findings We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at a university hospital. The different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model, and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than that of EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision curve analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. Conclusions According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction. PMID:28060903
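Decision curve analysis reduces to computing, at each probability threshold pt, the net benefit TP/n - (FP/n) * pt/(1 - pt) of acting on the model's predictions. A compact sketch on simulated predictions, not the cohort's data:

```python
# Net-benefit calculation underlying a decision curve. Outcomes and
# predicted probabilities are simulated placeholders.
import numpy as np

rng = np.random.default_rng(9)
y = rng.binomial(1, 0.063, 5000)                   # ~6.3% mortality
p_hat = np.clip(0.063 + 0.2 * (y - 0.063) + rng.normal(0, 0.05, 5000), 0, 1)

def net_benefit(y_true, p_pred, pt):
    treat = p_pred >= pt                           # act when risk >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * pt / (1 - pt)

for pt in (0.05, 0.10, 0.20):
    print(f"pt={pt:.2f}: net benefit={net_benefit(y, p_hat, pt):+.4f}")
```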
Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O
2014-09-01
The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years; body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for the V̇O2 (step = 1.7 [0.6] L·min⁻¹ and ramp = 1.5 [0.4] L·min⁻¹) and heart rate (HR) (step = 133 [21] b·min⁻¹ and ramp = 124 [15] b·min⁻¹) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggested that the protocol selection, but not the regression-based algorithm, can affect the assessment of the V̇O2 and HR at the AT.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach to modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species, respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
Stapedotomy in osteogenesis imperfecta: a prospective study of 32 consecutive cases.
Vincent, Robert; Wegner, Inge; Stegeman, Inge; Grolman, Wilko
2014-12-01
To prospectively evaluate hearing outcomes in patients with osteogenesis imperfecta undergoing primary stapes surgery and to isolate prognostic factors for success. A nonrandomized, open, prospective case series. A tertiary referral center. Twenty-five consecutive patients who underwent 32 primary stapedotomies for osteogenesis imperfecta with evidence of stapes fixation and available postoperative pure-tone audiometry. Primary stapedotomy with vein graft interposition and reconstruction with a regular Teflon piston or bucket handle-type piston. Preoperative and postoperative audiometric evaluation using conventional 4-frequency (0.5, 1, 2, and 4 kHz) audiometry. Air-conduction thresholds, bone-conduction thresholds, and air-bone gap were measured. The overall audiometric results as well as the results of audiometric evaluation at 3 months and at least 1 year after surgery were used. Overall, postoperative air-bone gap closure to within 10 dB was achieved in 88% of cases. Mean (standard deviation) gain in air-conduction threshold was 22 (9.4) dB for the entire case series, and mean (standard deviation) air-bone gap closure was 22 (9.0) dB. Backward multivariate logistic regression showed that a model with preoperative air-bone gap and intraoperatively established incus length accurately predicts success after primary stapes surgery. Stapes surgery is a feasible and safe treatment option in patients with osteogenesis imperfecta. Success is associated with preoperative air-bone gap and intraoperatively established incus length.
Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-02-01
To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (range = 45-78 years) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to audiometric testing, evaluation measures included the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
McGee, John Christopher; Wilson, Eric; Barela, Haley; Blum, Sharon
2017-03-01
Air Liaison Officer Aptitude Assessment (AAA) attrition is often associated with a lack of candidate physical preparation. The Functional Movement Screen, Tactical Fitness Assessment, and fitness metrics were collected (n = 29 candidates) to determine what physical factors could predict a candidate's success in completing AAA. Between-group comparisons were made between candidates completing AAA versus those who did not (p < 0.05). Upper 50% thresholds were established for all variables with R2 < 0.8, and the data were converted to a binary form (0 = did not attain threshold, 1 = attained threshold). Odds ratios, pre-/post-test probabilities, and positive likelihood ratios were computed, and logistic regression was applied to explain model variance. The following variables provided the most predictive value for AAA completion: Pull-ups (p = 0.01), Sit-ups (p = 0.002), Relative Powerball Toss (p = 0.017), and the Pull-ups × Sit-ups interaction (p = 0.016). Minimum recommended guidelines for AAA screening are Pull-ups (10 maximum), Sit-ups (76/2 minutes), and a Relative Powerball Toss of 0.6980 ft × lb/BW. Associated benefits could include higher graduation rates and cost savings from reduced temporary duty and injury care for nonselected candidates. Recommended guidelines should be validated in future class cycles. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Johannesen, Peter T.; Pérez-González, Patricia; Kalluri, Sridhar; Blanco, José L.
2016-01-01
The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in an R2TM, (c) temporal processing was strongly related to intelligibility in an R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers. PMID:27604779
Modelling the regulatory system for diabetes mellitus with a threshold window
NASA Astrophysics Data System (ADS)
Yang, Jin; Tang, Sanyi; Cheke, Robert A.
2015-05-01
Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
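As a toy illustration of the single-threshold case, the following sketch simulates a piecewise glucose-insulin system in which exogenous insulin infusion switches on only when glucose exceeds a threshold. The model form and all parameter values are hypothetical, chosen only to show the Filippov-type switching structure, not the equations analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_th = 7.0        # glucose threshold (mmol/L) that switches therapy on
a, b = 0.5, 0.3   # glucose production / insulin-mediated uptake rates
c, d = 0.4, 0.6   # insulin secretion / degradation rates
u_on = 2.0        # exogenous insulin infusion applied above the threshold

def rhs(t, y):
    G, I = y
    u = u_on if G > G_th else 0.0   # piecewise control: inject only above G_th
    dG = a - b * G * I              # glucose dynamics
    dI = c * G - d * I + u          # insulin dynamics plus therapy term
    return [dG, dI]

sol = solve_ivp(rhs, (0.0, 50.0), [9.0, 0.5], max_step=0.01)
print("final glucose level: %.2f" % sol.y[0, -1])
```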
A generalized methodology for identification of threshold for HRU delineation in SWAT model
NASA Astrophysics Data System (ADS)
M, J.; Sudheer, K.; Chaubey, I.; Raj, C.
2016-12-01
The Soil and Water Assessment Tool (SWAT) is a comprehensive distributed hydrological model widely used to support management decisions. The simulation accuracy of a distributed hydrological model depends on the mechanism used to subdivide the watershed. SWAT subdivides the watershed into sub-basins, and the sub-basins into small computing units known as hydrologic response units (HRUs). HRUs are delineated based on unique combinations of land use, soil type, and slope within the sub-basins, and they are not spatially contiguous. Computations in SWAT are performed at the HRU level and then aggregated to the sub-basin outlet, which is routed through the stream system. Generally, the HRUs are delineated using threshold percentages of land use, soil, and slope, specified by the modeler to decrease the computation time of the model. The thresholds constrain the minimum area for constructing an HRU. In the current HRU delineation practice in SWAT, any land use, soil, or slope class within a sub-basin that covers less than the predefined threshold is absorbed into the dominant land use, soil, or slope class, which introduces ambiguity into the process simulations through inappropriate representation of the area. The loss of information due to variation in the threshold values depends strongly on the purpose of the study. This research therefore studies the effects of HRU delineation threshold values on SWAT sediment simulations and suggests guidelines for selecting appropriate threshold values with respect to sediment simulation accuracy. A preliminary study was conducted on an Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considers both computational time and simulation accuracy. The methodology can be adopted for identifying an appropriate threshold for SWAT simulation in any watershed with a single model run at a zero-zero threshold.
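As a simplified illustration of the HRU thresholding step described above (not SWAT's actual implementation), the sketch below drops land-use classes whose areal fraction falls under the threshold and reapportions their area proportionally over the remaining classes.

```python
import numpy as np

def apply_hru_threshold(fractions, threshold):
    """Drop land-use (or soil/slope) classes whose areal fraction within a
    sub-basin falls below `threshold`, and redistribute the removed area
    proportionally over the remaining classes, mimicking SWAT's HRU
    delineation step (a simplified illustration only)."""
    fractions = np.asarray(fractions, dtype=float)
    keep = fractions >= threshold
    if not keep.any():                  # degenerate case: keep dominant class
        keep = fractions == fractions.max()
    trimmed = np.where(keep, fractions, 0.0)
    return trimmed / trimmed.sum()      # renormalise to the sub-basin area

# Sub-basin with 60% forest, 25% pasture, 10% urban, 5% water; 10% threshold
print(apply_hru_threshold([0.60, 0.25, 0.10, 0.05], 0.10))
# -> approximately [0.632, 0.263, 0.105, 0.0]; the 5% water class is absorbed
```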
Patterns of threshold evolution in polyphenic insects under different developmental models.
Tomkins, Joseph L; Moczek, Armin P
2009-02-01
Two hypotheses address the evolution of polyphenic traits in insects. Under the developmental reprogramming model, individuals exceeding a threshold follow a different developmental pathway from individuals below the threshold. This decoupling is thought to free selection to independently hone alternative morphologies, increasing phenotypic plasticity and morphological diversity. Under the alternative model, extreme positive allometry explains the existence of alternative phenotypes, and divergent phenotypes are developmentally coupled by a continuous reaction norm, such that selection on either morph acts on both. We test the hypothesis that continuous reaction norm polyphenisms evolve through changes in the allometric parameters of even the smallest males with minimal trait expression, whereas threshold polyphenisms evolve independently of the allometric parameters of individuals below the threshold. We compare two polyphenic species: the dung beetle Onthophagus taurus, whose allometry has been modeled both as a threshold polyphenism and as a continuous reaction norm, and the earwig Forficula auricularia, whose allometry is best modeled with a discontinuous threshold. We find that across populations of both species, variation in forceps or horn allometry in minor males is correlated with the population's threshold. These findings suggest that, regardless of developmental mode, alternative morphs do not evolve independently of one another.
Wei, Wenjia; Heinze, Stefanie; Gerstner, Doris G; Walser, Sandra M; Twardella, Dorothee; Reiter, Christina; Weilnhammer, Veronika; Perez-Alvarez, Carmelo; Steffens, Thomas; Herr, Caroline E W
2017-01-01
Studies investigating the effect of leisure noise on extended high-frequency hearing are scarce and their results inconsistent. The aim of this study was to investigate whether extended high-frequency hearing threshold shift is related to audiometric notch, and whether total leisure noise exposure is associated with extended high-frequency hearing threshold shift. A questionnaire from the Ohrkan cohort study was used to collect information on demographics and leisure time activities. Conventional and extended high-frequency audiometry was performed. We used logistic regression to relate extended high-frequency hearing threshold shift to audiometric notch, and total leisure noise exposure to extended high-frequency hearing threshold shift. Potential confounders (sex, school type, and firecrackers) were included. Data from 278 participants (aged 18-23 years, 53.2% female) were analyzed. Associations between hearing threshold shift at 10, 11.2, 12.5, and 14 kHz and audiometric notch were observed, with a higher prevalence of threshold shift at these four frequencies than of the notch. However, we found no associations between total leisure noise exposure and hearing threshold shift at any extended high frequency. This exploratory analysis suggests that while extended high-frequency hearing threshold shifts are not related to total leisure noise exposure, they are strongly associated with audiometric notch. This leads us to the hypothesis that extended high-frequency threshold shift might be indicative of the appearance of an audiometric notch at a later time point, which can be investigated in future follow-ups of the Ohrkan cohort.
Ryason, Adam; Sankaranarayanan, Ganesh; Butler, Kathryn L; DeMoya, Marc; De, Suvranu
2016-08-01
Emergency Cricothyroidotomy (CCT) is a surgical procedure performed to secure a patient's airway. This high-stakes but seldom performed procedure is an ideal candidate for a virtual reality simulator to enhance physician training. For the first time, this study characterizes the force and torque profile of the cricothyroidotomy procedure, to guide development of a virtual reality CCT simulator for use in medical training. We analyze the upper force and torque thresholds experienced at the human-scalpel interface. We then group individual surgical cuts based on style of cut and cut medium, and perform a regression analysis to create two models that allow us to predict the style of cut performed and the cut medium.
Variable-Threshold Threshold Elements,
A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied, variable-threshold threshold elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the
Lazzeri, Massimo; Haese, Alexander; Abrate, Alberto; de la Taille, Alexandre; Redorta, Joan Palou; McNicholas, Thomas; Lughezzani, Giovanni; Lista, Giuliana; Larcher, Alessandro; Bini, Vittorio; Cestari, Andrea; Buffi, Nicolòmaria; Graefen, Markus; Bosset, Olivier; Le Corvoisier, Philippe; Breda, Alberto; de la Torre, Pablo; Fowler, Linda; Roux, Jacques; Guazzoni, Giorgio
2013-08-01
To test the sensitivity, specificity and accuracy of serum prostate-specific antigen isoform [-2]proPSA (p2PSA), %p2PSA and the prostate health index (PHI), in men with a family history of prostate cancer (PCa) undergoing prostate biopsy for suspected PCa. To evaluate the potential reduction in unnecessary biopsies and the characteristics of potentially missed cases of PCa that would result from using serum p2PSA, %p2PSA and PHI. The analysis consisted of a nested case-control study from the PRO-PSA Multicentric European Study, the PROMEtheuS project. All patients had a first-degree relative (father, brother, son) with PCa. Multivariable logistic regression models were complemented by predictive accuracy analysis and decision-curve analysis. Of the 1026 patients included in the PROMEtheuS cohort, 158 (15.4%) had a first-degree relative with PCa. p2PSA, %p2PSA and PHI values were significantly higher (P < 0.001), and free/total PSA (%fPSA) values significantly lower (P < 0.001) in the 71 patients with PCa (44.9%) than in patients without PCa. Univariable accuracy analysis showed %p2PSA (area under the receiver-operating characteristic curve [AUC]: 0.733) and PHI (AUC: 0.733) to be the most accurate predictors of PCa at biopsy, significantly outperforming total PSA ([tPSA] AUC: 0.549), free PSA ([fPSA] AUC: 0.489) and %fPSA (AUC: 0.600) (P ≤ 0.001). For %p2PSA a threshold of 1.66 was found to have the best balance between sensitivity and specificity (70.4 and 70.1%; 95% confidence interval [CI]: 58.4-80.7 and 59.4-79.5 respectively). A PHI threshold of 40 was found to have the best balance between sensitivity and specificity (64.8 and 71.3%, respectively; 95% CI 52.5-75.8 and 60.6-80.5). At 90% sensitivity, the thresholds for %p2PSA and PHI were 1.20 and 25.5, with a specificity of 37.9 and 25.5%, respectively. At a %p2PSA threshold of 1.20, a total of 39 (24.8%) biopsies could have been avoided, but two cancers with a Gleason score (GS) of 7 would have been missed. At a PHI threshold of 25.5 a total of 27 (17.2%) biopsies could have been avoided and two (3.8%) cancers with a GS of 7 would have been missed. In multivariable logistic regression models, %p2PSA and PHI achieved independent predictor status and significantly increased the accuracy of multivariable models including PSA and prostate volume by 8.7 and 10%, respectively (P ≤ 0.001). p2PSA, %p2PSA and PHI were directly correlated with Gleason score (ρ: 0.247, P = 0.038; ρ: 0.366, P = 0.002; ρ: 0.464, P < 0.001, respectively). %p2PSA and PHI are more accurate than tPSA, fPSA and %fPSA in predicting PCa in men with a family history of PCa. Consideration of %p2PSA and PHI results in the avoidance of several unnecessary biopsies. p2PSA, %p2PSA and PHI correlate with cancer aggressiveness. © 2013 BJU International.
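The "best balance between sensitivity and specificity" reported above can be found by scanning candidate thresholds and maximizing Youden's J; the following sketch demonstrates the idea on synthetic biomarker values (the data and resulting threshold are illustrative, not the PROMEtheuS results).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic biomarker values: cases (cancer) shifted upward vs. controls
phi_controls = rng.normal(30, 10, 200)
phi_cases = rng.normal(45, 12, 100)
values = np.concatenate([phi_controls, phi_cases])
labels = np.concatenate([np.zeros(200), np.ones(100)])

best = None
for t in np.unique(values):
    pred = values >= t                   # positive test above the threshold
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    j = sens + spec - 1.0                # Youden's J balances sens and spec
    if best is None or j > best[0]:
        best = (j, t, sens, spec)

print("threshold %.1f: sensitivity %.1f%%, specificity %.1f%%"
      % (best[1], 100 * best[2], 100 * best[3]))
```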
Rafiq, Muhammad T; Aziz, Rukhsanda; Yang, Xiaoe; Xiao, Wendan; Rafiq, Muhammad K; Ali, Basharat; Li, Tingqiang
2014-05-01
Food chain contamination by cadmium (Cd) is globally a serious health concern resulting in chronic abnormalities. Rice is a major staple food for the majority of the world's population; it is therefore imperative to understand the relationship between the bioavailability of Cd in soils and its accumulation in rice grain. The objectives of this study were to establish environmental quality standards for seven differently textured soils based on human dietary toxicity, total Cd content in soil, and the bioavailable portion of Cd in soil. Cadmium concentrations in polished rice grain were best related to total Cd content in Mollisols and Udic Ferrisols, with threshold levels of 0.77 and 0.32 mg kg(-1), respectively. Contrastingly, Mehlich-3-extractable Cd thresholds were more suitable for Calcaric Regosols, Stagnic Anthrosols, Ustic Cambosols, Typic Haplustalfs and Periudic Argosols, with threshold values of 0.36, 0.22, 0.17, 0.08 and 0.03 mg kg(-1), respectively. Stepwise multiple regression analysis indicated that phytoavailability of Cd to rice grain was strongly correlated with Mehlich-3-extractable Cd and soil pH. The empirical model developed in this study explains the combined effects of soil properties and extractable soil Cd content on the phytoavailability of Cd to polished rice grain. This study indicates that accumulation of Cd in rice is influenced greatly by soil type, which should be considered in assessments of soil safety with respect to Cd contamination of rice. This investigation concluded that selecting the proper soil type for food crop production can help avoid Cd toxicity in the daily diet. © 2013 Elsevier Inc. All rights reserved.
Strotmeyer, Elsa S; de Rekeneire, Nathalie; Schwartz, Ann V; Resnick, Helaine E; Goodpaster, Bret H; Faulkner, Kimberly A; Shorr, Ronald I; Vinik, Aaron I; Harris, Tamara B; Newman, Anne B
2009-11-01
To determine whether sensory and motor nerve function is associated cross-sectionally with quadriceps or ankle dorsiflexion strength in an older community-based population. Cross-sectional analyses within a longitudinal cohort study. Two U.S. clinical sites. Two thousand fifty-nine Health, Aging and Body Composition Study (Health ABC) participants (49.5% male, 36.7% black, aged 73-82) in 2000/01. Quadriceps and ankle strength were measured using an isokinetic dynamometer. Sensory and motor peripheral nerve function in the legs and feet was assessed using 10-g and 1.4-g monofilaments, vibration threshold, and peroneal motor nerve conduction amplitude and velocity. Monofilament insensitivity, poorest vibration threshold quartile (>60 μ), and poorest motor nerve conduction amplitude quartile (<1.7 mV) were associated with 11%, 7%, and 8% lower quadriceps strength (all P<.01), respectively, than in the best peripheral nerve function categories in adjusted linear regression models. Monofilament insensitivity and lowest amplitude quartile were both associated with 17% lower ankle strength (P<.01). Multivariate analyses were adjusted for demographic characteristics, diabetes mellitus, body composition, lifestyle factors, and chronic health conditions and included all peripheral nerve measures in the same model. Monofilament insensitivity (beta=-7.19), vibration threshold (beta=-0.097), and motor nerve conduction amplitude (beta=2.01) each contributed independently to lower quadriceps strength (all P<.01). Monofilament insensitivity (beta=-5.29) and amplitude (beta=1.17) each contributed independently to lower ankle strength (all P<.01). Neither diabetes mellitus status nor lean mass explained the associations between peripheral nerve function and strength. Reduced sensory and motor peripheral nerve function is related to poorer lower extremity strength in older adults, suggesting a mechanism for the relationship with lower extremity disability.
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
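A minimal sketch of the volume measurement described above: voxels at or above a fixed 40% of a reference SUV are counted as tumor, and the volume difference between two reconstructions (%VD) is computed. The voxel size, images, and SUVpeak value are all hypothetical.

```python
import numpy as np

VOXEL_ML = 0.016   # hypothetical voxel volume in mL

def tumor_volume(suv, ref_suv, rel_threshold=0.40):
    """Volume of voxels at or above rel_threshold * ref_suv, where ref_suv
    is SUVmax or SUVpeak (no connectivity analysis; a deliberate
    simplification of clinical segmentation)."""
    return float((suv >= rel_threshold * ref_suv).sum()) * VOXEL_ML

def percent_vd(vol_a, vol_b):
    """%VD: volume difference of reconstruction B relative to A."""
    return 100.0 * (vol_b - vol_a) / vol_a

rng = np.random.default_rng(1)
osem = np.clip(rng.normal(3.0, 1.0, (20, 20, 20)), 0.0, None)  # toy SUV map
psf = osem * 1.1               # PSF reconstruction typically sharpens uptake
suv_peak = 6.0                 # assumed SUVpeak of the lesion
print("%%VD = %.1f%%" % percent_vd(tumor_volume(osem, suv_peak),
                                   tumor_volume(psf, suv_peak)))
```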
Effects of Frequency Drift on the Quantification of Gamma-Aminobutyric Acid Using MEGA-PRESS
Tsai, Shang-Yueh; Fang, Chun-Hao; Wu, Thai-Yu; Lin, Yi-Ru
2016-01-01
The MEGA-PRESS method is the most common method used to measure γ-aminobutyric acid (GABA) in the brain at 3T. It has been shown that the underestimation of the GABA signal due to B0 drift up to 1.22 Hz/min can be reduced by post-frequency alignment. In this study, we show that the underestimation of GABA can still occur even with post-frequency alignment when the B0 drift is up to 3.93 Hz/min. The underestimation can be reduced by applying a frequency shift threshold. A total of 23 subjects were scanned twice to assess the short-term reproducibility, and 14 of them were scanned again after 2–8 weeks to evaluate the long-term reproducibility. A linear regression analysis of the quantified GABA versus the frequency shift showed a negative correlation (P < 0.01), indicating underestimation of the GABA signal. When a frequency shift threshold of 0.125 ppm (15.5 Hz or 1.79 Hz/min) was applied, the linear regression showed no statistically significant difference (P > 0.05). Therefore, a frequency shift threshold at 0.125 ppm (15.5 Hz) can be used to reduce underestimation during GABA quantification. For data with a B0 drift up to 3.93 Hz/min, the coefficients of variation of short-term and long-term reproducibility for the GABA quantification were less than 10% when the frequency threshold was applied. PMID:27079873
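A minimal sketch of such a frequency-shift quality gate, assuming a 3 T proton frequency of about 123.2 MHz (so 0.125 ppm is roughly 15.4 Hz) and per-transient frequency offsets estimated elsewhere in the pipeline:

```python
import numpy as np

PPM_THRESHOLD = 0.125   # frequency-shift threshold from the study
HZ_PER_PPM = 123.2      # ~123.2 MHz proton frequency at 3 T -> Hz per ppm

def usable_transients(freq_offsets_hz):
    """Mask of transients whose drift-corrected frequency offset stays
    within the 0.125 ppm threshold (illustrative quality gate; the full
    MEGA-PRESS pipeline also performs phase/frequency alignment)."""
    offsets_ppm = np.abs(np.asarray(freq_offsets_hz)) / HZ_PER_PPM
    return offsets_ppm <= PPM_THRESHOLD

# Simulated per-transient offsets under a linear B0 drift of ~3.93 Hz/min
minutes = np.linspace(0, 8.5, 320)            # 320 transients over 8.5 min
offsets = 3.93 * minutes                      # Hz of accumulated drift
mask = usable_transients(offsets - offsets.mean())   # after mean alignment
print("transients retained: %d of %d" % (mask.sum(), mask.size))
```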
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l × m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
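The nuclear-norm case (Schatten p = 1) of the singular value thresholding operator mentioned above has a simple closed form; the sketch below implements that standard operator on a toy low-rank matrix with a structured error block. The paper's generalized Schatten p-norm operators are not reproduced here.

```python
import numpy as np

def svt(matrix, tau):
    """Singular value soft-thresholding, the proximal operator of the
    nuclear norm (Schatten p-norm with p = 1): each singular value is
    shrunk by tau and values below tau are discarded."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

rng = np.random.default_rng(0)
clean = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))   # rank-3 signal
noisy = clean.copy()
noisy[5:15, 5:15] += 8.0          # structured "occlusion-like" error block
recovered = svt(noisy, tau=5.0)
print("rank before: %d, after: %d"
      % (np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(recovered)))
```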
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
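A minimal, runnable sketch of the switching rule described in both abstracts, with a stub linear dynamics model standing in for the learned local model; the class, function names, and error threshold are hypothetical.

```python
import numpy as np

ERROR_THRESHOLD = 0.05   # illustrative bound on one-step prediction error

class LinearModel:
    """Tiny stand-in for the learned dynamics models (the paper uses local
    linear regression and linear function approximation; this stub only
    illustrates the switching logic)."""
    def __init__(self, A):
        self.A = A
    def predict(self, state, action):
        return self.A @ state + 0.1 * action

local_model = LinearModel(np.array([[0.99, 0.0], [0.0, 0.98]]))

def maybe_plan_with_local_model(state, action, next_state):
    """AC-HMLP-style rule: plan with the local model at this time step only
    if its one-step state-prediction error is within the threshold; the
    global model would be used instead at the end of each episode."""
    error = np.linalg.norm(local_model.predict(state, action) - next_state)
    if error <= ERROR_THRESHOLD:
        print("planning with local model (error %.3f)" % error)
    else:
        print("skipping local planning (error %.3f)" % error)

state = np.array([1.0, -0.5])
maybe_plan_with_local_model(state, 0.2, np.array([0.99, -0.47]))
```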
Estimation of elimination half-lives of organic chemicals in humans using gradient boosting machine.
Lu, Jing; Lu, Dong; Zhang, Xiaochen; Bi, Yi; Cheng, Keguang; Zheng, Mingyue; Luo, Xiaomin
2016-11-01
Elimination half-life is an important pharmacokinetic parameter that determines exposure duration to approach steady state of drugs and regulates drug administration. The experimental evaluation of half-life is time-consuming and costly. Thus, it is attractive to build an accurate prediction model for half-life. In this study, several machine learning methods, including gradient boosting machine (GBM), support vector regressions (RBF-SVR and Linear-SVR), local lazy regression (LLR), SA, SR, and GP, were employed to build high-quality prediction models. Two strategies of building consensus models were explored to improve the accuracy of prediction. Moreover, the applicability domains (ADs) of the models were determined by using the distance-based threshold. Among seven individual models, GBM showed the best performance (R(2)=0.820 and RMSE=0.555 for the test set), and Linear-SVR produced the inferior prediction accuracy (R(2)=0.738 and RMSE=0.672). The use of distance-based ADs effectively determined the scope of QSAR models. However, the consensus models by combing the individual models could not improve the prediction performance. Some essential descriptors relevant to half-life were identified and analyzed. An accurate prediction model for elimination half-life was built by GBM, which was superior to the reference model (R(2)=0.723 and RMSE=0.698). Encouraged by the promising results, we expect that the GBM model for elimination half-life would have potential applications for the early pharmacokinetic evaluations, and provide guidance for designing drug candidates with favorable in vivo exposure profile. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.
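A hedged sketch of the overall recipe, pairing scikit-learn's gradient boosting regressor with a distance-based applicability domain on synthetic descriptors. The threshold definition here (mean plus 1.96 standard deviations of training-set neighbor distances) is one common recipe and may differ from the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))          # stand-in molecular descriptors
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + rng.normal(0, 0.2, 200)
X_test = rng.normal(size=(50, 8))

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Distance-based applicability domain: a test compound is "in domain" if its
# mean distance to the k nearest training compounds stays below a threshold
# set from the training-set distance distribution.
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(X_train)
train_d = nn.kneighbors(X_train, n_neighbors=k + 1)[0][:, 1:].mean(axis=1)
threshold = train_d.mean() + 1.96 * train_d.std()
test_d = nn.kneighbors(X_test)[0].mean(axis=1)
in_domain = test_d <= threshold

preds = model.predict(X_test)
print("in-domain predictions: %d of %d" % (in_domain.sum(), len(X_test)))
```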
Fernández, Leónides; Mediano, Pilar; García, Ricardo; Rodríguez, Juan M; Marín, María
2016-09-01
Objectives Lactational mastitis frequently leads to a premature abandonment of breastfeeding; its development has been associated with several risk factors. This study aims to use a decision tree (DT) approach to establish the main risk factors involved in mastitis and to compare its performance for predicting this condition with a stepwise logistic regression (LR) model. Methods Data from 368 cases (breastfeeding women with mastitis) and 148 controls were collected by a questionnaire about risk factors related to medical history of mother and infant, pregnancy, delivery, postpartum, and breastfeeding practices. The performance of the DT and LR analyses was compared using the area under the receiver operating characteristic (ROC) curve. Sensitivity, specificity and accuracy of both models were calculated. Results Cracked nipples, antibiotics and antifungal drugs during breastfeeding, infant age, breast pumps, familial history of mastitis and throat infection were significant risk factors associated with mastitis in both analyses. Bottle-feeding and milk supply were related to mastitis for certain subgroups in the DT model. The areas under the ROC curves were similar for LR and DT models (0.870 and 0.835, respectively). The LR model had better classification accuracy and sensitivity than the DT model, but the last one presented better specificity at the optimal threshold of each curve. Conclusions The DT and LR models constitute useful and complementary analytical tools to assess the risk of lactational infectious mastitis. The DT approach identifies high-risk subpopulations that need specific mastitis prevention programs and, therefore, it could be used to make the most of public health resources.
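A compact sketch of this kind of model comparison on synthetic data, fitting a decision tree and a logistic regression and comparing them by area under the ROC curve; the features, labels, and hyperparameters are illustrative, not the mastitis questionnaire data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(516, 6))       # stand-ins for questionnaire risk factors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(516) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = LogisticRegression().fit(X_tr, y_tr)
dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Compare the two models by area under the ROC curve, as in the study
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
auc_dt = roc_auc_score(y_te, dt.predict_proba(X_te)[:, 1])
print("AUC logistic regression: %.3f, decision tree: %.3f" % (auc_lr, auc_dt))
```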
Nonlinear and threshold-dominated runoff generation controls DOC export in a small peat catchment
NASA Astrophysics Data System (ADS)
Birkel, C.; Broder, T.; Biester, H.
2017-03-01
We used a relatively simple two-layer, coupled hydrology-biogeochemistry model to simultaneously simulate streamflow and stream dissolved organic carbon (DOC) concentrations in a small lead and arsenic contaminated upland peat catchment in northwestern Germany. The model procedure was informed by an initial data mining analysis, in combination with regression relationships of discharge, DOC, and element export. We assessed the internal model DOC processing based on stream DOC hysteresis patterns and 3-hourly time step groundwater level and soil DOC data for two consecutive summer periods in 2013 and 2014. The parsimonious model (i.e., few calibrated parameters) showed the importance of nonlinear and rapid near-surface runoff generation mechanisms that caused around 60% of simulated DOC load. The total load was high even though these pathways were only activated during storm events on average 30% of the monitoring time—as also shown by the experimental data. Overall, the drier period 2013 resulted in increased nonlinearity but exported less DOC (115 kg C ha-1 yr-1 ± 11 kg C ha-1 yr-1) compared to the equivalent but wetter period in 2014 (189 kg C ha-1 yr-1 ± 38 kg C ha-1 yr-1). The exceedance of a critical water table threshold (-10 cm) triggered a rapid near-surface runoff response with associated higher DOC transport connecting all available DOC pools and subsequent dilution. We conclude that the combination of detailed experimental work with relatively simple, coupled hydrology-biogeochemistry models not only allowed the model to be internally constrained but also provided important insight into how DOC and tightly coupled pollutants or trace elements are mobilized.
Hung, Kenneth; Gralla, Jane; Dodge, Jennifer L; Bambha, Kiran M; Dirchwolf, Melisa; Rosen, Hugo R; Biggins, Scott W
2015-11-01
Repeat liver transplantation (LT) is controversial because of inferior outcomes versus primary LT. A minimum 1-year expected post-re-LT survival of 50% has been proposed. We aimed to identify combinations of Model for End-Stage Liver Disease (MELD), donor risk index (DRI), and recipient characteristics achieving this graft survival threshold. We identified re-LT recipients listed in the United States from March 2002 to January 2010 with > 90 days between primary LT and listing for re-LT. Using Cox regression, we estimated the expected probability of 1-year graft survival and identified combinations of MELD, DRI, and recipient characteristics attaining >50% expected 1-year graft survival. Re-LT recipients (n = 1418) had a median MELD of 26 and median age of 52 years. Expected 1-year graft survival exceeded 50% regardless of MELD or DRI in Caucasian recipients who were not infected with hepatitis C virus (HCV) of all ages and Caucasian HCV-infected recipients <50 years old. As age increased in HCV-infected Caucasian and non-HCV-infected African American recipients, lower MELD scores or lower DRI grafts were needed to attain the graft survival threshold. As MELD scores increased in HCV-infected African American recipients, lower-DRI livers were required to achieve the graft survival threshold. Use of high-DRI livers (>1.44) in HCV-infected recipients with a MELD score > 26 at re-LT failed to achieve the graft survival threshold with recipient age ≥ 60 years (any race), as well as at age ≥ 50 years for Caucasians and at age < 50 years for African Americans. Strategic donor selection can achieve >50% expected 1-year graft survival even in high-risk re-LT recipients (HCV infected, older age, African American race, high MELD scores). Low-risk transplant recipients (age < 50 years, non-HCV-infected) can achieve the survival threshold with varying DRI and MELD scores. © 2015 American Association for the Study of Liver Diseases.
Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R
2014-09-01
Discordance between small aortic valve area (AVA; < 1.0 cm(2)) and low mean pressure gradient (MPG; < 40 mm Hg) affects a third of patients with moderate or severe aortic stenosis (AS). We hypothesized that this is largely due to inaccurate echocardiographic measurements of the left ventricular outflow tract area (LVOTarea) and stroke volume alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm(2); 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m(2); 95% CI, -28.9 to 16.0) and consequently, AVA (-0.23 cm(2); 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm(2) corresponded to MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
Dusing, Reginald W.; Peng, Warner; Lai, Sue-Min; Grado, Gordon L.; Holzbeierlein, Jeffrey M.; Thrasher, J. Brantley; Hill, Jacqueline; Van Veldhuizen, Peter J.
2014-01-01
Purpose The aim of this study was to identify which patient characteristics are associated with the highest likelihood of positive findings on 11C-acetate PET/computed tomography attenuation correction (PET/CTAC) scans when imaging for recurrent prostate cancer. Methods From 2007 to 2011, 250 11C-acetate PET/CTAC scans were performed at a single institution on patients with prostate cancer recurrence after surgery, brachytherapy, or external beam radiation. Of these patients, 120 met our inclusion criteria. Logistic regression analysis was used to examine the relationship between positive findings and patients' characteristics, such as prostate-specific antigen (PSA) level at the time of the scan, PSA kinetics, Gleason score, staging, and type of treatment before the scan. Results In total, 68.3% of the 120 11C-acetate PET/CTAC scans were positive. The percentage of positive scans correlated positively with PSA at the time of scanning and with PSA velocity (PSAV). The putative sensitivity and specificity were 86.6% and 65.8%, respectively, when a PSA level greater than 1.24 ng/mL was used as the threshold for scanning, and 74% and 75%, respectively, when a PSAV greater than 1.32 ng/mL/y was used as the threshold. No significant associations were found between scan positivity and age, PSA doubling time, Gleason score, staging, or type of treatment before scanning. Conclusions This retrospective study suggests that threshold models of PSA greater than 1.24 ng/mL or PSAV greater than 1.32 ng/mL per year are independent predictors of positive findings in 11C-acetate PET/CTAC imaging of recurrent prostate cancer. PMID:25036021
Muscle Weakness Thresholds for Prediction of Diabetes in Adults.
Peterson, Mark D; Zhang, Peng; Choksi, Palak; Markides, Kyriakos S; Al Snih, Soham
2016-05-01
Despite the known links between weakness and early mortality, what remains to be fully understood is the extent to which strength preservation is associated with protection from cardiometabolic diseases, such as diabetes. The purposes of this study were to determine the association between muscle strength and diabetes among adults, and to identify age- and sex-specific thresholds of low strength for detection of risk. A population-representative sample of 4066 individuals, aged 20-85 years, was included from the combined 2011-2012 National Health and Nutrition Examination Survey (NHANES) data sets. Strength was assessed using a handheld dynamometer, and the single highest reading from either hand was normalized to body mass. A logistic regression model was used to assess the association between normalized grip strength and risk of diabetes, as determined by haemoglobin A1c levels ≥6.5 % (≥48 mmol/mol), while controlling for sociodemographic characteristics, anthropometric measures and television viewing time. For every 0.05 decrement in normalized strength, there were 1.26 times increased adjusted odds for diabetes in men and women. Women were at lower odds of having diabetes (odds ratio 0.49; 95 % confidence interval 0.29-0.82). Age, waist circumference and lower income were also associated with diabetes. The optimal sex- and age-specific weakness thresholds to detect diabetes were 0.56, 0.50 and 0.45 for men at ages of 20-39, 40-59 and 60-80 years, respectively, and 0.42, 0.38 and 0.33 for women at ages of 20-39, 40-59 and 60-80 years, respectively. We present thresholds of strength that can be incorporated into a clinical setting for identifying adults who are at risk of developing diabetes and might benefit from lifestyle interventions to reduce risk.
Muscle Weakness Thresholds for Prediction of Diabetes in Adults
Peterson, Mark D.; Zhang, Peng; Choksi, Palak; Markides, Kyriakos S.; Al Snih, Soham
2016-01-01
Background Despite the known links between weakness and early mortality, what remains to be fully understood is the extent to which strength preservation is associated with protection from cardiometabolic diseases such as diabetes. Purpose The purposes of this study were to determine the association between muscle strength and diabetes among adults, and to identify age- and sex-specific thresholds of low strength for detection of risk. Methods A population-representative sample of 4,066 individuals, aged 20–85 years, was included from the combined 2011–2012 National Health and Nutrition Examination Survey datasets. Strength was assessed using a hand-held dynamometer, and the single largest reading from either hand was normalized to body mass. A logistic regression model was used to assess the association between normalized grip strength and risk of diabetes, as determined by hemoglobin A1c (HbA1c) levels (≥6.5% [≥48 mmol/mol]), while controlling for sociodemographic characteristics, anthropometric measures, and television viewing time. Results For every 0.05 decrement in normalized strength, there was a 1.26 times increased adjusted odds for diabetes in men and women. Women were at lower odds of having diabetes (OR: 0.49; 95% CI: 0.29–0.82), whereas age, waist circumference and lower income were inversely associated. Optimal sex- and age-specific weakness thresholds to detect diabetes were 0.56, 0.50, and 0.45 for men, and 0.42, 0.38, and 0.33 for women, for ages 20–39 years, 40–59 years, and 60–80 years. Conclusions and Clinical Relevance We present thresholds of strength that can be incorporated into a clinical setting for identifying adults that are at risk for developing diabetes, and that might benefit from lifestyle interventions to reduce risk. PMID:26744337
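A minimal sketch of the central computation on synthetic data: grip strength is normalized to body mass, a logistic regression is fitted, and the coefficient is converted into an odds ratio per 0.05 decrement in normalized strength. All values below are simulated, not NHANES data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4066
grip_kg = rng.normal(35, 10, n).clip(5)        # highest reading, either hand
body_mass = rng.normal(80, 15, n).clip(40)
ngs = grip_kg / body_mass                      # normalized grip strength
# Synthetic diabetes labels: risk rises as normalized strength falls
y = rng.random(n) < 1 / (1 + np.exp(8 * (ngs - 0.35)))

model = LogisticRegression().fit(ngs.reshape(-1, 1), y)
beta = model.coef_[0, 0]
# Odds ratio for each 0.05 decrement in normalized strength
or_per_005_decrement = np.exp(-0.05 * beta)
print("OR per 0.05 decrement: %.2f" % or_per_005_decrement)
```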
Ferguson, Sue A.; Allread, W. Gary; Burr, Deborah L.; Heaney, Catherine; Marras, William S.
2013-01-01
Background Biomechanical, psychosocial, and individual risk factors for low back disorder have been studied extensively; however, few researchers have examined all three together. The objective of this study was to develop a low back disorder risk model for furniture distribution workers using biomechanical, psychosocial, and individual risk factors. Methods This was a prospective study with a six-month follow-up. There were 454 subjects at 9 furniture distribution facilities enrolled in the study. Biomechanical exposure was evaluated using the American Conference of Governmental Industrial Hygienists (2001) lifting threshold limit values for low back injury risk. Psychosocial and individual risk factors were evaluated via questionnaires. Low back health functional status was measured using the lumbar motion monitor. Low back disorder cases were defined as a loss of low back functional performance of −0.14 or more. Findings There were 92 cases of meaningful loss in low back functional performance and 185 non-cases. A multivariate logistic regression model that included baseline functional performance probability, facility, perceived workload, intermediate reach distance, number of exertions above threshold limit values, job tenure, manual material handling, and age provided a model sensitivity of 68.5% and specificity of 71.9%. Interpretation The results of this study indicate which biomechanical, individual, and psychosocial risk factors are important, as well as how much of each risk factor is too much, resulting in increased risk of low back disorder among furniture distribution workers. PMID:21955915
Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan
2018-04-13
The inconvenient loading and unloading of a long and heavy drill pipe gives rise to the difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and achieve a fast and on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a certain axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate data in real time. To better suppress random error interference and ensure real contour information, a new wavelet threshold function is proposed to process data through the wavelet threshold denoising. Discrete data after denoising is segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted by using the method of weighted total least squares (WTLS). Then, the thread parameters are calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and it can improve the data acquisition accuracy of a sensor. Simulation results indicate that the improved threshold function is of better continuity and self-adaptability, which makes sure that denoising effects are guaranteed, and, meanwhile, the complete elimination of real data distorted in random errors is avoided. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and repeatability limit ≤ 4 μm (high repeatability), and hence the accuracy and efficiency of measurement are both improved.
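A sketch of the wavelet-threshold denoising stage on a synthetic thread profile, using PyWavelets with the non-negative garrote threshold function as a stand-in for the paper's improved function (which is not reproduced); the noise estimate and universal threshold follow the usual Donoho recipe.

```python
import numpy as np
import pywt

def garrote_threshold(c, t):
    """Non-negative garrote: zero below the threshold, c - t**2/c above it.
    Continuous like soft thresholding but less biased for large
    coefficients; a stand-in for the paper's improved threshold function."""
    c = np.asarray(c, dtype=float)
    out = np.zeros_like(c)
    big = np.abs(c) > t
    out[big] = c[big] - t ** 2 / c[big]
    return out

# Synthetic thread-contour profile with additive measurement noise
x = np.linspace(0.0, 25.0, 2048)                    # 25 mm of thread
profile = np.sin(2 * np.pi * x / 5.08) + 0.02 * x   # pitch-like wave + taper
noisy = profile + np.random.default_rng(0).normal(0.0, 0.05, x.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level, finest scale
t = sigma * np.sqrt(2.0 * np.log(noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [garrote_threshold(c, t) for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: x.size]    # guard against padding
print("residual RMS: %.4f" % np.sqrt(np.mean((denoised - profile) ** 2)))
```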
All-cause mortality in asymptomatic persons with extensive Agatston scores above 1000.
Patel, Jaideep; Blaha, Michael J; McEvoy, John W; Qadir, Sadia; Tota-Maharaj, Rajesh; Shaw, Leslee J; Rumberger, John A; Callister, Tracy Q; Berman, Daniel S; Min, James K; Raggi, Paolo; Agatston, Arthur A; Blumenthal, Roger S; Budoff, Matthew J; Nasir, Khurram
2014-01-01
Risk assessment in the extensive calcified plaque phenotype has been limited by small sample size. We studied all-cause mortality rates among asymptomatic patients with markedly elevated Agatston scores > 1000. We studied a clinical cohort of 44,052 asymptomatic patients referred for coronary calcium scans. Mean follow-up was 5.6 years (range, 1-13 years). All-cause mortality rates were calculated after stratifying by Agatston score (0, 1-1000, 1001-1500, 1501-2000, and >2000). A multivariable Cox regression model adjusting for self-reported traditional risk factors was created to assess the relative mortality hazard of Agatston scores 1001 to 1500, 1501 to 2000, and >2000. With the use of post-estimation modeling, we assessed for the presence of an upper threshold of risk with high Agatston scores. A total of 1593 patients (4% of total population) had Agatston score > 1000. There was a continuous graded decrease in estimated 10-year survival across increasing Agatston score, continuing when Agatston score > 1000 (Agatston score 1001-1500, 78%; Agatston score 1501-2000, 74%; Agatston score > 2000, 51%). After multivariable adjustment, Agatston scores 1001 to 1500, 1501 to 2000, and >2000 were associated with an 8.05-, 7.45-, and 13.26-fold greater mortality risk, respectively, than for Agatston score of 0. Compared with Agatston score 1001 to 1500, Agatston score 1501 to 2000 had a similar all-cause mortality risk, whereas Agatston score > 2000 had an increased relative risk (Agatston score 1501-2000: hazard ratio [HR], 1.01 [95% CI, 0.67-1.51]; Agatston score > 2000: HR, 1.79 [95% CI, 1.30-2.46]). Graphical assessment of the predicted survival model suggests no upper threshold for risk associated with calcified plaque in coronary arteries. Increasing calcified plaque in coronary arteries continues to predict a graded decrease in survival among patients with extensive Agatston score > 1000 with no apparent upper threshold. Published by Elsevier Inc.
Sun, Xingwei; Chen, Changzheng; Sun, Mengnan
2018-01-01
The inconvenient loading and unloading of a long and heavy drill pipe gives rise to the difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and achieve a fast and on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a certain axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor’s measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate data in real time. To better suppress random error interference and ensure real contour information, a new wavelet threshold function is proposed to process data through the wavelet threshold denoising. Discrete data after denoising is segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted by using the method of weighted total least squares (WTLS). Then, the thread parameters are calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and it can improve the data acquisition accuracy of a sensor. Simulation results indicate that the improved threshold function is of better continuity and self-adaptability, which makes sure that denoising effects are guaranteed, and, meanwhile, the complete elimination of real data distorted in random errors is avoided. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and repeatability limit ≤ 4 μm (high repeatability), and hence the accuracy and efficiency of measurement are both improved. PMID:29652836
Hendrickson-Rebizant, J; Sigvaldason, H; Nason, R W; Pathak, K A
2015-08-01
Age is integrated in most risk stratification systems for well-differentiated thyroid cancer (WDTC). The most appropriate age threshold for stage grouping of WDTC is debatable. The objective of this study was to evaluate the best age threshold for stage grouping by comparing multivariable models designed to evaluate the independent impact of various prognostic factors, including age-based stage grouping, on the disease-specific survival (DSS) of our population-based cohort. Data from a population-based thyroid cancer cohort of 2125 consecutive WDTC patients, diagnosed during 1970-2010, with a median follow-up of 11.5 years, were used to calculate DSS using the Kaplan-Meier method. Multivariable analysis with a Cox proportional hazards model was used to assess the independent impact of different prognostic factors on DSS. The Akaike information criterion (AIC), a measure of statistical model fit, was used to identify the most appropriate age threshold model. Delta AIC, Akaike weight, and evidence ratios were calculated to compare the relative strength of different models. The mean age of the patients was 47.3 years. DSS of the cohort was 95.6% and 92.8% at 10 and 20 years, respectively. A threshold of 55 years, with the lowest AIC, was identified as the best model. Akaike weight indicated an 85% chance that this age threshold is the best among the compared models, and it is 16.8 times more likely to be the best model as compared to a threshold of 45 years. The age threshold of 55 years was found to be the best for TNM stage grouping. Copyright © 2015 Elsevier Ltd. All rights reserved.
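Given AIC values for a set of candidate threshold models, delta AIC, Akaike weights, and evidence ratios follow directly; the sketch below uses made-up AIC numbers purely to illustrate the computation.

```python
import numpy as np

# Hypothetical AIC values for models using different age thresholds
# (numbers invented for illustration; the study compared thresholds
# such as 45 and 55 years)
candidates = {"age 40": 612.4, "age 45": 610.9, "age 50": 608.3, "age 55": 605.2}

aic = np.array(list(candidates.values()))
delta = aic - aic.min()                  # delta AIC relative to the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                 # Akaike weights sum to one

for (name, _), d, w in zip(candidates.items(), delta, weights):
    print("%s: dAIC=%.1f, weight=%.3f" % (name, d, w))
# Evidence ratio of the best model vs. the 45-year threshold
print("evidence ratio: %.1f" % (weights[aic.argmin()] / weights[1]))
```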
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
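A minimal sketch of the method II calibration: bisect on the grey-level threshold until the segmented volume fraction matches the Archimedes-based reference. The toy volume and target value are illustrative.

```python
import numpy as np

def volume_fraction(image, threshold):
    """Bone volume fraction of a grey-scale micro-CT volume after
    binarisation at `threshold` (a simple global threshold; the scanner's
    own segmentation algorithm is more involved)."""
    return (image >= threshold).mean()

def calibrate_threshold(image, vf_archimedes, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection for the individual threshold whose volume fraction matches
    the Archimedes-based reference; VF decreases monotonically as the
    threshold increases."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume_fraction(image, mid) > vf_archimedes:
            lo = mid          # too much bone -> raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
specimen = rng.random((64, 64, 64))        # toy grey-scale volume
t = calibrate_threshold(specimen, vf_archimedes=0.25)
print("calibrated threshold: %.3f, VF: %.3f" % (t, volume_fraction(specimen, t)))
```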
Meister, Hartmut; Rählmann, Sebastian; Walger, Martin; Margolf-Hackl, Sabine; Kießling, Jürgen
2015-01-01
To examine the association of cognitive function, age, and hearing loss with clinically assessed hearing aid benefit in older hearing-impaired persons. Hearing aid benefit was assessed using objective measures regarding speech recognition in quiet and noisy environments as well as a subjective measure reflecting everyday situations captured using a standardized questionnaire. A broad range of general cognitive functions such as attention, memory, and intelligence were determined using different neuropsychological tests. Linear regression analyses were conducted with the outcome of the neuropsychological tests as well as age and hearing loss as independent variables and the benefit measures as dependent variables. Thirty experienced older hearing aid users with typical age-related hearing impairment participated. Most of the benefit measures revealed that the participants obtained significant improvement with their hearing aids. Regression models showed a significant relationship between a fluid intelligence measure and objective hearing aid benefit. When individual hearing thresholds were considered as an additional independent variable, hearing loss was the only significant contributor to the benefit models. Lower cognitive capacity - as determined by the fluid intelligence measure - was significantly associated with greater hearing loss. Subjective benefit could not be predicted by any of the variables considered. The present study does not give evidence that hearing aid benefit is critically associated with cognitive function in experienced hearing aid users. However, it was found that lower fluid intelligence scores were related to higher hearing thresholds. Since greater hearing loss was associated with a greater objective benefit, these results strongly support the advice of using hearing aids regardless of age and cognitive function to counter hearing loss and the adverse effects of age-related hearing impairment. Still, individual cognitive capacity might be relevant for hearing aid benefit during an initial phase of hearing aid provision if acclimatization has not yet taken place.
Schubert, Maria; Clarke, Sean P; Glass, Tracy R; Schaffert-Witvliet, Bianca; De Geest, Sabina
2009-07-01
In the Rationing of Nursing Care in Switzerland Study, implicit rationing of care was the only factor consistently and significantly associated with all six studied patient outcomes. These results highlight the importance of rationing as a new system factor regarding patient safety and quality of care. Since at least some rationing of care appears inevitable, it is important to identify the thresholds of its influence in order to minimize its negative effects on patient outcomes. To describe the levels of implicit rationing of nursing care in a sample of Swiss acute care hospitals and to identify clinically meaningful thresholds of rationing. Descriptive cross-sectional multi-center study. Five Swiss-German and three Swiss-French acute care hospitals. 1338 nurses and 779 patients. Implicit rationing of nursing care was measured using the newly developed Basel Extent of Rationing of Nursing Care (BERNCA) instrument. Other variables were measured using survey items from the International Hospital Outcomes Study battery. Data were summarized using appropriate descriptive measures, and logistic regression models were used to define clinically meaningful rationing threshold levels. For the studied patient outcomes, the identified rationing threshold levels varied from 0.5 (i.e., between 0 ('never') and 1 ('rarely')) to 2 ('sometimes'). Three of the identified patient outcomes (nosocomial infections, pressure ulcers, and patient satisfaction) were particularly sensitive to rationing, showing negative consequences wherever rationing was consistently reported (i.e., average BERNCA scores of 0.5 or above). For the other outcomes, increases in negative outcomes were first observed from the level of 1 (average ratings of 'rarely'). Rationing scores generated using the BERNCA instrument provide a clinically meaningful method for tracking the correlates of low resources or difficulties in resource allocation on patient outcomes. The thresholds identified here provide parameters for administrators to respond to whenever rationing reports exceed the determined level of 0.5 or 1. Since even very low levels of rationing had negative consequences on three of the six studied outcomes, it is advisable to treat consistent evidence of any rationing as a significant threat to patient safety and quality of care.
Approaches to Identify Exceedances of Water Quality Thresholds Associated with Ocean Conditions
WED scientists have developed a method to help distinguish whether failures to meet water quality criteria are associated with natural coastal upwelling by using the statistical approach of logistic regression. Estuaries along the west coast of the United States periodically ha...
Rainfall thresholds for the initiation of debris flows at La Honda, California
Wilson, R.C.; Wieczorek, G.F.
1995-01-01
A simple numerical model, based on the physical analogy of a leaky barrel, can simulate significant features of the interaction between rainfall and shallow-hillslope pore pressures. The leaky-barrel-model threshold is consistent with, but slightly higher than, an earlier, purely empirical, threshold. The number of debris flows triggered by a storm can be related to the time and amount by which the leaky-barrel-model response exceeded the threshold during the storm. -from Authors
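A minimal sketch of the leaky-barrel analogy (illustrative parameters and units of my own choosing, not the authors' calibrated values): the barrel level rises with rainfall and leaks at a rate proportional to the level, and debris-flow triggering is flagged while the level exceeds a threshold.

```python
import numpy as np

def leaky_barrel(rain, k=0.1, dt=1.0):
    """Integrate dh/dt = rain - k*h with forward Euler (illustrative units)."""
    h, levels = 0.0, []
    for r in rain:
        h += dt * (r - k * h)      # inflow from rainfall minus leakage
        levels.append(h)
    return np.array(levels)

rain = np.array([0, 2, 8, 12, 10, 4, 1, 0, 0, 0], dtype=float)  # hourly hyetograph
levels = leaky_barrel(rain)
threshold = 25.0                   # hypothetical triggering level
print(np.flatnonzero(levels > threshold))  # hours during which triggering is indicated
```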
Two-threshold model for scaling laws of noninteracting snow avalanches
Faillettaz, J.; Louchet, F.; Grasso, J.-R.
2004-01-01
A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock, or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling of gravity-driven systems.
Wei, Wenjia; Heinze, Stefanie; Gerstner, Doris G.; Walser, Sandra M.; Twardella, Dorothee; Reiter, Christina; Weilnhammer, Veronika; Perez-Alvarez, Carmelo; Steffens, Thomas; Herr, Caroline E.W.
2017-01-01
Background: Studies investigating the effect of leisure noise on extended high-frequency hearing are scarce, and their results are inconsistent. The aim of this study was to investigate whether extended high-frequency hearing threshold shift is related to audiometric notch, and whether total leisure noise exposure is associated with extended high-frequency hearing threshold shift. Materials and Methods: A questionnaire from the Ohrkan cohort study was used to collect information on demographics and leisure time activities. Conventional and extended high-frequency audiometry was performed. We fitted logistic regression models relating extended high-frequency hearing threshold shift to audiometric notch, and total leisure noise exposure to extended high-frequency hearing threshold shift. Potential confounders (sex, school type, and firecrackers) were included. Results: Data from 278 participants (aged 18–23 years, 53.2% female) were analyzed. Associations between hearing threshold shift at 10, 11.2, 12.5, and 14 kHz and audiometric notch were observed, with a higher prevalence of threshold shift at the four frequencies than of the notch. However, we found no associations between total leisure noise exposure and hearing threshold shift at any extended high frequency. Conclusion: This exploratory analysis suggests that while extended high-frequency hearing threshold shifts are not related to total leisure noise exposure, they are strongly associated with audiometric notch. This leads us to further explore the hypothesis that extended high-frequency threshold shift might be indicative of the appearance of audiometric notch at a later time point, which can be investigated in future follow-ups of the Ohrkan cohort. PMID:29319010
Liu, Zhihua; Yang, Jian; He, Hong S.
2013-01-01
The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls responds to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression tree methods. This quantification allowed us to identify the threshold value of fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and are more likely to reach larger sizes. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique, as sketched below. Although the threshold derived from this analytical method may depend heavily on the sampling technique, our study introduces an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
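A hedged sketch of the "moving window" idea using scikit-learn's gradient-boosted trees (synthetic data; the window size and variable names are illustrative, not the authors'): fires are ordered by size, a window slides across them, and the relative importance of fuel, topography, and weather is recorded per window.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 500
fire_size = np.sort(rng.lognormal(3, 1, n))          # ordered fire sizes
X = rng.normal(size=(n, 3))                          # columns: fuel, topography, weather
# Synthetic response: fuel dominates small fires, weather dominates large ones
w = fire_size / fire_size.max()
y = (1 - w) * X[:, 0] + 0.25 * X[:, 1] + w * X[:, 2] + rng.normal(0, 0.1, n)

window, step = 100, 25
for start in range(0, n - window + 1, step):
    sl = slice(start, start + window)
    model = GradientBoostingRegressor(n_estimators=100).fit(X[sl], y[sl])
    imp = model.feature_importances_                 # relative importance per window
    print(f"sizes {fire_size[sl][0]:8.1f}-{fire_size[sl][-1]:8.1f}:",
          np.round(imp, 2))
```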
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
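As a hedged numeric illustration of this definition (the sensitivity value below is an assumed example, not data from the report): with a primary gage sensitivity of 0.02 microV/V per lbs and the two output limits quoted above, the residual thresholds follow directly.

```python
def residual_threshold(gage_sensitivity, output_limit_uVV):
    """Threshold = output limit / |primary gage sensitivity| (per the definition)."""
    return output_limit_uVV / abs(gage_sensitivity)

# Assumed sensitivity: 0.02 microV/V per lbs (i.e., 2.0 microV/V per 100 lbs)
print(residual_threshold(0.02, 2.5))   # calibration/check load residuals -> 125 lbs
print(residual_threshold(0.02, 0.5))   # repeat-point differences         -> 25 lbs
```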
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, M.P.; Rouvray, D.H.
The propensity of hydrocarbons to form soot in a diffusion flame is correlated here for the first time against various topological indices. Two of the indices, the hydrogen deficiency index and the Balaban distance-sum connectivity index, were found to be especially valuable for correlational purposes. For a total of 98 hydrocarbon fuel molecules of differing types, regression analyses yielded good correlations between the threshold soot indices (TSIs) for diffusion flames and these two indices. An equation that can be used to estimate TSI values of fuel molecules is presented.
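A minimal sketch of a two-index linear regression of this kind (synthetic index values and an assumed relation; the paper's actual data and fitted coefficients are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 98
hdi = rng.uniform(0, 8, n)             # hydrogen deficiency index (synthetic)
balaban = rng.uniform(1, 4, n)         # Balaban distance-sum connectivity index
tsi = 5.0 + 9.0 * hdi - 6.0 * balaban + rng.normal(0, 2, n)  # assumed relation

A = np.column_stack([np.ones(n), hdi, balaban])
coef, *_ = np.linalg.lstsq(A, tsi, rcond=None)
print("TSI ~ %.2f + %.2f*HDI + %.2f*J" % tuple(coef))
```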
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Roeder, William P.
2010-01-01
The expected peak wind speed for the day is an important element in the daily morning forecast for ground and space launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron (45 WS) must issue forecast advisories for KSC/CCAFS when they expect peak gusts to meet or exceed the 25, 35, and 50 kt thresholds at any level from the surface to 300 ft. In Phase I of this task, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a cool-season (October - April) tool to help forecast the non-convective peak wind from the surface to 300 ft at KSC/CCAFS. During the warm season, these wind speeds are rarely exceeded except during convective winds or under the influence of tropical cyclones, for which other techniques are already in use. The tool used single and multiple linear regression equations to predict the peak wind from the morning sounding. The forecaster manually entered several observed sounding parameters into a Microsoft Excel graphical user interface (GUI), and the tool then displayed the forecast peak wind speed, the average wind speed at the time of the peak wind, the timing of the peak wind, and the probability the peak wind would meet or exceed 35, 50 and 60 kt. The 45 WS customers later dropped the requirement for >= 60 kt wind warnings. During Phase II of this task, the AMU expanded the period of record (POR) by six years to increase the number of observations used to create the forecast equations. A large number of possible predictors were evaluated from archived soundings, including inversion depth and strength, low-level wind shear, mixing height, temperature lapse rate, and winds from the surface to 3000 ft. Each day in the POR was stratified in a number of ways, such as by low-level wind direction, synoptic weather pattern, precipitation, and Bulk Richardson number. The most accurate Phase II equations were then selected for an independent verification. The Phase I and II forecast methods were compared using an independent verification data set. The two methods were compared to climatology, wind warnings and advisories issued by the 45 WS, and North American Mesoscale (NAM) model (MesoNAM) forecast winds. The performance of the Phase I and II methods was similar with respect to mean absolute error. Since the Phase I data were not stratified by precipitation, that method's peak wind forecasts had a large negative bias on days with precipitation and a small positive bias on days with no precipitation. Overall, the climatology methods performed the worst, while the MesoNAM performed the best. Since the MesoNAM winds were the most accurate in the comparison, the final version of the tool was based on the MesoNAM winds. The probability that the peak wind will meet or exceed the warning thresholds was based on the one standard deviation error bars from the linear regression. For example, the linear regression might forecast the most likely peak speed to be 35 kt, and the error bars would be used to calculate that the probability of >= 25 kt is 76%, the probability of >= 35 kt is 50%, and the probability of >= 50 kt is 19%. The authors have not seen this application of linear regression error bars in any other meteorological application. Although probability forecast tools should usually be developed with logistic regression, this technique can easily be generalized to any linear regression forecast tool to estimate the probability of exceeding any desired threshold.
This could be useful for previously developed linear regression forecast tools or new forecast applications where statistical analysis software to perform logistic regression is not available. The tool was delivered in two formats - a Microsoft Excel GUI and a Tool Command Language/Tool Kit (Tcl/Tk) GUI in the Meteorological Interactive Data Display System (MIDDS). The Microsoft Excel GUI reads a MesoNAM text file containing hourly forecasts from 0 to 84 hours from one model run (00 or 12 UTC). The GUI then displays the peak wind speed, the average wind speed, and the probability the peak wind will meet or exceed the 25-, 35- and 50-kt thresholds. The user can display the Day-1 through Day-3 peak wind forecasts, and separate forecasts are made for precipitation and non-precipitation days. The MIDDS GUI uses data from the NAM and Global Forecast System (GFS) instead of the MesoNAM. It can display Day-1 and Day-2 forecasts using NAM data, and Day-1 through Day-5 forecasts using GFS data. The timing of the peak wind is not displayed, since the independent verification showed that none of the forecast methods performed significantly better than climatology. The forecaster should use the climatological timing of the peak wind (2248 UTC) as a first guess and then adjust it based on the movement of weather features.
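A hedged sketch of the error-bar technique described above, assuming approximately normal regression residuals (the 14-kt residual standard deviation is an assumed value chosen to roughly reproduce the quoted example, not a number from the report):

```python
from scipy.stats import norm

def exceedance_probs(predicted_peak_kt, resid_sd_kt, thresholds=(25, 35, 50)):
    """P(peak >= t) from the regression point forecast and residual spread."""
    return {t: float(norm.sf(t, loc=predicted_peak_kt, scale=resid_sd_kt))
            for t in thresholds}

print(exceedance_probs(35.0, 14.0))
# ~{25: 0.76, 35: 0.50, 50: 0.14} -- close to the 76%/50%/19% example above
```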
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for the proportions of both agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
Evidence for the contribution of a threshold retrieval process to semantic memory.
Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A
2017-10-01
It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely, the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (the unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated that the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
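A hedged sketch of fitting a dual-process signal detection model to observed ROC points (illustrative data; the recollect-to-reject variant adds a lure-rejection threshold, omitted here for brevity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative (false-alarm, hit) pairs across confidence criteria
fa = np.array([0.02, 0.08, 0.18, 0.35, 0.55])
hits = np.array([0.35, 0.55, 0.70, 0.82, 0.92])

def dpsd_sse(params):
    """Sum of squared errors for the dual-process model:
    P(hit) = R + (1-R)*Phi(d' - c), with criteria implied by P(fa) = Phi(-c)."""
    R, dprime = params
    crits = norm.ppf(1 - fa)
    pred_hits = R + (1 - R) * norm.sf(crits - dprime)
    return np.sum((pred_hits - hits) ** 2)

res = minimize(dpsd_sse, x0=[0.2, 1.0],
               bounds=[(0.0, 1.0), (0.0, 5.0)], method="L-BFGS-B")
R_hat, dprime_hat = res.x
print(f"recollection R ~ {R_hat:.2f}, familiarity d' ~ {dprime_hat:.2f}")
```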
Data driven modeling of plastic deformation
Versino, Daniele; Tonda, Alberto; Bronkhorst, Curt A.
2017-05-01
In this paper, the application of machine learning techniques to the development of constitutive material models is investigated. A flow stress model, for strain rates ranging from 10⁻⁴ to 10¹² (quasi-static to highly dynamic) and temperatures ranging from room temperature to over 1000 K, is obtained by beginning directly with experimental stress-strain data for copper. An incrementally objective and fully implicit time integration scheme is employed to integrate the hypo-elastic constitutive model, which is then implemented into a finite element code for evaluation. Accuracy and performance of the flow stress models derived from symbolic regression are assessed by comparison to Taylor anvil impact data. The results obtained with the free-form constitutive material model are compared to well-established strength models such as the Preston-Tonks-Wallace (PTW) model and the Mechanical Threshold Stress (MTS) model. Preliminary results show candidate free-form models comparing well with data in regions of stress-strain space with sufficient experimental data, pointing to a potential means for rapid prototyping in future model development, as well as the use of machine learning in capturing more data as a guide for more advanced model development.
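One possible tooling choice for a symbolic-regression sketch is the third-party gplearn package (an assumption; the authors' tooling is not specified, and the data below are synthetic stand-ins for flow-stress measurements):

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(3)
# Synthetic flow-stress-like data: stress as a function of strain and log strain rate
strain = rng.uniform(0.0, 0.5, 400)
log_rate = rng.uniform(-4, 12, 400)          # log10 of strain rate
stress = 90 + 250 * strain**0.4 + 6 * log_rate + rng.normal(0, 5, 400)

X = np.column_stack([strain, log_rate])
est = SymbolicRegressor(population_size=1000, generations=5,
                        function_set=("add", "sub", "mul", "div", "sqrt", "log"),
                        random_state=0)
est.fit(X, stress)
print(est._program)      # best free-form expression found
```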
Two-level structural sparsity regularization for identifying lattices and defects in noisy images
Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...
2018-03-09
Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric and condense into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for a within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom-finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atomic structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
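A hedged, simplified stand-in for the atom-finding step: scikit-learn's plain orthogonal matching pursuit with post-hoc thresholding of small coefficients (the paper's gOMP adds group structure, which sklearn does not provide; all data are synthetic).

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n_pixels, n_sites = 400, 100
D = rng.normal(size=(n_pixels, n_sites))      # dictionary: one column per candidate atom site
truth = np.zeros(n_sites)
truth[rng.choice(n_sites, 8, replace=False)] = rng.uniform(1, 3, 8)
image = D @ truth + rng.normal(0, 0.05, n_pixels)   # noisy STEM-like signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=12).fit(D, image)
coef = omp.coef_.copy()
coef[np.abs(coef) < 0.5] = 0.0                # thresholding step drops spurious picks
print(np.flatnonzero(coef))                   # recovered atom sites
print(np.flatnonzero(truth))                  # ground truth
```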
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
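A hedged sketch of the early-detection idea with a minimal local-level dynamic linear model (illustrative variances and synthetic counts; the model above adds trend, Fourier seasonality, and time-varying regression terms):

```python
import numpy as np

def local_level_envelope(y, q=5.0, r=25.0):
    """One-step-ahead Kalman predictions and 95% envelopes for a local-level DLM."""
    m, C = y[0], r                      # filtered mean and variance
    flags, upper = [], []
    for t in range(1, len(y)):
        a, R = m, C + q                 # predict: level evolves as a random walk
        f, Q = a, R + r                 # one-step-ahead forecast for y[t]
        hi = f + 1.96 * np.sqrt(Q)
        upper.append(hi)
        flags.append(y[t] > hi)         # outbreak flag: observation above envelope
        K = R / Q                       # Kalman gain; update with y[t]
        m, C = a + K * (y[t] - f), (1 - K) * R
    return np.array(upper), np.array(flags)

cases = np.array([12, 14, 11, 15, 13, 16, 14, 40, 55, 30, 18, 15], dtype=float)
upper, flags = local_level_envelope(cases)
print(np.flatnonzero(flags) + 1)        # weeks flagged as possible outbreaks
```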
Multi-scale landscape factors influencing stream water quality in the state of Oregon.
Nash, Maliha S; Heggem, Daniel T; Ebert, Donald; Wade, Timothy G; Hall, Robert K
2009-09-01
Enterococci bacteria are used to indicate the presence of human and/or animal fecal material in surface water. In addition to human influences on the quality of surface water, cattle grazing is a widespread and persistent ecological stressor in the Western United States. Cattle may affect surface water quality directly by depositing nutrients and bacteria, and indirectly by damaging stream banks or removing vegetation cover, which may lead to increased sediment loads. This study used State of Oregon surface water data to determine the likelihood of animal pathogen presence using enterococci, and analyzed the spatial distribution and relationship of biotic (enterococci) and abiotic (nitrogen and phosphorus) surface water constituents to landscape metrics and other variables (e.g., human use, percent riparian cover, natural cover, grazing). We used a grazing potential index (GPI) based on proximity to water, land ownership, and forage availability. The mean and variability of GPI, forage availability, stream density and length, and landscape metrics were related to enterococci and many forms of nitrogen and phosphorus in standard and logistic regression models. The GPI did not have a significant role in the models, but forage-related variables contributed significantly. Urban land use within the stream reach was the main driving factor where the threshold (>=35 cfu/100 ml) was exceeded, whereas agriculture was the driving force in elevating enterococci at sites where the enterococci concentration was <35 cfu/100 ml. Landscape metrics related to the amount of agriculture, wetlands, and urban land all contributed to increasing nutrients in surface water, but at different scales. The probability of a site exceeding the enterococci threshold was much lower in areas of natural land cover and much higher in areas with more urban land use within 60 m of the stream. A 1% increase in natural land cover was associated with a 12% decrease in the predicted odds of a site exceeding the threshold. In contrast, a one-unit increase in each of man-made barren and urban land use increased the likelihood of exceeding the threshold by 73% and 11%, respectively.
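As a hedged arithmetic check of the odds statements above (standard logistic-regression interpretation; the 12% and 73% figures come from the abstract, while the coefficients are back-calculated):

```python
import numpy as np

# Percent change in odds per one-unit predictor change: exp(beta) - 1
beta_natural = np.log(1 - 0.12)   # 12% decrease in odds -> beta ~ -0.128
beta_barren = np.log(1 + 0.73)    # 73% increase in odds -> beta ~  0.548

def odds_multiplier(beta, delta=1.0):
    """Multiplicative change in odds for a `delta`-unit change in the predictor."""
    return float(np.exp(beta * delta))

print(odds_multiplier(beta_natural))       # ~0.88 per +1% natural cover
print(odds_multiplier(beta_natural, 5.0))  # ~0.53 for a +5% change
```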
Colston, Josh; Saboyá, Martha
2013-05-01
We present an example of a tool for quantifying, at multiple administrative levels, the burden of soil-transmitted helminth (STH) infection, the population in need of intervention, and the resources required for its control in the region of Latin America and the Caribbean (LAC). The tool relies on published STH prevalence data along with data on the distribution of several STH transmission determinants for 12,273 sub-national administrative units in 22 LAC countries, taken from national censuses. Data on these determinants were aggregated into a single risk index based on a conceptual framework, and the statistical significance of the association between this index and the STH prevalence indicators was tested using simple linear regression. The coefficient and constant from this regression were then used in a regression formula applied to the risk index values of all administrative units in order to model the estimated prevalence of each STH species. We then combined these estimates with population data, treatment thresholds, and unit cost data to calculate total control costs. The model predicts an annual cost of around US$ 1.7 million for the procurement of preventive chemotherapy and a total cost of US$ 47 million for implementing a comprehensive STH control programme targeting an estimated 78.7 million school-aged children according to the WHO guidelines throughout the entirety of the countries included in the study. Considerable savings could potentially be made by embedding STH control interventions within existing health programmes and systems. A study of this scope is prone to many limitations, which restrict the interpretation of the results and the uses to which its findings may be put. We discuss several of these limitations.
Predictive ability of a comprehensive incremental test in mountain bike marathon.
Ahrend, Marc-Daniel; Schneeweiss, Patrick; Martus, Peter; Niess, Andreas M; Krauss, Inga
2018-01-01
Traditional performance tests in mountain bike marathon (XCM) primarily quantify aerobic metabolism and may not describe the relevant capacities in XCM. We aimed to validate a comprehensive test protocol quantifying its intermittent demands. Forty-nine athletes (38.8±9.1 years; 38 male; 11 female) performed a laboratory performance test, including an incremental test, to determine individual anaerobic threshold (IAT), peak power output (PPO) and three maximal efforts (10 s all-out sprint, 1 min maximal effort and 5 min maximal effort). Within 2 weeks, the athletes participated in one of three XCM races (n=15, n=9 and n=25). Correlations between test variables and race times were calculated separately. In addition, multiple regression models of the predictive value of laboratory outcomes were calculated for race 3 and across all races (z-transformed data). All variables were correlated with race times 1, 2 and 3: 10 s all-out sprint (r=-0.72; r=-0.59; r=-0.61), 1 min maximal effort (r=-0.85; r=-0.84; r=-0.82), 5 min maximal effort (r=-0.57; r=-0.85; r=-0.76), PPO (r=-0.77; r=-0.73; r=-0.76) and IAT (r=-0.71; r=-0.67; r=-0.68). The best-fitting multiple regression models for race 3 (r²=0.868) and across all races (r²=0.757) comprised 1 min maximal effort, IAT and body weight. Aerobic and intermittent variables correlated least strongly with race times. Their use in a multiple regression model confirmed additional explanatory power to predict XCM performance. These findings underline the usefulness of the comprehensive incremental test to predict performance in that sport more precisely.
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogeneous conditions on private forest lands, snag density by decay class was predicted with higher accuracy on private forest lands than on public lands, while the presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags there. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for the unequal sizes of the presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Muñoz–Negrete, Francisco J.; Oblanca, Noelia; Rebolleda, Gema
2018-01-01
Purpose To study the structure-function relationship in glaucoma and healthy patients assessed with Spectralis OCT and Humphrey perimetry using new statistical approaches. Materials and Methods Eighty-five eyes were prospectively selected and divided into 2 groups: glaucoma (44) and healthy patients (41). Three different statistical approaches were carried out: (1) factor analysis of the threshold sensitivities (dB) (automated perimetry) and the macular thickness (μm) (Spectralis OCT), subsequently applying Pearson's correlation to the obtained regions, (2) nonparametric regression analysis relating the values in each pair of regions that showed significant correlation, and (3) nonparametric spatial regressions using three models designed for the purpose of this study. Results In the glaucoma group, a map relating structural and functional damage was drawn. The strongest correlation with visual fields was observed in the peripheral nasal region of both the superior and inferior hemigrids (r = 0.602 and r = 0.458, respectively). The estimated functions obtained with the nonparametric regressions provided the mean sensitivity that corresponds to each given macular thickness. These functions allowed for accurate characterization of the structure-function relationship. Conclusions Both the maps and the point-to-point functions obtained linking structural and functional damage contribute to a better understanding of this relationship and may help in the future to improve glaucoma diagnosis. PMID:29850196
ERIC Educational Resources Information Center
Bröder, Arndt; Schütz, Julia
2009-01-01
Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…
Wang, Jie; Shen, Changwei; Liu, Na; Jin, Xin; Fan, Xueshan; Dong, Caixia; Xu, Yangchun
2017-03-08
Non-destructive and timely determination of leaf nitrogen (N) concentration is urgently needed for N management in pear orchards. A two-year field experiment was conducted in a commercial pear orchard with five N application rates: 0 (N0), 165 (N1), 330 (N2), 660 (N3), and 990 (N4) kg·N·ha⁻¹. Mid-portion leaves on the current year's shoots were first measured spectrally and then analyzed for N concentration in the laboratory at 50 and 80 days after full bloom (DAB). Three methods of in-field spectral measurement (25° bare fibre under solar conditions, black background attached to a plant probe, and white background attached to a plant probe) were compared. We also investigated the modelling performance of four chemometric techniques (principal components regression, PCR; partial least squares regression, PLSR; stepwise multiple linear regression, SMLR; and back propagation neural network, BPNN) and three vegetation indices (difference spectral index, normalized difference spectral index, and ratio spectral index). Due to the low correlation of reflectance obtained by the 25° field-of-view method, all of the modelling was performed on the two spectral datasets acquired with a plant probe. Results showed that the best modelling and prediction accuracy were obtained with PLSR on spectra measured against a black background. The randomly separated calibration (n = 1000) and validation (n = 420) subsets of this model resulted in high R² values of 0.86 and 0.85, respectively, as well as a low mean relative error (<6%). Furthermore, a higher coefficient of determination between leaf N concentration and fruit yield was found for the 50 DAB samplings in both 2015 (R² = 0.77) and 2014 (R² = 0.59). We therefore suggest determining leaf N concentration at 50 DAB by visible/near-infrared spectroscopy, with a threshold of 24-27 g/kg.
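A hedged sketch of the best-performing approach (PLSR on probe spectra) using scikit-learn, with synthetic spectra standing in for the measured reflectance (the component count and sample sizes are illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_bands = 1420, 500            # mimic calibration + validation sizes
spectra = rng.normal(size=(n_samples, n_bands))
true_w = np.zeros(n_bands)
true_w[100:120] = 0.4                     # a few informative bands (assumed)
leaf_n = 25 + spectra @ true_w + rng.normal(0, 0.5, n_samples)  # g/kg scale

X_cal, X_val, y_cal, y_val = train_test_split(spectra, leaf_n,
                                              train_size=1000, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
print("validation R^2:", round(pls.score(X_val, y_val), 2))
```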
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia
2017-04-01
Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and several approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower-limit and upper-limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides occurring in Portugal from 1865 to 2010 that caused fatalities, injuries, missing people, or evacuated and homeless people. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the DISASTER database was recently updated to include landslides that did not cause any human damage, which were also considered in this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of the rainfall data, with records starting in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity and duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity, duration) with the highest return period was considered the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included; this criterion reduces the likelihood of including events triggered by factors other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Considering that this threshold allows the existence of false negatives (i.e., events below the threshold), lower-limit and upper-limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positives (upper limit). The zone between the lower-limit and upper-limit rainfall thresholds was analysed using a probabilistic approach, quantifying the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
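A hedged sketch of the return-period step (scipy's Gumbel fit on synthetic annual maxima; the 3-year screening criterion is applied as in the abstract, all numbers illustrative):

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(6)
# Synthetic annual maxima of, e.g., 15-day cumulative rainfall (mm), 146 years
annual_max = gumbel_r.rvs(loc=120, scale=35, size=146, random_state=rng)
loc, scale = gumbel_r.fit(annual_max)

def return_period(x_mm):
    """T = 1 / P(exceedance) under the fitted Gumbel distribution."""
    return 1.0 / gumbel_r.sf(x_mm, loc=loc, scale=scale)

event = 190.0                      # cumulative rainfall preceding a landslide event
T = return_period(event)
print(round(T, 1), "years; retained:", T > 3)   # keep only events with T > 3 yr
```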
A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times
NASA Astrophysics Data System (ADS)
Edmund, Jens M.; Kjer, Hans M.; Van Leemput, Koen; Hansen, Rasmus H.; Andersen, Jon AL; Andreasen, Daniel
2014-12-01
Radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, so-called MRI-only RT, would remove the systematic registration error between MR and computed tomography (CT) and provide co-registered MRI for assessment of treatment response and adaptive RT. Electron densities, however, need to be assigned to the MRI images for dose calculation and for patient setup based on digitally reconstructed radiographs (DRRs). Here, we investigate the geometric and dosimetric performance of a number of popular voxel-based methods for generating a so-called pseudo CT (pCT). Five patients receiving cranial irradiation, each with a co-registered MRI and CT scan, were included. An ultra-short echo time MRI sequence for bone visualization was used. Six methods were investigated, two for each of three popular types of voxel-based approaches: (1) threshold-based segmentation, (2) Bayesian segmentation and (3) statistical regression. Approach 1 used bulk density assignment of MRI voxels into air, soft tissue and bone based on logical masks and the transverse relaxation time T2 of the bone. Approach 2 used similar bulk density assignments with Bayesian statistics, including or excluding additional spatial information. Approach 3 used a statistical regression correlating MRI voxels with their corresponding CT voxels. A similar photon and proton treatment plan was generated for a target positioned between the nasal cavity and the brainstem for all patients. The agreement of each method's pCT with the CT was quantified and compared with the other methods, geometrically and dosimetrically, using a number of previously reported metrics as well as some novel ones. The best geometrical agreement with CT was obtained with the statistical regression methods, which performed significantly better than the threshold-based and Bayesian segmentation methods (excluding spatial information). All methods agreed significantly better with CT than a reference water MRI comparison. The mean dosimetric deviation for photons and protons compared to the CT was about 2% and highest in the gradient dose region of the brainstem. Both the threshold-based method and the statistical regression methods showed the highest dosimetric agreement. Generation of pCTs using statistical regression seems to be the most promising candidate for MRI-only RT of the brain. Furthermore, the total amount of different tissues needs to be taken into account for dosimetric considerations regardless of their correct geometrical position.
Shen, Jing; Hu, Yanyun; Liu, Fang; Zeng, Hui; Li, Lianxi; Zhao, Jun; Zhao, Jungong; Zheng, Taishan; Lu, Huijuan; Lu, Fengdi; Bao, Yuqian; Jia, Weiping
2013-10-01
We investigated the relationship between vibration perception threshold and diabetic retinopathy and verified the screening value of vibration perception threshold for severe diabetic retinopathy. A total of 955 patients with type 2 diabetes were recruited and divided into three groups according to their fundus oculi photography results: no diabetic retinopathy (n = 654, 68.48%), non-sight-threatening diabetic retinopathy (n = 189, 19.79%) and sight-threatening diabetic retinopathy (n = 112, 11.73%). Their clinical and biochemical characteristics, vibration perception thresholds and diabetic retinopathy grades were determined and compared. There were significant differences in diabetes duration and blood glucose levels among the three groups (all p < 0.05). The values of vibration perception threshold increased with the rising severity of retinopathy, and the vibration perception threshold level of the sight-threatening diabetic retinopathy group was significantly higher than those of both the non-sight-threatening diabetic retinopathy and no diabetic retinopathy groups (both p < 0.01). The prevalence of sight-threatening diabetic retinopathy in the vibration perception threshold >25 V group was significantly higher than that in the 16-24 V group (p < 0.01). The severity of diabetic retinopathy was positively associated with diabetes duration, blood glucose indexes and vibration perception threshold (all p < 0.01). Multiple stepwise regression analysis showed that glycosylated haemoglobin (β = 0.385, p = 0.000), diabetes duration (β = 0.275, p = 0.000) and vibration perception threshold (β = 0.180, p = 0.015) were independent risk factors for diabetic retinopathy. Receiver operating characteristic analysis further revealed that a vibration perception threshold higher than 18 V was the optimal cut point for indicating a high risk of sight-threatening diabetic retinopathy (odds ratio = 4.20, 95% confidence interval = 2.67-6.59). There was a close association between vibration perception threshold and the severity of diabetic retinopathy. Vibration perception threshold is a potential screening method for diabetic retinopathy, and its optimal cut-off for indicating a high risk of sight-threatening retinopathy was 18 V. Copyright © 2013 John Wiley & Sons, Ltd.
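A hedged sketch of deriving such an optimal cut-off with scikit-learn's ROC utilities (synthetic data; the 18 V cut point is the paper's reported result, not something this code reproduces):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
# Synthetic VPT values (volts): higher in eyes with sight-threatening retinopathy
vpt_other = rng.normal(12, 4, 843)
vpt_str = rng.normal(22, 6, 112)
y = np.r_[np.zeros_like(vpt_other), np.ones_like(vpt_str)]
vpt = np.r_[vpt_other, vpt_str]

fpr, tpr, thresholds = roc_curve(y, vpt)
best = np.argmax(tpr - fpr)               # Youden's J statistic
print("optimal cut-off ~", round(float(thresholds[best]), 1), "V")
```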
Carisse, Odile; McNealis, Vanessa
2018-01-01
Black seed disease (BSD) of strawberry is a sporadic disease caused by Mycosphaerella fragariae. Because little is known about potential crop losses or the weather conditions conducive to disease development, fungicides are generally not applied or are applied based on a preset schedule. Data collected from 2000 to 2011 representing 50 farm-years (total of 186 strawberry fields) were used to determine potential crop losses and to study the influence of weather on disease occurrence and development. First, logistic regression was used to model the relationship between occurrence of BSD and weather variables. Second, linear and nonlinear regressions were used to model the number of black seed per berry (severity) and the percentage of diseased berries (incidence). Of the 186 fields monitored, 78 showed black seed symptoms, and the number of black seed per berry ranged from 1 to 10, whereas the percentage of diseased berries ranged from 3 to 32%. The most influential weather variable was total rainfall (in millimeters) in May, with a threshold of 103 mm of rain (absence of BSD < 103 mm < presence of BSD). Similarly, nonlinear models with the total rainfall in May accurately predicted both disease severity and incidence (r = 0.94 and 0.97, respectively). Considering that management actions such as fungicide application are not needed every year in every field, these models could be used to identify fields that are at risk of BSD.
Temporal discrimination threshold with healthy aging.
Ramos, Vesper Fe Marie Llaneza; Esquenazi, Alina; Villegas, Monica Anne Faye; Wu, Tianxia; Hallett, Mark
2016-07-01
The temporal discrimination threshold (TDT) is the shortest interstimulus interval at which a subject can perceive successive stimuli as separate. To investigate the effects of aging on TDT, we studied tactile TDT using the method of limits, with stimuli at 120% of the sensory threshold, in each hand of 100 healthy volunteers, equally divided between men and women, across 10 age groups spanning 18 to 79 years. Linear regression analysis showed that age was significantly related to the left-hand mean, the right-hand mean, and the mean of both hands, with R² values of 0.08, 0.164, and 0.132, respectively. Reliability analysis indicated that the three measures had fair-to-good reliability (intraclass correlation coefficient: 0.4-0.8). We conclude that TDT is affected by age and has fair-to-good reproducibility using our technique. Published by Elsevier Inc.
Groundwater Controls on Vegetation Composition and Patterning in Mountain Meadows
NASA Astrophysics Data System (ADS)
Loheide, S. P.; Lowry, C.; Moore, C. E.; Lundquist, J. D.
2010-12-01
Mountain meadows are groundwater-dependent ecosystems that are hotspots of biodiversity and productivity in the Sierra Nevada of California. Meadow vegetation relies on shallow groundwater during the region's dry summer growing season. Vegetation composition in this environment is influenced both by 1) oxygen stress, which occurs when portions of the root zone are saturated and anaerobic conditions limit root respiration, and 2) water stress, which occurs when the water table drops and water-limited conditions develop in the root zone. A watershed model that explicitly accounts for snowmelt processes was linked to a fine-resolution groundwater flow model of Tuolumne Meadows in Yosemite National Park, CA to simulate spatially distributed water table dynamics. This linked hydrologic model was calibrated to observations from a monitoring well network for 2006-2008 and validated using data from 2009. A vegetation survey was also conducted at the site, in which the three dominant species were identified at more than 200 plots distributed across the meadow. Nonparametric multiplicative regression was performed to create and select the best models for predicting vegetation dominance from the simulated hydrologic regime. The hydrologic niche of the three vegetation types representing wet, moist, and dry meadow vegetation communities was best described using both 1) a sum exceedance value, calculated as the integral of water table position above a threshold of oxygen stress, and 2) a sum deceedance value, calculated as the integral of water table position below a threshold of water stress. This linked hydrologic and vegetative modeling framework advances our ability to predict the propagation of human-induced climatic and land-use/-cover changes through the hydrologic system to the ecosystem.
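A hedged sketch of the two niche metrics (hypothetical stress thresholds and a synthetic water-table series; units are illustrative):

```python
import numpy as np

def sum_exceedance(wt, threshold, dt=1.0):
    """Integral of water-table position above an oxygen-stress threshold (m*day)."""
    return float(np.sum(np.clip(wt - threshold, 0, None)) * dt)

def sum_deceedance(wt, threshold, dt=1.0):
    """Integral of water-table position below a water-stress threshold (m*day)."""
    return float(np.sum(np.clip(threshold - wt, 0, None)) * dt)

days = np.arange(120)                                   # growing season, daily steps
wt = -0.2 - 0.008 * days + 0.15 * np.sin(days / 9.0)    # water-table elevation (m, negative = below surface)
oxygen_thr, water_thr = -0.3, -0.9                      # hypothetical stress thresholds
print(sum_exceedance(wt, oxygen_thr), sum_deceedance(wt, water_thr))
```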
Epidemic spreading with activity-driven awareness diffusion on multiplex network.
Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming
2016-04-01
There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is described by a susceptible-infected-susceptible model, there exists a critical value at which the dynamical process on the awareness layer can control the onset of epidemics; when it is a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer follows a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.
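A hedged sketch of the baseline quantity the Markov chain approach extends: for a plain SIS process on the static contagion layer alone, the epidemic threshold is beta_c = mu / Lambda_max, where Lambda_max is the largest eigenvalue of the adjacency matrix (a standard result; the awareness coupling in the paper shifts this value).

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)   # Erdos-Renyi contagion layer
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops

lam_max = float(np.max(np.linalg.eigvalsh(A)))  # largest adjacency eigenvalue
mu = 0.2                                        # recovery rate
beta_c = mu / lam_max                           # SIS epidemic threshold (no awareness)
print(round(lam_max, 2), round(beta_c, 4))
```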
Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J
2003-11-01
The prognostic value of cathepsin D has recently been recognized but, as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need to keep data continuous instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk, and obtaining a cut-off value, and (2) optimizing the adjustment for a complex covariate like age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, above the threshold of 37.5 pmol mg(-1). The prognostic impact of this marker could thus be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
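The cut-off from "the maximum likelihood estimate of a threshold model" can be sketched by profiling the Cox partial likelihood over candidate cut-offs; this is a generic illustration with synthetic data and hypothetical column names, not the authors' exact procedure:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def profile_threshold(df, marker, time_col, event_col, grid):
    """Pick the cut-off for `marker` that maximises the Cox partial likelihood."""
    best_cut, best_ll = None, -np.inf
    for cut in grid:
        d = df[[time_col, event_col]].copy()
        d["above"] = (df[marker] > cut).astype(int)
        ll = CoxPHFitter().fit(d, time_col, event_col).log_likelihood_
        if ll > best_ll:
            best_cut, best_ll = cut, ll
    return best_cut, best_ll

# Usage on synthetic data (column names are placeholders):
rng = np.random.default_rng(1)
df = pd.DataFrame({"cathepsin_d": rng.gamma(2.0, 20.0, 500)})
risk = 1.0 + 0.7 * (df["cathepsin_d"] > 37.5)        # higher hazard above the cut
df["time"] = rng.exponential(10.0 / risk)
df["event"] = (rng.random(500) < 0.8).astype(int)
grid = np.quantile(df["cathepsin_d"], np.linspace(0.1, 0.9, 33))
print(profile_threshold(df, "cathepsin_d", "time", "event", grid))
```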
Music, Mark; Finderle, Zarko; Cankar, Ksenija
2011-05-01
The aim of the present study was to investigate the effect of quantitatively measured cold perception (CP) thresholds on microcirculatory response to local cooling as measured by direct and indirect response of laser-Doppler (LD) flux during local cooling at different temperatures. The CP thresholds were measured in 18 healthy males using the Marstock method (thermode placed on the thenar). The direct (at the cooling site) and indirect (on contralateral hand) LD flux responses were recorded during immersion of the hand in a water bath at 20°C, 15°C, and 10°C. The cold perception threshold correlated (linear regression analysis, Pearson correlation) with the indirect LD flux response at cooling temperatures 20°C (r=0.782, p<0.01) and 15°C (r=0.605, p<0.01). In contrast, there was no correlation between the CP threshold and the indirect LD flux response during cooling in water at 10°C. The results demonstrate that during local cooling, depending on the cooling temperature used, cold perception threshold influences indirect LD flux response. Copyright © 2011 Elsevier Inc. All rights reserved.
Real-time anomaly detection for very short-term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Jian; Hong, Tao; Yue, Meng
2018-01-06
Although the recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulties in collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. The paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.
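A minimal sketch of the two components described (a dynamic regression forecast plus an adaptive residual-based threshold), assuming a simple autoregression on lagged loads; the paper's actual model and threshold rule are not specified here:

```python
import numpy as np

def detect_anomalies(load, n_lags=4, window=168, k=3.0):
    """Flag observations whose dynamic-regression residual exceeds an
    adaptive threshold of k rolling standard deviations.

    load: 1-D array of recent load values (e.g. hourly MW)
    """
    # Dynamic regression: predict load from its previous n_lags values.
    X = np.column_stack([load[i:len(load) - n_lags + i] for i in range(n_lags)])
    X = np.column_stack([np.ones(len(X)), X])
    y = load[n_lags:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Adaptive threshold: k * rolling std of the recent residuals.
    flags = np.zeros(len(resid), dtype=bool)
    for t in range(window, len(resid)):
        sigma = resid[t - window:t].std()
        flags[t] = abs(resid[t]) > k * sigma
    return flags
```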
Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation
NASA Technical Reports Server (NTRS)
Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.
2014-01-01
The frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained for the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and on the climate model configuration. A large discrepancy between the climate model and ice sheet model grids in some simulations leads to a strong sensitivity to the lapse rate parameter.
NASA Astrophysics Data System (ADS)
Van Stan, John T.; Gay, Trent E.; Lewis, Elliott S.
2016-02-01
Forest canopies alter rainfall reaching the surface by redistributing it as throughfall. Throughfall supplies water and nutrients to a variety of ecohydrological components (soil microbial communities, stream water discharge/chemistry, and stormflow pathways) and is controlled by canopy structural interactions with meteorological conditions across temporal scales. This work introduces and applies multiple correspondence analyses (MCAs) to a range of meteorological thresholds (median intensity, median absolute deviation (MAD) of intensity, median wind-driven droplet inclination angle, and MAD of wind speed) for an example throughfall problem: identification of interacting storm conditions corresponding to temporal concentration in relative throughfall beyond the median observation (⩾73% of rain). MCA results from the example show that equalling or exceeding rain intensity thresholds (median and MAD) corresponded with temporal concentration of relative throughfall across all storms. Under these intensity conditions, two wind mechanisms produced significant correspondences: (1) high, steady wind-driven droplet inclination angles increased surface wetting; and (2) sporadic winds shook entrained droplets from surfaces. A discussion is provided showing that these example MCA findings agree well with previous work relying on more historically common methods (e.g., multiple regression and analytical models). Meteorological threshold correspondences to temporal concentration of relative throughfall at our site may be a function of heavy Tillandsia usneoides coverage. Applications of MCA within other forests may provide useful insights into how temporal throughfall dynamics are affected for drainage pathways dependent on different structures (leaves, twigs, branches, etc.).
Experimental evidence of a pathogen invasion threshold
Krkošek, Martin
2018-01-01
Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts per litre) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
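For orientation, in the simplest density-dependent SIR setting (a simplification of the environmental-transmission model actually used in the paper), the invasion threshold follows directly from the basic reproduction number:

```latex
R_0 \;=\; \frac{\beta N}{\gamma} \;>\; 1
\quad\Longleftrightarrow\quad
N \;>\; N^{*} \;=\; \frac{\gamma}{\beta}
```

where β is the transmission coefficient, γ the rate of loss of infected hosts, and N the host density; the pathogen can invade only when host density exceeds the critical density N*.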
Mastitis of periparturient Holstein cattle: a phenotypic and genetic study.
Detilleux, J C; Kehrli, M E; Freeman, A E; Fox, L K; Kelley, D H
1995-10-01
Environmental and genetic factors affecting somatic cell scores, clinical mastitis, and intramammary infection (IMI) by minor and major pathogens were studied in 137 periparturient Holstein cows selected for milk production. Environmental effects were estimated by generalized least squares and logistic regression. Genetic parameters were estimated with BLUP and threshold animal models. Lactation number affected the number of quarters with clinical mastitis and the number of quarters infected with minor pathogens. Days in milk (DIM) affected somatic cell score and the number of quarters infected with major pathogens. Heritabilities for all mastitis indicators averaged 10%, but differences occurred among the indicators. Correlations between breeding values for the number of quarters infected with minor pathogens and the number infected with major pathogens were antagonistic and statistically significant.
The threshold of a stochastic delayed SIR epidemic model with vaccination
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing
2016-11-01
In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
Dentists' perspectives on caries-related treatment decisions.
Gomez, J; Ellwood, R P; Martignon, S; Pretty, I A
2014-06-01
To assess the impact of patient risk status on Colombian dentists' caries-related treatment decisions for early to intermediate caries lesions (ICDAS codes 2 to 4). A web-based questionnaire assessed dentists' views on the management of early/intermediate lesions. The questionnaire included questions on demographic characteristics, five clinical scenarios with randomised levels of caries risk, and two questions on different clinical and radiographic sets of images with different thresholds of caries. Questionnaires were completed by 439 dentists. For the two scenarios describing occlusal lesions of ICDAS code 2, dentists chose to provide a preventive option in 63% and 60% of the cases. For the approximal lesion of ICDAS code 2, 81% of the dentists chose to restore. The main findings of the binary logistic regression analysis for the clinical scenarios suggest that, for ICDAS code 2 occlusal lesions, the odds of a high caries risk patient receiving restorations are higher than for a low caries risk patient. For the questions describing different clinical thresholds of caries, most dentists would restore at ICDAS code 2 (55%), and for the question showing different radiographic threshold images, 65% of dentists would intervene operatively at the inner half of enamel. No significant differences with respect to risk were found for these questions with the logistic regression. The results of this study indicate that Colombian dentists have not yet fully adopted non-invasive treatment for early caries lesions.
Short communication: Effect of heat stress on nonreturn rate of Italian Holstein cows.
Biffani, S; Bernabucci, U; Vitali, A; Lacetera, N; Nardone, A
2016-07-01
The data set consisted of 1,016,856 inseminations of 191,012 first, second, and third parity Holstein cows from 484 farms. Data were collected from 2001 through 2007 and included meteorological data from 35 weather stations. Nonreturn rate at 56 d after first insemination (NR56) was considered. A logit model was used to estimate the effect of the temperature-humidity index (THI) on reproduction across parities. Then, least squares means were used to detect the THI breakpoints using a 2-phase linear regression procedure. Finally, a multiple-trait threshold model was used to estimate variance components for NR56 in first and second parity cows. A dummy regression variable (t) was used to estimate the decline in NR56 due to heat stress. NR56, both for first and second parity cows, was significantly (unfavorably) affected by THI from 4 d before to 5 d after the insemination date. Additive genetic variances for NR56 increased from first to second parity, both for the general and the heat stress effect. Genetic correlations between general and heat stress effects were -0.31 for first parity and -0.45 for second parity cows. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted at different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was good agreement between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase along with climate change and adaptation is ongoing, more research is required to understand if and when the thresholds should be modified.
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, in which both eyes observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as the disparity value is varied. This indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We also present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
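A Weber-type model of the kind fitted above can be written as ΔB = w(B + B0); a sketch with hypothetical threshold data (the study's actual values are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def weber(ref_blur, w, b0):
    """Weber-type model: discrimination threshold grows linearly with
    reference blur; b0 absorbs the intrinsic blur of the visual system."""
    return w * (ref_blur + b0)

# Hypothetical measured thresholds (arcmin) at several reference blur levels.
ref = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
thr = np.array([0.28, 0.33, 0.49, 0.85, 1.60])

(w, b0), _ = curve_fit(weber, ref, thr, p0=(0.2, 1.0))
print(f"Weber fraction w = {w:.3f}, intrinsic blur b0 = {b0:.2f}")
```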
The Electromyographic Threshold in Girls and Women.
Long, Devon; Dotan, Raffy; Pitt, Brynlynn; McKinlay, Brandon; O'Brien, Thomas D; Tokuno, Craig; Falk, Bareket
2017-02-01
The electromyographic threshold (EMGTh) is thought to reflect increased recruitment of high-threshold/type-II motor units (MUs) and has been shown to be higher in boys than in men. Women differ from men in muscular function. The aim was to establish whether females' EMGTh, and the girls-women difference, differ from those observed in males. Nineteen women (22.9 ± 3.3 yrs) and 20 girls (10.3 ± 1.1 yrs) had surface EMG recorded from the right and left vastus lateralis muscles during ramped cycle-ergometry to exhaustion. EMG root-mean-squares were averaged per pedal revolution. The EMGTh was determined as the least residual sum of squares for any division of the data into two regression lines, provided the trace rose ≥ 3 SD above its regression line. EMGTh was expressed as % of final power output (%Pmax) and % of power at VO2pk (%PVO2pk). The EMGTh was detected in 13 (68%) of the women, but only 9 (45%) of the girls (p < .005), and tended to be higher in the girls (%Pmax: 88.6 ± 7.0 vs. 83.0 ± 6.9%, p = .080; %PVO2pk: 101.6 ± 17.6 vs. 90.6 ± 7.8%, p = .063). When the EMGTh was undetected, it was assumed to occur at 100%Pmax or beyond. Consequently, EMGTh values turned out significantly higher in girls than in women (94.8 ± 7.4 vs. 88.4 ± 9.9 %Pmax, p = .026; and 103.2 ± 11.7 vs. 95.2 ± 9.9 %PVO2pk, p = .028). During progressive exercise, girls appear to rely less on higher-threshold/type-II MUs than do women, suggesting a differential muscle activation strategy.
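The two-regression-line division with least residual sum of squares translates directly into code; a sketch (the ≥ 3 SD rise criterion is omitted for brevity):

```python
import numpy as np

def emg_threshold(power, emg):
    """Two-segment regression: try every split point and keep the one with
    the least total residual sum of squares, as in EMG-threshold detection."""
    def rss(x, y):
        coef = np.polyfit(x, y, 1)
        return float(np.sum((y - np.polyval(coef, x)) ** 2))

    best_i, best = None, np.inf
    for i in range(3, len(power) - 3):        # at least 3 points per segment
        total = rss(power[:i], emg[:i]) + rss(power[i:], emg[i:])
        if total < best:
            best_i, best = i, total
    return power[best_i]                      # power output at the EMG threshold
```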
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
NASA Astrophysics Data System (ADS)
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
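A sketch of the peaks-over-threshold step with SciPy, using a synthetic series and a 95th-percentile threshold (both illustrative; the study estimates parameters in a Bayesian framework rather than by maximum likelihood):

```python
import numpy as np
from scipy.stats import genpareto

def fit_gpd(precip, threshold):
    """Fit a GPD to peaks-over-threshold excesses and return shape/scale
    plus the exceedance rate (exceedances per observation)."""
    excess = precip[precip > threshold] - threshold
    shape, _, scale = genpareto.fit(excess, floc=0)   # location pinned at 0
    return shape, scale, len(excess) / len(precip)

# Hypothetical hourly non-zero precipitation; threshold at the 95th percentile.
rng = np.random.default_rng(2)
precip = rng.gamma(0.4, 3.0, 50_000)
u = np.quantile(precip, 0.95)
print(fit_gpd(precip, u))
```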
NASA Astrophysics Data System (ADS)
Wu, Qiaoli; Song, Jinling; Wang, Jindi; Xiao, Zhiqiang
2014-11-01
Leaf Area Index (LAI) is an important biophysical variable for vegetation. Compared with vegetation indexes like NDVI and EVI, LAI is more capable of monitoring forest canopy growth quantitatively. GLASS LAI is a spatially complete and temporally continuous product derived from AVHRR and MODIS reflectance data. In this paper, we present an approach to building dynamic LAI growth models for young and mature Larix gmelinii forest in north Daxing'anling in Inner Mongolia, China, using the Dynamic Harmonic Regression (DHR) model and the Double Logistic (D-L) model respectively, based on time series extracted from multi-temporal GLASS LAI data. We also used the dynamic threshold method to extract the key phenological phases of the Larix gmelinii forest from the simulated time series. Then, through analysis of the relationship between phenological phases and meteorological factors, we found that the annual peak LAI and the annual maximum temperature are well correlated. The results indicate this forest canopy growth dynamic model to be very effective in predicting forest canopy LAI growth and extracting forest canopy growth dynamics.
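A common double-logistic (D-L) parameterisation and a dynamic-threshold phenology extraction can be sketched as follows; the functional form and the 20%-of-amplitude threshold are generic assumptions, not necessarily those used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(doy, base, amp, sos, rate_up, eos, rate_down):
    """Double-logistic seasonal curve: green-up and senescence transitions
    centred on day-of-year sos and eos."""
    return base + amp * (1.0 / (1.0 + np.exp(-rate_up * (doy - sos)))
                         - 1.0 / (1.0 + np.exp(-rate_down * (doy - eos))))

# Fit to a hypothetical annual GLASS-like LAI series (8-day composites).
doy = np.arange(1, 366, 8, dtype=float)
lai = double_logistic(doy, 0.4, 3.2, 130, 0.09, 280, 0.07)
lai += np.random.default_rng(3).normal(0, 0.15, doy.size)
params, _ = curve_fit(double_logistic, doy, lai,
                      p0=(0.5, 3.0, 120, 0.1, 270, 0.1), maxfev=20_000)

# Dynamic-threshold phenology: start of season where the fitted curve first
# exceeds base + 20% of amplitude (threshold fraction is illustrative).
fit = double_logistic(doy, *params)
sos_doy = doy[np.argmax(fit > params[0] + 0.2 * params[1])]
print(params, sos_doy)
```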
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
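A minimal numeric sketch of the observer described above, assuming a baseline event rate plus a sound-driven rate proportional to the envelope raised to the exponent 3, with detection once a criterion number of events is reached (all parameter names and values are illustrative):

```python
import numpy as np
from math import factorial

def detection_probability(envelope, dt, r0, k, criterion, exponent=3):
    """Poisson-observer sketch: sensory events arise at rate
    r0 + k * envelope(t)**exponent; the tone is 'detected' when at least
    `criterion` events occur during the observation interval.

    envelope: bandpass-filtered temporal amplitude envelope (samples)
    dt:       sample spacing in seconds
    """
    rate = r0 + k * np.asarray(envelope) ** exponent
    lam = float(np.sum(rate) * dt)            # expected number of events
    p_fewer = sum(np.exp(-lam) * lam**i / factorial(i) for i in range(criterion))
    return 1.0 - p_fewer
```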
New flux based dose-response relationships for ozone for European forest tree species.
Büker, P; Feng, Z; Uddling, J; Briolat, A; Alonso, R; Braun, S; Elvira, S; Gerosa, G; Karlsson, P E; Le Thiec, D; Marzuoli, R; Mills, G; Oksanen, E; Wieser, G; Wilkinson, M; Emberson, L D
2015-11-01
To derive O3 dose-response relationships (DRR) for five European forest tree species and for broadleaf deciduous and needleleaf tree plant functional types (PFTs), phytotoxic O3 doses (PODy) were related to biomass reductions. PODy was calculated using a stomatal flux model with a range of cut-off thresholds (y) indicative of varying detoxification capacities. Linear regression analysis showed that DRR for PFTs and for individual tree species differed in their robustness. A simplified parameterisation of the flux model was tested and showed that, for most non-Mediterranean tree species, the simplified model led to similarly robust DRR compared with a species- and climate-region-specific parameterisation. Experimentally induced soil water stress was not found to substantially reduce PODy, mainly owing to the short duration of the soil water stress periods. This study validates the stomatal O3 flux concept and represents a step forward in predicting O3 damage to forests in a spatially and temporally varying climate. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
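PODy is conventionally the stomatal O3 flux accumulated above the cut-off y; a sketch under that convention (variable names are illustrative):

```python
import numpy as np

def pod_y(stomatal_flux, y, dt_hours=1.0):
    """Phytotoxic Ozone Dose above threshold y (POD_y).

    stomatal_flux: hourly stomatal O3 flux, nmol O3 m-2 s-1 (projected leaf area)
    y:             detoxification cut-off threshold, same units
    Returns the accumulated dose in mmol O3 m-2.
    """
    excess = np.maximum(np.asarray(stomatal_flux) - y, 0.0)  # flux above cut-off
    return float(excess.sum() * dt_hours * 3600.0 * 1e-6)    # nmol*s -> mmol
```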
Djulbegovic, Benjamin; van den Ende, Jef; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Pauker, Stephen G
2015-05-01
The threshold model represents an important advance in the field of medical decision-making. It is a linchpin between evidence (which exists on a continuum of credibility) and decision-making (which is a categorical exercise - we decide to act or not act). The threshold concept is closely related to the question of rational decision-making. When should the physician act, that is, order a diagnostic test or prescribe treatment? The threshold model embodies the decision-theoretic rationality that says the most rational decision is to prescribe treatment when the expected treatment benefit outweighs its expected harms. However, the well-documented large variation in the way physicians order diagnostic tests or decide to administer treatments is consistent with the notion that physicians' individual action thresholds vary. We present a narrative review summarizing the existing literature on physicians' use of a threshold strategy for decision-making. We found that the observed variation in decision action thresholds is partially due to the way people integrate benefits and harms. That is, explanation of variation in clinical practice can be reduced to a consideration of thresholds. Limited evidence suggests that non-expected-utility threshold (non-EUT) models, such as regret-based and dual-processing models, may explain current medical practice better. However, inclusion of costs and recognition of risk attitudes towards uncertain treatment effects and comorbidities may improve the explanatory and predictive value of the EUT-based threshold models. The decision when to act is closely related to the question of rational choice. We conclude that the medical community has not yet fully defined criteria for rational clinical decision-making. The traditional notion of rationality rooted in EUT may need to be supplemented by reflective rationality, which strives to integrate all aspects of medical practice - medical, humanistic and socio-economic - within a coherent reasoning system. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.
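The decision-theoretic core referred to here is the classical treatment threshold: act when the probability of disease makes the expected benefit exceed the expected harm, i.e.

```latex
P_t \;=\; \frac{H}{B + H}
```

where B is the expected net benefit of treating a diseased patient and H the expected net harm of treating a non-diseased one; treatment is the rational choice when the probability of disease exceeds P_t.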
Fractal Approach to Erosion Threshold of Bentonites
NASA Astrophysics Data System (ADS)
Xu, Y. F.; Li, X. Y.
Bentonite has been considered a candidate buffer material for the disposal of high-level radioactive waste (HLW) because of its low permeability, high sorption capacity, self-sealing characteristics and durability in a natural environment. Bentonite erosion caused by groundwater flow may take place at the interface between the compacted bentonite and the fractured granite. Surface erosion of bentonite flocs is typically represented by an erosion threshold. Predicting the erosion threshold of bentonite flocs requires taking cohesion into account, which results from interactions between clay particles. Beyond the usual dependence on grain size, a significant correlation between erosion threshold and porosity measurements is confirmed for bentonite flocs. A fractal model for the erosion threshold of bentonite flocs is proposed, in which cohesive forces, i.e. the long-range van der Waals interactions between clay particles, are taken as the source of the erosion threshold. The model is verified by comparison with experiments published in the literature. The results show that the proposed model for the erosion threshold is in good agreement with the experimental data.
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods for determining threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested, to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
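The two interpretations of the basic GDD equation mentioned above can be sketched as follows (function names are illustrative):

```python
import numpy as np

def gdd_average_first(tmax, tmin, base):
    """Average the daily temperature extremes first, then truncate the
    daily mean at the base (lower threshold) temperature."""
    daily = (np.asarray(tmax) + np.asarray(tmin)) / 2.0 - base
    return float(np.sum(np.maximum(daily, 0.0)))

def gdd_truncate_first(tmax, tmin, base):
    """Truncate each daily extreme at the base temperature first, then
    average; this yields a (slightly) larger accumulation on cold days."""
    tmax = np.maximum(np.asarray(tmax), base)
    tmin = np.maximum(np.asarray(tmin), base)
    return float(np.sum((tmax + tmin) / 2.0 - base))
```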
Boisson, Sophie; Willis, Rebecca; Bakhtiari, Ana; al-Khatib, Tawfik; Amer, Khaled; Batcho, Wilfrid; Courtright, Paul; Dejene, Michael; Goepogui, Andre; Kalua, Khumbo; Kebede, Biruck; Macleod, Colin K.; Madeleine, Kouakou IIunga Marie; Mbofana, Mariamo Saide Abdala; Mpyet, Caleb; Ndjemba, Jean; Olobio, Nicholas; Pavluck, Alexandre L.; Sokana, Oliver; Southisombath, Khamphoua; Taleo, Fasihah
2018-01-01
Background Facial cleanliness and sanitation are postulated to reduce trachoma transmission, but there are no previous data on community-level herd protection thresholds. We characterize associations between active trachoma, access to improved sanitation facilities, and access to improved water sources for the purpose of face washing, with the aim of estimating community-level or herd protection thresholds. Methods and findings We used cluster-sampled Global Trachoma Mapping Project data on 884,850 children aged 1–9 years from 354,990 households in 13 countries. We employed multivariable mixed-effects modified Poisson regression models to assess the relationships between water and sanitation coverage and trachomatous inflammation—follicular (TF). We observed lower TF prevalence among those with household-level access to improved sanitation (prevalence ratio, PR = 0.87; 95% CI: 0.83–0.91), and household-level access to an improved washing water source in the residence/yard (PR = 0.81; 95% CI: 0.75–0.88). Controlling for household-level water and latrine access, we found evidence of community-level protection against TF for children living in communities with high sanitation coverage (PR for 80–90% coverage = 0.87; 95% CI: 0.73–1.02; PR for 90–100% coverage = 0.76; 95% CI: 0.67–0.85). Community sanitation coverage levels greater than 80% were associated with herd protection against TF (PR = 0.77; 95% CI: 0.62–0.97)—that is, lower TF in individuals whose households lacked individual sanitation but who lived in communities with high sanitation coverage. For community-level water coverage, there was no apparent threshold, although we observed lower TF among several of the higher deciles of community-level water coverage. Conclusions Our study provides insights into the community water and sanitation coverage levels that might be required to best control trachoma. Our results suggest access to adequate water and sanitation can be important components in working towards the 2020 target of eliminating trachoma as a public health problem. PMID:29357365
Segmentation of singularity maps in the context of soil porosity
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.
2016-04-01
Geochemical exploration has found increasing interest in, and benefit from, using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name just two examples. These methods are based on singularity maps of a measure that at each point define areas with self-similar properties, which appear as power-law relationships in concentration-area plots (the C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale Computed Tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method makes it possible to quantify the local scaling property of the grayscale value map in the space domain and to determine the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies when applied to maps. In this work we pay special attention to how the singularity thresholds in the C-A plot are selected to segment the image. We compare two methods: (1) the cross point of linear regressions and (2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES: Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011). Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J. J., Tarquis, A. M., Saa-Requejo, A. and Grau, J. B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral presentation at the PedoFract VIII Congress (June, La Coruña, Spain).
Investment threshold and management reflection for industrial system cleaning: a case for China.
Fang, Yiping
2012-03-01
The recognition that industrial activity plays an essential role in a sustainable society is now widespread. Understanding the causal relationship between industrial pollution abatement expenditure and the cleaning level of the industrial system in China is of considerable importance, especially in the context of extremely rapid industrial growth and serious pressure to abate industrial pollutants. We use a composite index assessment method and regression analysis in this paper. We establish a mathematical model relating the composite industrial cleaner index to investment intensity for industrial pollutant abatement, and analyze the effects of industrial pollutant treatment and discharge indicators on the composite industrial cleaner index in China. Results show that: (1) there is a significant nonlinear relationship between the composite industrial cleaner index and investment intensity for industrial pollutant abatement; (2) from a single-indicator perspective, the effect of investment intensity on pollutant treatment indicators is positive, whereas its effect on pollutant discharge indicators is negative; (3) from a decomposed cleaner-index perspective, pollutant discharge level (process control) has a greater effect on the composite industrial cleaner index than pollutant treatment capacity (end-of-pipe); and (4) there is a threshold in the relationship between investment intensity and the composite industrial cleaner index, which is a crucial reference scale for industrial environmental management in the period studied.
Relationship between consonant recognition in noise and hearing threshold.
Yoon, Yang-soo; Allen, Jont B; Gooler, David M
2012-04-01
Although the poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to the audiometric hearing threshold, HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is considered over a range of signal-to-noise ratios (SNRs). Confusion matrices (CMs) from 25 HI ears were generated in response to 16 consonant-vowel syllables presented at 6 different SNRs. Individual differences scaling (INDSCAL) was applied to both feature-based matrices and CMs in order to evaluate the relationship between HT(f) and consonant recognition among HI listeners. The results showed no predictive relationship between the percent error scores (Pe) and HT(f) across SNRs. Multiple regression models showed that HT(f) accounted for 39% of the total variance of the slopes of the Pe. Feature-based INDSCAL analysis showed consistent grouping of listeners across SNRs, but not in terms of HT(f). Nor was a systematic relationship between the measures defined by CM-based INDSCAL analysis across SNRs. In short, HT(f) did not account for the majority of the variance (39%) in consonant recognition in noise when the complete body of the CM was considered.
Do Hearing Protectors Protect Hearing?
Groenewold, Matthew R.; Masterson, Elizabeth A.; Themann, Christa L.; Davis, Rickie R.
2015-01-01
Background We examined the association between self-reported hearing protection use at work and incidence of hearing shifts over a 5-year period. Methods Audiometric data from 19,911 workers were analyzed. Two hearing shift measures—OSHA standard threshold shift (OSTS) and high-frequency threshold shift (HFTS)—were used to identify incident shifts in hearing between workers’ 2005 and 2009 audiograms. Adjusted odds ratios were generated using multivariable logistic regression with multi-level modeling. Results The odds ratio for hearing shift for workers who reported never versus always wearing hearing protection was nonsignificant for OSTS (OR 1.23, 95% CI 0.92–1.64) and marginally significant for HFTS (OR 1.26, 95% CI 1.00–1.59). A significant linear trend towards increased risk of HFTS with decreased use of hearing protection was observed (P = 0.02). Conclusion The study raises concern about the effectiveness of hearing protection as a substitute for noise control to prevent noise-induced hearing loss in the workplace. Am. J. Ind. Med. 57:1001–1010, 2014. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. PMID:24700499
Hydrologic Drought in the Colorado River Basin
NASA Astrophysics Data System (ADS)
Timilsena, J.; Piechota, T.; Hidalgo, H.; Tootle, G.
2004-12-01
This paper focuses on drought scenarios of the Upper Colorado River Basin (UCRB) over the last five hundred years and evaluates the magnitude, severity and frequency of the current five-year drought. Hydrologic drought characteristics were developed using historical streamflow data and tree-ring chronologies in the UCRB. Historical data include the Colorado River at Cisco and Lees Ferry, the Green River, the Palmer Hydrologic Drought Index (PHDI), and the Z index. Tree-ring chronologies were used from 17 spatially representative sites in the UCRB from NOAA's International Tree Ring Data Bank. A PCA-based regression procedure was used to reconstruct drought indices and streamflow in the UCRB. Hydrologic drought is characterized by its duration (the number of consecutive years during which flow remains below the threshold), deficit magnitude (the cumulative deficit below the threshold over those years), severity (magnitude divided by duration) and frequency. Results indicate that the current drought ranks anywhere from the 5th to 20th worst drought during the period 1493-2004, depending on the drought indicator and magnitude. From a short-term perspective (using annual data), the current drought is more severe than if longer-term averages (i.e., 5- or 10-year averages) are used to define the drought.
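The duration/magnitude/severity definitions are standard run theory and translate directly into code; a sketch with illustrative names:

```python
import numpy as np

def drought_runs(flow, threshold):
    """Run-theory drought characteristics: duration, deficit magnitude and
    severity of each spell during which flow stays below the threshold."""
    below = flow < threshold
    events, start = [], None
    for t, b in enumerate(np.append(below, False)):   # sentinel closes last run
        if b and start is None:
            start = t
        elif not b and start is not None:
            deficit = float(np.sum(threshold - flow[start:t]))
            duration = t - start
            events.append({"duration": duration,
                           "magnitude": deficit,
                           "severity": deficit / duration})
            start = None
    return events
```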
Threshold model of cascades in empirical temporal networks
NASA Astrophysics Data System (ADS)
Karimi, Fariba; Holme, Petter
2013-08-01
Threshold models try to explain the consequences of social influence, like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only the contacts but also the times of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. That is, individuals are affected only by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
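A compact sketch of the proposed model, in which a node adopts when the fraction of adopters among its contacts inside a sliding time window reaches a threshold φ; parameter values and the seeding rule are illustrative:

```python
import numpy as np
from collections import defaultdict, deque

def temporal_threshold_cascade(contacts, n, phi=0.3, window=24, seed_frac=0.01):
    """Watts-style threshold model on a temporal network.

    contacts: iterable of (t, i, j) contact events, sorted by time t
    n:        number of nodes
    """
    rng = np.random.default_rng(0)
    state = np.zeros(n, dtype=bool)
    state[rng.choice(n, max(1, int(seed_frac * n)), replace=False)] = True
    recent = defaultdict(deque)                   # node -> deque of (t, neighbour)

    for t, i, j in contacts:
        for a, b in ((i, j), (j, i)):
            q = recent[a]
            q.append((t, b))
            while q and q[0][0] < t - window:     # drop contacts outside window
                q.popleft()
            if not state[a] and q:
                frac = np.mean([state[nb] for _, nb in q])
                if frac >= phi:                   # fractional-threshold rule
                    state[a] = True
    return state.mean()                           # final adoption fraction
```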
NASA Astrophysics Data System (ADS)
Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.
2018-01-01
Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causing factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied: a constant and a dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and a future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and threshold choice. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a constant glacier area results in a drastic decrease in the number of droughts. The drought characteristics of the future and historical periods are more similar when the transient threshold is used, for both glacier area conceptualisations. With the transient threshold, factors causing future droughts can be analysed. This study revealed the different effects of methodological choices on future streamflow drought projections, and it highlights how the options can be used to analyse different aspects of future droughts: the transient threshold for analysing future drought processes, the historical threshold to assess changes between periods, the constant glacier area to analyse the effect of short-term climate variability on droughts, and the dynamic glacier area to model more realistic future discharges under climate change.
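The variable threshold level method is typically implemented as a day-of-year-varying flow quantile; a sketch assuming a 20th-percentile threshold smoothed over a roughly 30-day centred window (choices are illustrative, not necessarily those of the study):

```python
import numpy as np
import pandas as pd

def variable_threshold(q, quantile=0.2, smooth_days=30):
    """Day-of-year-varying flow threshold for streamflow drought detection.

    q: pandas Series of daily discharge indexed by a DatetimeIndex
    """
    doy = q.index.dayofyear.values
    thresholds = np.empty(367)
    for d in range(1, 367):
        # circular day-of-year distance, so the window wraps around new year
        dist = np.minimum(np.abs(doy - d), 366 - np.abs(doy - d))
        thresholds[d] = np.quantile(q.values[dist <= smooth_days // 2], quantile)
    return pd.Series(thresholds[doy], index=q.index)

# Droughts are then spells with q below its seasonally varying threshold:
# deficit = (variable_threshold(q) - q).clip(lower=0)
```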
Berlin, Conny; Blanch, Carles; Lewis, David J; Maladorno, Dionigi D; Michel, Christiane; Petrin, Michael; Sarp, Severine; Close, Philippe
2012-06-01
The detection of safety signals with medicines is an essential activity to protect public health. Despite widespread acceptance, it is unclear whether recently applied statistical algorithms provide enhanced performance characteristics when compared with traditional systems. Novartis has adopted a novel system for automated signal detection on the basis of disproportionality methods within a safety data mining application (Empirica™ Signal System [ESS]). ESS uses two algorithms for routine analyses: empirical Bayes Multi-item Gamma Poisson Shrinker and logistic regression (LR). A model was developed comprising 14 medicines, categorized as "new" or "established." A standard was prepared on the basis of safety findings selected from traditional sources. ESS results were compared with the standard to calculate the positive predictive value (PPV), specificity, and sensitivity. PPVs of the lower one-sided 5% and 0.05% confidence limits of the Bayes geometric mean (EB05) and of the LR odds ratio (LR0005) almost coincided for all the drug-event combinations studied. There was no obvious difference comparing the PPV of the leading Medical Dictionary for Regulatory Activities (MedDRA) terms to the PPV for all terms. The PPV of narrow MedDRA query searches was higher than that for broad searches. The widely used threshold value of EB05 = 2.0 or LR0005 = 2.0 together with more than three spontaneous reports of the drug-event combination produced balanced results for PPV, sensitivity, and specificity. Consequently, performance characteristics were best for leading terms with narrow MedDRA query searches irrespective of applying Multi-item Gamma Poisson Shrinker or LR at a threshold value of 2.0. This research formed the basis for the configuration of ESS for signal detection at Novartis. Copyright © 2011 John Wiley & Sons, Ltd.
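Empirica's EB05 and LR0005 statistics come from shrinkage and regression machinery not reproduced here; as a crude analogue, a lower one-sided 5% confidence limit of a plain reporting odds ratio from a 2×2 disproportionality table, combined with the ">3 reports" rule, looks like this:

```python
import numpy as np

def or_lower_limit(a, b, c, d, z=1.645):
    """Lower one-sided 5% confidence limit of the odds ratio from a 2x2
    drug-event table (a: reports with drug & event; b: drug, no event;
    c: event, no drug; d: neither), via the log-OR normal approximation."""
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return np.exp(log_or - z * se)

# Signal rule analogous to the one described: lower limit >= 2.0 and > 3 reports.
a, b, c, d = 8, 492, 40, 9460
signal = or_lower_limit(a, b, c, d) >= 2.0 and a > 3
print(or_lower_limit(a, b, c, d), signal)
```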
David, Michael C; Eley, Diann S; Schafer, Jennifer; Davies, Leo
2016-01-01
The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners' International Standard of Competence. Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: -37.40; 95% CI: -57.85 to -16.96) from 427 students were found to be statistically associated with increased IFOM CSE performance. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as being thresholds of significant risk for domestic and international students, respectively. Using cumulative GPA as a predictor of IFOM CSE performance and accommodating for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners' International Standard of Competence.
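Risk thresholds derived from ROC curves can be sketched with the common Youden-index rule (the data below are synthetic and the rule is illustrative, not the study's exact AROC-based procedure):

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(passed, gpa):
    """Pick the GPA cut-point maximising sensitivity + specificity - 1
    (Youden's J) from the ROC curve; `passed` is 1 for meeting the standard."""
    fpr, tpr, thresholds = roc_curve(passed, gpa)
    return thresholds[np.argmax(tpr - fpr)]

# Hypothetical data: pass probability rises with cumulative GPA.
rng = np.random.default_rng(4)
gpa = rng.uniform(4.0, 7.0, 400)
passed = (rng.random(400) < 1 / (1 + np.exp(-(gpa - 5.3) * 2.0))).astype(int)
print(youden_threshold(passed, gpa))
```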
Peterson, Lance R; Samia, Noelle I; Skinner, Andrew M; Chopra, Amit; Smith, Becky
2017-01-01
The quantitative relationship between antimicrobial agent consumption and the rise or fall of antibiotic resistance has rarely been studied. We began surveillance testing of all admissions for methicillin-resistant Staphylococcus aureus (MRSA) in August 2005, with subsequent contact isolation and decolonization using nasally applied mupirocin ointment for those colonized. In October 2012, we discontinued decolonization of medical (nonsurgical service) patients. We conducted a retrospective study from 2007 through 2014 of 445,680 patients; 35,235 were assessed because of mupirocin therapy and positive test results for MRSA. We collected data on those patients receiving 2% mupirocin ointment for decolonization to determine the defined daily doses (DDDs). A nonparametric regression technique was used to quantitate the effect of mupirocin consumption on drug resistance in MRSA. Using regression modeling, we found that, when consumption was consistently >25 DDD/1000 patient-days, there was a statistically significant increase in mupirocin resistance with a correlating positive rate of change. When consumption was ≤25 DDD/1000 patient-days, there was a statistically significant decrease in mupirocin resistance with a correlating negative rate of change. The scatter plot of fitted versus observed mupirocin resistance values showed an R2 value of 0.89, indicating a high correlation between mupirocin use and resistance. Use of the antimicrobial agent mupirocin for decolonization had a threshold of approximately 25 DDD/1000 patient-days that separated the rise and fall of resistance within the acute-care setting. This has implications for how widely mupirocin can be used for decolonization, as well as for setting consumption thresholds when prescribing antimicrobials as part of stewardship programs. © The Author 2017. Published by Oxford University Press on behalf of Infectious Diseases Society of America.
Gou, Faxiang; Liu, Xinfeng; He, Jian; Liu, Dongpeng; Cheng, Yao; Liu, Haixia; Yang, Xiaoting; Wei, Kongfu; Zheng, Yunhe; Jiang, Xiaojuan; Meng, Lei; Hu, Wenbiao
2018-01-08
To determine the linear and non-linear interacting relationships between weather factors and hand, foot and mouth disease (HFMD) in children in Gansu, China, and to inform an early warning signal for HFMD transmission based on weather variability. Weekly counts of HFMD cases in children aged less than 15 years and meteorological information from 2010 to 2014 in Jiuquan, Lanzhou and Tianshui, Gansu, China were collected. Generalized linear regression models (GLM) with Poisson link and classification and regression trees (CART) were employed to determine the combined and interactive relationships of weather factors and HFMD in both linear and non-linear ways. The GLM suggested increases in weekly HFMD of 5.9% [95% confidence interval (CI): 5.4%, 6.5%] in Tianshui, 2.8% [2.5%, 3.1%] in Lanzhou and 1.8% [1.4%, 2.2%] in Jiuquan in association with a 1 °C increase in average temperature, and a 1% increase in relative humidity was associated with increases in weekly HFMD of 2.47% [2.23%, 2.71%] in Lanzhou and 1.11% [0.72%, 1.51%] in Tianshui. CART revealed that average temperature and relative humidity were the two most important determinants; threshold values for average temperature decreased from 20 °C in Jiuquan to 16 °C in Tianshui, while threshold values for relative humidity increased from 38% in Jiuquan to 65% in Tianshui. Average temperature was the primary weather factor in all three areas, with greater sensitivity in southeastern Tianshui than in northwestern Jiuquan; relative humidity's effect on HFMD showed a non-linear interacting relationship with average temperature.
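The reported percent increases are the usual Poisson GLM interpretation, 100·(exp(β) − 1) per unit increase in the covariate; a sketch on synthetic weekly data (all values illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical weekly data: HFMD counts, average temperature, relative humidity.
rng = np.random.default_rng(5)
temp = rng.uniform(-5, 30, 260)
rh = rng.uniform(20, 90, 260)
counts = rng.poisson(np.exp(0.5 + 0.028 * temp + 0.011 * rh))

X = sm.add_constant(np.column_stack([temp, rh]))
res = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# exp(beta) - 1 is the percent change in weekly cases per unit covariate increase.
pct = 100 * (np.exp(res.params[1:]) - 1)
print(f"per 1 degC: {pct[0]:.1f}%, per 1% RH: {pct[1]:.1f}%")
```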
Puig, Josep; Blasco, Gerard; Daunis-i-Estadella, Pepus; van Eendendburg, Cecile; Carrillo-García, María; Aboud, Carlos; Hernández-Pérez, María; Serena, Joaquín; Biarnés, Carles; Nael, Kambiz; Liebeskind, David S.; Thomalla, Götz; Menon, Bijoy K.; Demchuk, Andrew; Wintermark, Max; Pedraza, Salvador
2017-01-01
Objective Blood-brain barrier (BBB) permeability has been proposed as a predictor of hemorrhagic transformation (HT) after tissue plasminogen activator (tPA) administration; however, the reliability of perfusion computed tomography (PCT) permeability imaging for predicting HT is uncertain. We aimed to determine the performance of high-permeability region size on PCT (HPrs-PCT) in predicting HT after intravenous tPA administration in patients with acute stroke. Methods We performed a multimodal CT protocol (non-contrast CT, PCT, CT angiography) to prospectively study patients with middle cerebral artery occlusion treated with tPA within 4.5 hours of symptom onset. HT was graded at 24 hours using the European-Australasian Acute Stroke Study II criteria. ROC curve analysis selected the optimal volume threshold, and multivariate logistic regression analysis identified predictors of HT. Results The study included 156 patients (50% male, median age 75.5 years). Thirty-seven (23.7%) developed HT [12 (7.7%) with parenchymal hematoma type 2 (PH-2)]. At admission, patients with HT had lower platelet counts, higher NIHSS scores, increased ischemic lesion volumes, larger HPrs-PCT, and poorer collateral status. The negative predictive value of HPrs-PCT at a threshold of 7 mL/100 g/min was 0.84 for HT and 0.93 for PH-2. Multiple regression analysis selected HPrs-PCT at 7 mL/100 g/min combined with platelet count and baseline NIHSS score as the best model for predicting HT (AUC 0.77). HPrs-PCT at 7 mL/100 g/min was the only independent predictor of PH-2 (OR 1, AUC 0.68, p = 0.045). Conclusions HPrs-PCT can help predict HT after tPA, and is particularly useful in identifying patients at low risk of developing HT. PMID:29182658
Matthews, Luke J.; DeWan, Peter; Rula, Elizabeth Y.
2013-01-01
Studies of social networks, mapped using self-reported contacts, have demonstrated the strong influence of social connections on the propensity for individuals to adopt or maintain healthy behaviors and on their likelihood to adopt health risks such as obesity. Social network analysis may prove useful for businesses and organizations that wish to improve the health of their populations by identifying key network positions. Health traits have been shown to correlate across friendship ties, but evaluating network effects in large coworker populations presents the challenge of obtaining sufficiently comprehensive network data. The purpose of this study was to evaluate methods for using online communication data to generate comprehensive network maps that reproduce the health-associated properties of an offline social network. In this study, we examined three techniques for inferring social relationships from email traffic data in an employee population using thresholds based on: (1) the absolute number of emails exchanged, (2) logistic regression probability of an offline relationship, and (3) the highest ranked email exchange partners. As a model of the offline social network in the same population, a network map was created using social ties reported in a survey instrument. The email networks were evaluated based on the proportion of survey ties captured, comparisons of common network metrics, and autocorrelation of body mass index (BMI) across social ties. Results demonstrated that logistic regression predicted the greatest proportion of offline social ties, thresholding on number of emails exchanged produced the best match to offline network metrics, and ranked email partners demonstrated the strongest autocorrelation of BMI. Since each method had unique strengths, researchers should choose a method based on the aspects of offline behavior of interest. Ranked email partners may be particularly useful for purposes related to health traits in a social network. PMID:23418436
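A minimal sketch of strategy (1) above, inferring ties by thresholding the absolute number of emails exchanged; the edge counts and cut-off are illustrative assumptions, not the study's employee data.

```python
# Build an inferred social network by keeping only pairs whose email volume
# exceeds a fixed threshold (strategy 1 of the three compared above).
import networkx as nx

email_counts = {("ann", "bob"): 42, ("ann", "cai"): 3, ("bob", "cai"): 17}
THRESHOLD = 10  # assumed minimum number of emails implying an offline tie

G = nx.Graph()
for (u, v), n in email_counts.items():
    if n >= THRESHOLD:
        G.add_edge(u, v, weight=n)

# Network metrics of the inferred graph can then be compared with the
# survey-based (offline) network, as in the evaluation described above.
print(list(G.edges(data=True)), nx.density(G))
```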
Waite, Ian R.
2014-01-01
As part of the USGS study of nutrient enrichment of streams in agricultural regions throughout the United States, about 30 sites within each of eight study areas were selected to capture a gradient of nutrient conditions. The objective was to develop watershed-disturbance predictive models for macroinvertebrate and algal metrics at the national scale and at three regional landscape scales to obtain a better understanding of important explanatory variables. Explanatory variables were generated from landscape, habitat, and chemistry data. Instream nutrient concentrations and variables assessing the amount of disturbance to the riparian zone (e.g., percent row crops or percent agriculture) were selected as the most important explanatory variables in almost all boosted regression tree models, regardless of landscape scale or assemblage. Frequently, TN and TP concentrations and riparian agricultural land-use variables showed a threshold-type response at relatively low values in the biotic metrics modeled. Some measure of habitat condition was also commonly selected in the final invertebrate models, though the variable(s) varied across regions. Results suggest that national models tended to account for more general landscape and climate differences, while regional models incorporated both broad landscape-scale and more specific local-scale variables.
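The boosted regression tree step can be sketched as follows; the predictor names and the synthetic low-value threshold response are assumptions for illustration, not USGS data.

```python
# Boosted regression tree (BRT) for a biotic metric with a threshold-type
# response to nutrient concentration; relative influence of predictors is
# read from the fitted feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))   # assumed: TN, TP, pct_row_crops, habitat
y = 50 - 30 * (X[:, 0] > 0.2) + rng.normal(0, 3, 200)  # drop above a low TN value

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3).fit(X, y)
print(brt.feature_importances_)  # TN should dominate; partial-dependence plots
                                 # would reveal the threshold-type response
```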
Thresholds and the rising pion inclusive cross section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, S.T.
In the context of the hypothesis of the Pomeron-f identity, it is shown that the rising pion inclusive cross section can be explained over a wide range of energies as a series of threshold effects. Low-mass thresholds are seen to be important. In order to understand the contributions of high-mass thresholds (flavoring), a simple two-channel multiperipheral model is examined. The analysis sheds light on the relation between thresholds and Mueller-Regge couplings. In particular, it is seen that inclusive- and total-cross-section threshold mechanisms may differ. A quantitative model based on this idea and utilizing previous total-cross-section fits is seen to agree well with experiment.
Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.
Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen
2013-01-01
We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
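A minimal sketch of the three-ODE structure described above (resting cells R, cycling cells C, IL-7 availability I), assuming illustrative rate forms and constants rather than the paper's population-averaged rates.

```python
# Toy version of the naive T cell model: thymic output decays with age, entry
# into cycle requires IL-7 signal above a proliferation threshold, and death
# increases when the signal falls below a survival threshold.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    R, C, I = y
    theta = 1.0 * np.exp(-0.05 * t)            # declining thymic output
    p = 0.1 * max(I - 0.5, 0.0)                # cycle entry above proliferation threshold
    d = 0.05 * max(0.2 - I, 0.0) + 0.01        # apoptosis grows below survival threshold
    dR = theta + 2 * 0.3 * C - (p + d) * R     # cycling cells return as two daughters
    dC = p * R - 0.3 * C
    dI = 0.5 - 0.2 * I - 0.05 * I * (R + C)    # IL-7 produced, decayed, consumed
    return [dR, dC, dI]

sol = solve_ivp(rhs, (0, 80), [1.0, 0.1, 1.0])
print(sol.y[:, -1])  # late-life quasi-equilibrium of (R, C, I)
```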
NASA Astrophysics Data System (ADS)
Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.
2009-10-01
In this paper we compare and contrast the crack growth rate of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio and stress-intensity range, but did not compensate for the fatigue threshold of the material. As additional test data on both plate and tube product became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data above 1 × 10⁻¹¹ m/s. However, the test data at the lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, it predicted no operating stress below which the material would be free of fatigue crack growth. Because it over-predicts the growth rate below 1 × 10⁻¹¹ m/s, under a combination of low stress, small crack size and long rise-time, the non-threshold model in general leads to an under-prediction of the total available life of the components.
Brosowski, Tim; Hayer, Tobias; Meyer, Gerhard; Rumpf, Hans-Jürgen; John, Ulrich; Bischof, Anja; Meyer, Christian
2015-09-01
Consumption measures in gambling research may help to establish thresholds of low-risk gambling as one part of evidence-based responsible gambling strategies. The aim of this study is to replicate existing Canadian thresholds of probable low-risk gambling (Currie et al., 2006) in a representative dataset of German gambling behavior (Pathological Gambling and Epidemiology [PAGE]; N = 15,023). Receiver operating characteristic curves applied in a training dataset (60%) extracted robust thresholds of low-risk gambling across four nonexclusive definitions of gambling problems (1+ to 4+ Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [DSM-5] Composite International Diagnostic Interview [CIDI] symptoms), different indicators of gambling involvement (across all game types; form-specific) and different timeframes (lifetime; last year). Logistic regressions applied in a test dataset (40%) to cross-validate the heuristics of probable low-risk gambling incorporated confounding covariates (age, gender, education, migration, and unemployment) and confirmed the strong concurrent validity of the thresholds. Moreover, it was possible to establish robust form-specific thresholds of low-risk gambling (only for gaming machines and poker). Possible implications for early detection of problem gamblers in offline or online environments are discussed. Results substantiate international knowledge about problem gambling prevention and contribute to a German discussion about empirically based guidelines of low-risk gambling. (c) 2015 APA, all rights reserved.
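A hedged sketch of the training-set step: with illustrative data, a candidate low-risk threshold can be read off the ROC curve (here via the Youden index; the published criterion may differ).

```python
# Extract a consumption cut-off that best separates gamblers with and without
# problems, using the ROC curve of a gambling-involvement indicator.
import numpy as np
from sklearn.metrics import roc_curve

weekly_spend = np.array([2, 5, 1, 40, 8, 60, 3, 25, 90, 4])  # involvement measure
problem = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 0])           # 1+ DSM-5 symptoms

fpr, tpr, thresholds = roc_curve(problem, weekly_spend)
best = thresholds[np.argmax(tpr - fpr)]   # Youden's J as the optimality rule
print("candidate low-risk threshold:", best)
```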
Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.
2016-01-01
Objective To determine the clinical utility of narrow-band chirp-evoked 40-Hz sinusoidal auditory steady-state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design Tone bursts and narrow-band chirps were used to evoke auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds, respectively, with the Kalman-weighted filtering technique, and these were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA with post-hoc t-tests and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample Thirty young adults aged 18–25 with normal hearing participated in this study. Results When 4000 equivalent response averages were used, mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. Mean tb-ABR thresholds at 2000 and 4000 Hz were lower by 11–15 dB when twice as many equivalent response averages were used, while mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion Narrow-band chirp-evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point for the posterior probability that is used to separate potential crash warnings from normal traffic conditions, once a crash risk evaluation model has output the probability of a crash occurring given a specific traffic condition. There is, however, a dearth of research focused on how to effectively determine an optimal threshold, and the few studies that touch on it choose the threshold subjectively when discussing the predictive performance of their models. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and to evaluate crash risk. Cross-entropy, between-class variance and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross entropy between the original dataset, with its continuous probabilities of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
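A hedged sketch of the minimum cross-entropy idea: choose the cut-off whose binarized warnings stay closest, in cross-entropy terms, to the continuous crash probabilities. The paper's exact formulation may differ.

```python
# Scan candidate thresholds and keep the one minimizing the cross entropy
# between the continuous crash probabilities and their binarized version.
import numpy as np

p = np.random.default_rng(1).uniform(0.01, 0.99, 1000)  # model crash probabilities

def cross_entropy(p, b):
    return -np.mean(b * np.log(p) + (1 - b) * np.log(1 - p))

candidates = np.linspace(0.05, 0.95, 91)
ce = [cross_entropy(p, (p >= t).astype(float)) for t in candidates]
print("optimal threshold:", candidates[int(np.argmin(ce))])
```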
How much crosstalk can be allowed in a stereoscopic system at various grey levels?
NASA Astrophysics Data System (ADS)
Shestak, Sergey; Kim, Daesik; Kim, Yongie
2012-03-01
We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human visual sensitivity. Instead of the linear model of just-noticeable difference (JND) known as Weber's law, we applied the nonlinear Barten model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.
Riis, R G C; Gudbergsen, H; Simonsen, O; Henriksen, M; Al-Mashkur, N; Eld, M; Petersen, K K; Kubassova, O; Bay Jensen, A C; Damm, J; Bliddal, H; Arendt-Nielsen, L; Boesen, M
2017-02-01
To investigate the association between magnetic resonance imaging (MRI), macroscopic and histological assessments of synovitis in end-stage knee osteoarthritis (KOA). Synovitis of end-stage osteoarthritic knees was assessed using non-contrast-enhanced MRI, contrast-enhanced MRI (CE-MRI) and dynamic contrast-enhanced MRI (DCE-MRI) prior to total knee replacement (TKR) and correlated with microscopic and macroscopic assessments of synovitis obtained intraoperatively. Multiple bivariate correlations were used with a pre-specified threshold of 0.70 for significance. Multiple regression analyses with different subsets of MRI variables as explanatory variables and the histology score as the outcome variable were also performed, with the intention of finding the MRI variables that best explain the variance in histological synovitis (i.e., the highest R2). A stepped approach was taken, starting with basic characteristics and non-CE MRI variables (model 1), after which CE-MRI variables were added (model 2), with the final model also including DCE-MRI variables (model 3). 39 patients (56.4% women, mean age 68 years, Kellgren-Lawrence (KL) grade 4) had complete MRI and histological data. Only the DCE-MRI variable MExNvoxel (a surrogate of the volume and degree of synovitis) and the macroscopic score showed correlations with histological inflammation above the pre-specified threshold for acceptance. The maximum R2 obtained in model 1 was 0.39. In model 2, where the CE-MRI variables were added, the highest R2 was 0.52. In model 3, a four-variable model consisting of gender, one CE-MRI and two DCE-MRI variables yielded an R2 of 0.71. DCE-MRI is correlated with histological synovitis in end-stage KOA, and the combination of CE- and DCE-MRI may be a useful, non-invasive tool for characterising synovitis in KOA. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Sliding mode control of outbreaks of emerging infectious diseases.
Xiao, Yanni; Xu, Xiaxia; Tang, Sanyi
2012-10-01
This paper proposes and analyzes a mathematical model of an infectious disease system with a piecewise control function implementing a threshold policy for disease management. The proposed models extend the classic models by including a piecewise incidence rate to represent control or precautionary measures being triggered once the number of infected individuals exceeds a threshold level. The long-term behaviour of the proposed non-smooth system under this strategy includes the so-called sliding motion, a very rapid switching between application and interruption of the control action. Model solutions ultimately approach either one of the two endemic states of the two structures or the sliding equilibrium on the switching surface, depending on the threshold level. Our findings suggest that proper combinations of threshold densities and control intensities based on a threshold policy can either preclude outbreaks or lead the number of infected to a previously chosen level.
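A minimal sketch of the piecewise (threshold policy) structure described above: an SIR-type system whose transmission term switches once the infected count exceeds a trigger level. Parameter values are illustrative assumptions.

```python
# Threshold policy: the incidence rate is reduced from beta to beta_c whenever
# the infected population I exceeds the threshold Ic.
from scipy.integrate import solve_ivp

beta, beta_c, gamma, mu, Ic, N = 0.5, 0.2, 0.2, 0.01, 50.0, 1000.0

def rhs(t, y):
    S, I = y
    b = beta_c if I > Ic else beta        # control triggered above the threshold
    dS = mu * (N - S) - b * S * I / N
    dI = b * S * I / N - (gamma + mu) * I
    return [dS, dI]

sol = solve_ivp(rhs, (0, 365), [990.0, 10.0], max_step=0.5)
print(sol.y[1, -1])  # settles at an endemic state or chatters near I = Ic,
                     # i.e. the sliding regime described above
```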
Erosive Burning Study Utilizing Ultrasonic Measurement Techniques
NASA Technical Reports Server (NTRS)
Furfaro, James A.
2003-01-01
A 6-segment subscale motor was developed to generate a range of internal environments in which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants, RSRM, ETM-03, and Castor® IVA, which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl-terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed burn rate. Above the observed threshold, each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.
Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J
2005-01-01
This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency with which TEOAEs and DPOAEs can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operating characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.
Performance or marketing benefits? The case of LEED certification.
Matisoff, Daniel C; Noonan, Douglas S; Mazzolini, Anna M
2014-01-01
Green building adoption is driven by both performance-based and marketing-based benefits. Performance-based benefits are those that improve performance or lower operating costs of the building or of building users. Marketing-based benefits stem from the consumer response to green certification. This study illustrates the relative importance of the marketing-based benefits that accrue to Leadership in Energy and Environmental Design (LEED) buildings through green signaling mechanisms, specifically those related to the certification itself. All participants in the LEED certification scheme seek marketing benefits, of course, but even among LEED participants the interest in green signaling is pronounced. The green signaling mechanism that operates at the certification thresholds shifts building patterns from just below to just above each threshold level, motivating builders to cluster buildings just above each threshold. Results are consistent across subsamples, though nonprofit organizations appear to build greener buildings and engage in more green signaling than for-profit entities. Using nonparametric regression discontinuity, signaling across different building types is observed. Marketing benefits due to LEED certification drive organizations to build "greener" buildings by upgrading buildings at the thresholds to reach certification levels.
Evidence Accumulator or Decision Threshold – Which Cortical Mechanism are We Observing?
Simen, Patrick
2012-01-01
Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of this data. PMID:22737136
Prevalence of subclinical ketosis and relationships with postpartum diseases in European dairy cows.
Suthar, V S; Canelas-Raposo, J; Deniz, A; Heuwieser, W
2013-05-01
Subclinical ketosis (SCK) is defined as concentrations of β-hydroxybutyrate (BHBA) ≥ 1.2 to 1.4 mmol/L, and it is considered a gateway condition for other metabolic and infectious disorders such as metritis, mastitis, clinical ketosis, and displaced abomasum. Reported prevalence rates range from 6.9 to 43% in the first 2 mo of lactation. However, there is a dearth of information on prevalence rates considering the diversity of European dairy farms. The objectives of this study were to (1) determine prevalence of SCK, (2) identify thresholds of BHBA, and (3) study their relationships with postpartum metritis, clinical ketosis, displaced abomasum, lameness, and mastitis in European dairy farms. From May to October 2011, a convenience sample of 528 dairy herds from Croatia, Germany, Hungary, Italy, Poland, Portugal, Serbia, Slovenia, Spain, and Turkey was studied. β-Hydroxybutyrate levels were measured in 5,884 cows with a handheld meter within 2 to 15 d in milk (DIM). On average, 11 cows were enrolled per farm and relevant information (e.g., DIM, postpartum diseases, herd size) was recorded. Using receiver operator characteristic curve analyses, blood BHBA thresholds were determined for the occurrence of metritis, mastitis, clinical ketosis, displaced abomasum, and lameness. Multivariate binary logistic regression models were built for each disease, considering cow as the experimental unit and herd as a random effect. Overall prevalence of SCK (i.e., blood BHBA ≥ 1.2 mmol/L) within 10 countries was 21.8%, ranging from 11.2 to 36.6%. Cows with SCK had 1.5, 9.5, and 5.0 times greater odds of developing metritis, clinical ketosis, and displaced abomasum, respectively. Multivariate binary logistic regression models demonstrated that cows with blood BHBA levels of ≥ 1.4, ≥ 1.1 and ≥ 1.7 mmol/L during 2 to 15 DIM had 1.7, 10.5, and 6.9 times greater odds of developing metritis, clinical ketosis, and displaced abomasum, respectively, compared with cows with lower BHBA blood levels. Interestingly, a postpartum blood BHBA threshold ≥ 1.1 mmol/L increased the odds for lameness in dairy cows 1.8 (95% confidence interval: 1.3 to 2.5) times. Overall, prevalence of SCK was high between 2 and 15 DIM, and SCK increased the odds of metritis, clinical ketosis, lameness, and displaced abomasum in European dairy herds. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Nattee, Cholwich; Khamsemanan, Nirattaya; Lawtrakul, Luckhana; Toochinda, Pisanu; Hannongbua, Supa
2017-01-01
Malaria is still one of the most serious diseases in tropical regions. This is due in part to the high resistance of the parasites, Plasmodium, the cause of the disease, to available drugs. New potent compounds with high clinical utility are urgently needed. In this work, we created a novel model using a regression tree to study structure-activity relationships and predict the inhibition constant, Ki, of three different antimalarial analogues (Trimethoprim, Pyrimethamine, and Cycloguanil) based on their molecular descriptors. To the best of our knowledge, this work is the first attempt to study the structure-activity relationships of all three analogues combined. The most relevant descriptors and appropriate parameters of the regression tree are harvested using extremely randomized trees. These descriptors are water-accessible surface area, log of the aqueous solubility, total hydrophobic van der Waals surface area, and molecular refractivity. Out of all possible combinations of these selected parameters and descriptors, the tree with the strongest coefficient of determination is selected as our prediction model. Predicted Ki values from the proposed model show a strong coefficient of determination, R2 = 0.996, with experimental Ki values. From the structure of the regression tree, compounds with a high accessible surface area of all hydrophobic atoms (ASA_H) and low aqueous solubility (Log S) generally possess low Ki values. Our prediction model can also be utilized as a screening test for new antimalarial drug compounds, which may reduce the time and expense of new drug development. New compounds with high predicted Ki should be excluded from further drug development. It is also our inference that a threshold of ASA_H greater than 575.80 and Log S less than or equal to -4.36 is a sufficient condition for a new compound to possess a low Ki. Copyright © 2016 Elsevier Inc. All rights reserved.
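A hedged sketch of the tree-based workflow above: extremely randomized trees gauge descriptor relevance, and a single regression tree predicts (log) Ki. Data are synthetic placeholders, not the measured inhibition constants.

```python
# Descriptor relevance via extremely randomized trees, then a regression tree
# over the four selected descriptors (ASA_H, Log S, hydrophobic vdW area,
# molecular refractivity -- order assumed for illustration).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(120, 4))
log_ki = 2 - 3 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 120)

print(ExtraTreesRegressor(n_estimators=300, random_state=0)
      .fit(X, log_ki).feature_importances_)      # descriptor selection step

tree = DecisionTreeRegressor(max_depth=4).fit(X, log_ki)
print(tree.score(X, log_ki))                     # coefficient of determination
```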
Jering, Monika Zdenka; Marolen, Khensani N; Shotwell, Matthew S; Denton, Jason N; Sandberg, Warren S; Ehrenfeld, Jesse Menachem
2015-11-01
The surgical Apgar score predicts major 30-day postoperative complications using data assessed at the end of surgery. We hypothesized that evaluating the surgical Apgar score continuously during surgery may identify patients at high risk for postoperative complications. We retrospectively identified general, vascular, and general oncology patients at Vanderbilt University Medical Center. Logistic regression methods were used to construct a series of predictive models in order to continuously estimate the risk of major postoperative complications, and to alert care providers during surgery should the risk exceed a given threshold. Area under the receiver operating characteristic curve (AUROC) was used to evaluate the discriminative ability of a model utilizing a continuously measured surgical Apgar score relative to models that use only preoperative clinical factors or continuously monitored individual constituents of the surgical Apgar score (i.e. heart rate, blood pressure, and blood loss). AUROC estimates were validated internally using a bootstrap method. 4,728 patients were included. Combining the ASA PS classification with continuously measured surgical Apgar score demonstrated improved discriminative ability (AUROC 0.80) in the pooled cohort compared to ASA (0.73) and the surgical Apgar score alone (0.74). To optimize the tradeoff between inadequate and excessive alerting with future real-time notifications, we recommend a threshold probability of 0.24. Continuous assessment of the surgical Apgar score is predictive for major postoperative complications. In the future, real-time notifications might allow for detection and mitigation of changes in a patient's accumulating risk of complications during a surgical procedure.
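A hedged sketch of the continuous-alerting idea: a logistic model over ASA class and the running surgical Apgar score fires a notification when the predicted risk crosses the recommended 0.24 threshold. Feature construction and data are assumptions, not the Vanderbilt cohort.

```python
# Intraoperative alerting on a continuously updated complication risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
asa = rng.integers(1, 5, 2000)                  # ASA PS class
apgar = rng.integers(0, 11, 2000)               # surgical Apgar score so far
X = np.column_stack([asa, apgar])
logit = asa - 0.5 * apgar                       # assumed latent risk structure
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-logit))).astype(int)

clf = LogisticRegression().fit(X, y)

def should_alert(asa_class, apgar_score, threshold=0.24):
    """True when the running risk estimate exceeds the alert threshold."""
    return clf.predict_proba([[asa_class, apgar_score]])[0, 1] > threshold

print(should_alert(4, 3), should_alert(1, 9))
```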
Wang, Zaimin; Hoy, Wendy E; Wang, Zhiqiang
2013-08-16
Albuminuria marks renal disease and cardiovascular risk. It was estimated to contribute 75% of the risk of all-cause natural death in one Aboriginal group. The urine albumin/creatinine ratio (ACR) is commonly used as an index of albuminuria. This study examines the associations between demographic factors, anthropometric indices, blood pressure, lipid-protein measurements and other biomarkers and albuminuria in a cross-sectional study of a high-risk Australian Aboriginal population. Models were evaluated for albuminuria at or above the microalbuminuria threshold, and at or above the "overt albuminuria" threshold, with the potential to distinguish the associations they have in common from those that differ. This was a cross-sectional study of 598 adults aged 18-76 years. All participants were grouped into quartiles by age. Logistic regression models were used to explore the correlates of ACR categories. The significant correlates were systolic blood pressure (SBP), C-reactive protein (CRP), uric acid, diabetes, gamma-glutamyl transferase (GGT) (marginally significant, p = 0.054) and serum albumin (negative association) for ACR 17+ (mg/g) for men and 25+ for women. Independent correlates were SBP, uric acid, diabetes, total cholesterol, alanine amino transferase (ALT), Cystatin C and serum albumin (negative association) for overt albuminuria; and SBP, CRP and serum albumin only for microalbuminuria. This is the most detailed modelling of pathologic albuminuria in this setting to date. The somewhat variable association with risk factors suggests that microalbuminuria and overt albuminuria might reflect different as well as shared phenomena.
A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
2012-01-01
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
Vandenberg, Brian; Sharma, Anurag
2016-07-01
To compare estimated effects of two policy alternatives, (i) a minimum unit price (MUP) for alcohol and (ii) specific (per-unit) taxation, upon current product prices, per capita spending (A$), and per capita consumption by income quintile, consumption quintile and product type. Estimation of baseline spending and consumption, and modelling policy-to-price and price-to-consumption effects of policy changes using scanner data from a panel of demographically representative Australian households that includes product-level details of their off-trade alcohol spending (n = 885; total observations = 12,505). Robustness checks include alternative price elasticities, tax rates, minimum price thresholds and tax pass-through rates. Current alcohol taxes and alternative taxation and pricing policies are not highly regressive. Any regressive effects are small and concentrated among heavy consumers. The lowest-income consumers currently spend a larger proportion of income (2.3%) on alcohol taxes than the highest-income consumers (0.3%), but the mean amount is small in magnitude [A$5.50 per week (95% CI: 5.18-5.88)]. Both a MUP and specific taxation will have some regressive effects, but the effects are limited, as they are greatest for the heaviest consumers, irrespective of income. Among the policy alternatives, a MUP is more effective in reducing consumption than specific taxation, especially for consumers in the lowest-income quintile: an estimated mean per capita reduction of 11.9 standard drinks per week (95% CI: 11.3-12.6). Policies that increase the cost of the cheapest alcohol can be effective in reducing alcohol consumption, without having highly regressive effects. © The Author 2015. Medical Council on Alcohol and Oxford University Press. All rights reserved.
Multivariate Analyses of Balance Test Performance, Vestibular Thresholds, and Age
Karmali, Faisal; Bermúdez Rey, María Carolina; Clark, Torin K.; Wang, Wei; Merfeld, Daniel M.
2017-01-01
We previously published vestibular perceptual thresholds and performance in the Modified Romberg Test of Standing Balance in 105 healthy humans ranging from ages 18 to 80 (1). Self-motion thresholds in the dark included roll tilt about an earth-horizontal axis at 0.2 and 1 Hz, yaw rotation about an earth-vertical axis at 1 Hz, y-translation (interaural/lateral) at 1 Hz, and z-translation (vertical) at 1 Hz. In this study, we focus on multiple variable analyses not reported in the earlier study. Specifically, we investigate correlations (1) among the five thresholds measured and (2) between thresholds, age, and the chance of failing condition 4 of the balance test, which increases vestibular reliance by having subjects stand on foam with eyes closed. We found moderate correlations (0.30–0.51) between vestibular thresholds for different motions, both before and after using our published aging regression to remove age effects. We found that lower or higher thresholds across all threshold measures are an individual trait that accounts for about 60% of the variation in the population. This can be further distributed into two components, with about 20% of the variation explained by aging and 40% of the variation explained by a single principal component that includes similar contributions from all threshold measures. When only roll tilt 0.2 Hz thresholds and age were analyzed together, we found that the chance of failing condition 4 depends significantly on both (p = 0.006 and p = 0.013, respectively). An analysis incorporating more variables found that the chance of failing condition 4 depended significantly only on roll tilt 0.2 Hz thresholds (p = 0.046) and not on age (p = 0.10), sex, or any of the other four threshold measures, suggesting that some of the age effect might be captured by the fact that vestibular thresholds increase with age. For example, at 60 years of age, the chance of failing is roughly 5% for the lowest roll tilt thresholds in our population, but this increases to 80% for the highest roll tilt thresholds. These findings demonstrate the importance of roll tilt vestibular cues for balance, even in individuals reporting no vestibular symptoms and with no evidence of vestibular dysfunction. PMID:29167656
Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos
2018-04-15
Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine learning based risk maps using ensemble-based modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree and support vector regression (SVR) applied to multiple aquifer systems (e.g., unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized with no-risk, low and moderate vulnerability thresholds. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). The seven DRASTIC parameters served as the model inputs and the GCR values as the outputs of the individual machine learning models, which were then combined in the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. The r and d metrics for the ANN committee-based multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (ensemble) machine learning models over the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems with multi-model techniques, given the high accuracy of the ANN committee-based model. Copyright © 2017 Elsevier B.V. All rights reserved.
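A hedged sketch of the committee idea above: individual learners are fitted on the seven DRASTIC inputs, and a small neural network combines their outputs into the final risk estimate. The stand-in learners and synthetic data are assumptions; in practice, out-of-fold base-model predictions should be used to avoid leakage.

```python
# Committee-based ensemble: base regressors feed a meta ANN.
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(300, 7))                              # seven DRASTIC inputs
y = X @ rng.uniform(0.5, 1.5, 7) + rng.normal(0, 0.1, 300)  # GCR target

base = [SVR().fit(X, y), DecisionTreeRegressor(max_depth=5).fit(X, y)]
meta_X = np.column_stack([m.predict(X) for m in base])      # base-model outputs

committee = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(meta_X, y)
print(committee.score(meta_X, y))
```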
Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A
2010-10-11
We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers when the Brillouin pump recycling technique is employed. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
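For orientation, the classic Smith estimate P_th ≈ 21·A_eff/(g_B·L_eff), with L_eff = (1 − e^(−αL))/α, gives conventional (non-recycled) threshold powers of the right order; the fibre parameters below are generic single-mode assumptions, not the paper's values, and the recycling scheme modifies this estimate.

```python
# Back-of-envelope SBS threshold via the Smith criterion (no pump recycling).
import numpy as np

g_B = 5e-11              # Brillouin gain coefficient, m/W (assumed)
A_eff = 80e-12           # effective mode area, m^2 (assumed)
alpha = 0.2 / 4.343e3    # fibre loss, 1/m (0.2 dB/km)
L = 5e3                  # fibre length, m

L_eff = (1 - np.exp(-alpha * L)) / alpha
P_th = 21 * A_eff / (g_B * L_eff)
# A polarization factor between 1 and 2 scales this upward in practice.
print(f"SBS threshold ~ {1e3 * P_th:.1f} mW")
```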
Slopen, Natalie; Loucks, Eric B; Appleton, Allison A; Kawachi, Ichiro; Kubzansky, Laura D; Non, Amy L; Buka, Stephen; Gilman, Stephen E
2015-01-01
Children exposed to social adversity carry a greater risk of poor physical and mental health into adulthood. This increased risk is thought to be due, in part, to inflammatory processes associated with early adversity that contribute to the etiology of many adult illnesses. The current study asks whether aspects of the prenatal social environment are associated with levels of inflammation in adulthood, and whether prenatal and childhood adversity both contribute to adult inflammation. We examined associations of prenatal and childhood adversity assessed through direct interviews of participants in the Collaborative Perinatal Project between 1959 and 1974 with blood levels of C-reactive protein in 355 offspring interviewed in adulthood (mean age = 42.2 years). Linear and quantile regression models were used to estimate the effects of prenatal adversity and childhood adversity on adult inflammation, adjusting for age, sex, and race and other potential confounders. In separate linear regression models, high levels of prenatal and childhood adversity were associated with higher CRP in adulthood. When prenatal and childhood adversity were analyzed together, our results support the presence of an effect of prenatal adversity on (log) CRP level in adulthood (β = 0.73, 95% CI: 0.26, 1.20) that is independent of childhood adversity and potential confounding factors including maternal health conditions reported during pregnancy. Supplemental analyses revealed similar findings using quantile regression models and logistic regression models that used a clinically-relevant CRP threshold (>3 mg/L). In a fully-adjusted model that included childhood adversity, high prenatal adversity was associated with a 3-fold elevated odds (95% CI: 1.15, 8.02) of having a CRP level in adulthood that indicates high risk of cardiovascular disease. Social adversity during the prenatal period is a risk factor for elevated inflammation in adulthood independent of adversities during childhood. This evidence is consistent with studies demonstrating that adverse exposures in the maternal environment during gestation have lasting effects on development of the immune system. If these results reflect causal associations, they suggest that interventions to improve the social and environmental conditions of pregnancy would promote health over the life course. It remains necessary to identify the mechanisms that link maternal conditions during pregnancy to the development of fetal immune and other systems involved in adaptation to environmental stressors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Heterogeneous Effects of Fructose on Blood Lipids in Individuals With Type 2 Diabetes
Sievenpiper, John L.; Carleton, Amanda J.; Chatha, Sheena; Jiang, Henry Y.; de Souza, Russell J.; Beyene, Joseph; Kendall, Cyril W.C.; Jenkins, David J.A.
2009-01-01
OBJECTIVE Because of blood lipid concerns, diabetes associations discourage fructose at high intakes. To quantify the effect of fructose on blood lipids in diabetes, we conducted a systematic review and meta-analysis of experimental clinical trials investigating the effect of isocaloric fructose exchange for carbohydrate on triglycerides, total cholesterol, LDL cholesterol, and HDL cholesterol in type 1 and 2 diabetes. RESEARCH DESIGN AND METHODS We searched MEDLINE, EMBASE, CINAHL, and the Cochrane Library for relevant trials of ≥7 days. Data were pooled by the generic inverse variance method and expressed as standardized mean differences with 95% CI. Heterogeneity was assessed by χ2 tests and quantified by I2. Meta-regression models identified dose threshold and independent predictors of effects. RESULTS Sixteen trials (236 subjects) met the eligibility criteria. Isocaloric fructose exchange for carbohydrate raised triglycerides and lowered total cholesterol under specific conditions without affecting LDL cholesterol or HDL cholesterol. A triglyceride-raising effect without heterogeneity was seen only in type 2 diabetes when the reference carbohydrate was starch (mean difference 0.24 [95% CI 0.05–0.44]), dose was >60 g/day (0.18 [0.00–0.37]), or follow-up was ≤4 weeks (0.18 [0.00–0.35]). Piecewise meta-regression confirmed a dose threshold of 60 g/day (R2 = 0.13), equivalent to 10% of energy (R2 = 0.36). A total cholesterol–lowering effect without heterogeneity was seen only in type 2 diabetes under the following conditions: no randomization and poor study quality (−0.19 [−0.34 to −0.05]), dietary fat >30% energy (−0.33 [−0.52 to −0.15]), or crystalline fructose (−0.28 [−0.47 to −0.09]). Multivariate meta-regression analyses were largely in agreement. CONCLUSIONS Pooled analyses demonstrated conditional triglyceride-raising and total cholesterol–lowering effects of isocaloric fructose exchange for carbohydrate in type 2 diabetes. Recommendations and large-scale future trials need to address the heterogeneity in the data. PMID:19592634
Linking patient satisfaction with nursing care: the case of care rationing - a correlational study.
Papastavrou, Evridiki; Andreou, Panayiota; Tsangari, Haritini; Merkouris, Anastasios
2014-01-01
Implicit rationing of nursing care is the withholding of or failure to carry out all necessary nursing measures due to lack of resources. There is evidence supporting a link between rationing of nursing care, nurses' perceptions of their professional environment, negative patient outcomes, and placing patient safety at risk. The aims of the study were: a) To explore whether patient satisfaction is linked to nurse-reported rationing of nursing care and to nurses' perceptions of their practice environment while adjusting for patient and nurse characteristics. b) To identify the threshold score of rationing by comparing the level of patient satisfaction factors across rationing levels. A descriptive, correlational design was employed. Participants in this study included 352 patients and 318 nurses from ten medical and surgical units of five general hospitals. Three measurement instruments were used: the BERNCA scale for rationing of care, the RPPE scale to explore nurses' perceptions of their work environment and the Patient Satisfaction scale to assess the level of patient satisfaction with nursing care. The statistical analysis included the use of Kendall's correlation coefficient to explore a possible relationship between the variables and multiple regression analysis to assess the effects of implicit rationing of nursing care together with organizational characteristics on patient satisfaction. The mean score of implicit rationing of nursing care was 0.83 (SD = 0.52, range = 0-3), the overall mean of RPPE was 2.76 (SD = 0.32, range = 1.28 - 3.69) and the two scales were significantly correlated (τ = -0.234, p < 0.001). The regression analysis showed that care rationing and work environment were related to patient satisfaction, even after controlling for nurse and patient characteristics. The results from the adjusted regression models showed that even at the lowest level of rationing (i.e. 0.5) patients indicated low satisfaction. The results support the relationships between organizational and environmental variables, care rationing and patient satisfaction. The identification of thresholds at which rationing starts to influence patient outcomes in a negative way may allow nurse managers to introduce interventions so as to keep rationing at a level at which patient safety is not jeopardized.
NASA Astrophysics Data System (ADS)
Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo
2014-05-01
This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviation in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. The two methodologies were applied in an area of 116 km2 for which a database of 1200 landslides was available for the period 2000-2012, and the results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches remains a relatively undebated research topic.
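A hedged sketch of the intensity-duration step: fit a power law I = a·D^(−b) to the triggering rainstorms in log-log space and shift it down to a lower-bound envelope. The events listed are illustrative, not the Tuscan inventory, and MaCumBA's actual fitting rules may differ.

```python
# Lower-envelope intensity-duration threshold from (duration, intensity) pairs.
import numpy as np

D = np.array([2, 5, 10, 24, 48, 72], dtype=float)  # storm duration, h
I = np.array([25, 14, 9, 5, 3.2, 2.5])             # mean intensity, mm/h

b, log_a = np.polyfit(np.log(D), np.log(I), 1)     # regression in log-log space
resid = np.log(I) - (log_a + b * np.log(D))
log_a_env = log_a + resid.min()                    # drop to the lowest point

print(f"threshold: I = {np.exp(log_a_env):.2f} * D**({b:.2f})")
```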
Time series regression-based pairs trading in the Korean equities market
NASA Astrophysics Data System (ADS)
Kim, Saejoon; Heo, Jun
2017-07-01
Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising on low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define this rule, which has previously been identified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to those obtained by previous approaches on large-capitalisation stocks in the Korean equities market.
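For contrast with the time-series-regression rule the paper proposes, the fixed-threshold baseline can be sketched as below: regress one asset on its pair, standardize the residual spread, and trade on z-score triggers. Prices and trigger levels are synthetic assumptions.

```python
# Classic fixed-threshold pairs trading on a regression-defined spread.
import numpy as np

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(0, 1, 500)) + 100      # price series of asset X
y = 1.5 * x + rng.normal(0, 2, 500)             # cointegrated partner Y

beta, alpha = np.polyfit(x, y, 1)
spread = y - (alpha + beta * x)
z = (spread - spread.mean()) / spread.std()

OPEN = 2.0                                      # open trigger (close near z = 0)
signals = np.where(z > OPEN, -1, np.where(z < -OPEN, 1, 0))  # short/long spread
print(signals[-10:])
```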
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or across whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
Walsh, Colin G; Sharman, Kavya; Hripcsak, George
2017-12-01
Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
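A minimal sketch of Platt scaling, one of the calibration methods compared above: a one-dimensional logistic regression refits the model's raw scores to outcomes on a validation set. Scores and outcomes are illustrative; fitting on the raw score (or its logit) is a common implementation choice.

```python
# Platt scaling: recalibrate raw readmission scores with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

raw = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.6, 0.55, 0.1])  # uncalibrated scores
y = np.array([1, 1, 0, 0, 0, 1, 0, 0])                      # 30-day readmission

platt = LogisticRegression().fit(raw.reshape(-1, 1), y)
calibrated = platt.predict_proba(raw.reshape(-1, 1))[:, 1]
print(np.round(calibrated, 2))
```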
Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M
2017-06-30
There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcomes and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. The optimal post-treatment prevalence threshold for determining elimination is 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. The number of communities in a cluster is a key indicator of the probability of elimination and PPV. Extending the time, post-study endpoint, at which the threshold statistic is measured improves the PPV in discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities of elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
Neurophysiological correlates of abnormal somatosensory temporal discrimination in dystonia.
Antelmi, Elena; Erro, Roberto; Rocchi, Lorenzo; Liguori, Rocco; Tinazzi, Michele; Di Stasio, Flavio; Berardelli, Alfredo; Rothwell, John C; Bhatia, Kailash P
2017-01-01
Somatosensory temporal discrimination threshold is often prolonged in patients with dystonia. Previous evidence suggested that this might be caused by impaired somatosensory processing in the time domain. Here, we tested if other markers of reduced inhibition in the somatosensory system might also contribute to abnormal somatosensory temporal discrimination in dystonia. Somatosensory temporal discrimination threshold was measured in 19 patients with isolated cervical dystonia and 19 age-matched healthy controls. We evaluated temporal somatosensory inhibition using paired-pulse somatosensory evoked potentials, spatial somatosensory inhibition by measuring the somatosensory evoked potentials interaction between simultaneous stimulation of the digital nerves in thumb and index finger, and Gamma-aminobutyric acid-ergic (GABAergic) sensory inhibition using the early and late components of high-frequency oscillations in digital nerves somatosensory evoked potentials. When compared with healthy controls, dystonic patients had longer somatosensory temporal discrimination thresholds, reduced suppression of cortical and subcortical paired-pulse somatosensory evoked potentials, less spatial inhibition of simultaneous somatosensory evoked potentials, and a smaller area of the early component of the high-frequency oscillations. A logistic regression analysis found that paired pulse suppression of the N20 component at an interstimulus interval of 5 milliseconds and the late component of the high-frequency oscillations were independently related to somatosensory temporal discrimination thresholds. "Dystonia group" was also a predictor of enhanced somatosensory temporal discrimination threshold, indicating a dystonia-specific effect that independently influences this threshold. Increased somatosensory temporal discrimination threshold in dystonia is related to reduced activity of inhibitory circuits within the primary somatosensory cortex. © 2016 International Parkinson and Movement Disorder Society.
Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression
NASA Astrophysics Data System (ADS)
Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S.
2017-11-01
We address the prediction of low-visibility events at airports using machine-learning regression. The proposed model successfully forecasts low-visibility events in terms of the runway visual range at the airport, with the use of support-vector regression, neural networks (multi-layer perceptrons and extreme-learning machines) and Gaussian-process algorithms. We assess the performance of these algorithms based on real data collected at the Valladolid airport, Spain. We also study the atmospheric variables measured at a nearby tower that relate to low-visibility atmospheric conditions, since they serve as the inputs of the different regressors. A pre-processing procedure of these input variables with wavelet transforms is also described. The results show that the proposed machine-learning algorithms are able to predict low-visibility events well. The Gaussian process is the best algorithm among those analyzed, obtaining a correct classification rate of over 98% for low-visibility events when the runway visual range is >1000 m, and about 80% below this threshold. The performance of all the machine-learning algorithms tested is clearly affected in extreme low-visibility conditions (<500 m). However, we show improved results for all the methods when data from a neighbouring meteorological tower are included, and also with a pre-processing scheme using a wavelet transform. Also presented are results of the algorithm performance in daytime and nighttime conditions, and for different prediction time horizons.
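As a rough illustration of the regression setup, a Gaussian-process regressor for runway visual range from tower measurements might look as follows in scikit-learn. The kernel choice, variable names, and synthetic data are assumptions for the sketch, not the authors' configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X: atmospheric predictors from the met tower (e.g., wind, humidity, temperature);
# y: runway visual range in metres. Placeholder data stands in for real records.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 1000 + 300 * X[:, 0] - 200 * X[:, 1] + rng.normal(scale=50, size=200)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

rvr_pred, rvr_std = gpr.predict(X[:5], return_std=True)  # mean and uncertainty
low_vis = rvr_pred < 1000  # classify low-visibility events by thresholding RVR
```

One appeal of the Gaussian process here is that `return_std=True` gives a predictive uncertainty alongside each visual-range estimate, which is useful when a hard threshold converts the regression into an event classification.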
Assessing the potential for improving S2S forecast skill through multimodel ensembling
NASA Astrophysics Data System (ADS)
Vigaud, N.; Robertson, A. W.; Tippett, M. K.; Wang, L.; Bell, M. J.
2016-12-01
Non-linear logistic regression is well suited to probability forecasting and has been successfully applied in the past to ensemble weather and climate predictions, providing access to the full probability distribution without any Gaussian assumption. However, little work has been done at sub-monthly lead times, where relatively small re-forecast ensemble sizes and lengths present new challenges for which post-processing avenues have yet to be investigated. A promising approach consists in extending the definition of non-linear logistic regression by including a quantile of the forecast distribution as one of the predictors. So-called Extended Logistic Regression (ELR), which enables mutually consistent individual threshold probabilities, is here applied to ECMWF, CFSv2 and CMA re-forecasts from the S2S database in order to produce rainfall probabilities at weekly resolution. The ELR model is trained on seasonally-varying tercile categories computed for lead times of 1 to 4 weeks. It is then tested in a cross-validated manner, i.e. allowing real-time predictability applications, to produce rainfall tercile probabilities from individual weekly hindcasts, which are finally combined by equal pooling. Results are discussed over a broad North American region, where individual and MME forecasts generated out to 4 weeks lead are characterized by good probabilistic reliability but low sharpness, exhibiting systematically more skill in winter than in summer.
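As a sketch of the ELR idea, the exceedance probability for any threshold quantile q can be modeled as expit(a + b·f + c·g(q)), with a single fit across all thresholds. The code below assumes the ensemble mean f as the forecast predictor and g(q) = sqrt(q), a transform commonly attributed to Wilks for precipitation; neither choice is confirmed by the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_elr(ens_mean, y, quantiles):
    """Fit P(y <= q) = expit(a + b*ens_mean + c*sqrt(q)) by maximum likelihood.
    Each training case contributes one Bernoulli term per threshold q."""
    def nll(theta):
        a, b, c = theta
        ll = 0.0
        for q in quantiles:
            p = np.clip(expit(a + b * ens_mean + c * np.sqrt(q)), 1e-9, 1 - 1e-9)
            z = (y <= q).astype(float)
            ll += np.sum(z * np.log(p) + (1 - z) * np.log(1 - p))
        return -ll
    return minimize(nll, x0=np.zeros(3), method="Nelder-Mead").x
```

Because the same (a, b, c) serve every threshold, the fitted cumulative probabilities cannot cross, which is the mutual-consistency property the abstract highlights.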
Ospina, P A; Nydam, D V; Stokol, T; Overton, T R
2010-02-01
The objectives of this study were to 1) establish cow-level critical thresholds for serum concentrations of nonesterified fatty acids (NEFA) and beta-hydroxybutyrate (BHBA) to predict periparturient diseases [displaced abomasa (DA), clinical ketosis (CK), metritis and retained placenta, or any of these three], and 2) investigate the magnitude of the metabolites' association with these diseases within 30 d in milk. In a prospective cohort study of 100 freestall, total mixed ration-fed herds in the northeastern United States, blood samples were collected from approximately 15 prepartum and 15 different postpartum transition animals in each herd, for a total of 2,758 samples. Serum NEFA concentrations were measured in the prepartum group, and both NEFA and BHBA were measured in the postpartum group. The critical thresholds for NEFA or BHBA were evaluated with receiver operator characteristic analysis for all diseases in both cohorts. The risk ratios (RR) of a disease outcome given NEFA or BHBA concentrations and other covariates were modeled with multivariable regression techniques, accounting for clustering of cows within herds. The NEFA critical threshold that predicted any of the 3 diseases was 0.29 mEq/L in the prepartum cohort and 0.57 mEq/L in the postpartum cohort. The critical threshold for serum BHBA in the postpartum cohort was 10 mg/dL, which predicted any of the 3 diseases. All RR with NEFA as a predictor of disease were >1.8; however, RR were greatest in animals sampled postpartum (e.g., RR for DA=9.7; 95% CI=4.2 to 22.4). All RR with BHBA as the predictor of disease were >2.3 (e.g., RR for DA=6.9; 95% CI=3.7 to 12.9). Although prepartum NEFA and postpartum BHBA were both significantly associated with development of clinical disease, postpartum serum NEFA concentration was most associated with the risk of developing DA, CK, metritis, or retained placenta during the first 30 d in milk. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
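The receiver operator characteristic step used to pick these critical thresholds can be sketched in a few lines. The version below selects the cutoff maximizing Youden's J (sensitivity + specificity − 1), one common optimality rule; the abstract does not state which rule the authors actually used.

```python
import numpy as np
from sklearn.metrics import roc_curve

def critical_threshold(metabolite, disease):
    """ROC-based critical threshold for a serum metabolite (e.g., NEFA, BHBA)
    predicting a binary disease outcome; maximizes Youden's J statistic."""
    fpr, tpr, thresholds = roc_curve(disease, metabolite)
    return thresholds[np.argmax(tpr - fpr)]
```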
Polinski, Jennifer M; Shrank, William H; Huskamp, Haiden A; Glynn, Robert J; Liberman, Joshua N; Schneeweiss, Sebastian
2011-08-01
Nations are struggling to expand access to essential medications while curbing rising health and drug spending. While the US government's Medicare Part D drug insurance benefit expanded elderly citizens' access to drugs, it also includes a controversial period called the "coverage gap" during which beneficiaries are fully responsible for drug costs. We examined the impact of entering the coverage gap on drug discontinuation, switching to another drug for the same indication, and drug adherence. While increased discontinuation of, and reduced adherence to, essential medications are regrettable responses, increased switching to less expensive but therapeutically interchangeable medications is a positive response to minimize costs. We followed 663,850 Medicare beneficiaries enrolled in Part D or retiree drug plans with prescription and health claims in 2006 and/or 2007 to determine who reached the gap spending threshold, n = 217,131 (33%). In multivariate Cox proportional hazards models, we compared drug discontinuation and switching rates in selected drug classes after reaching the threshold between the 1,993 beneficiaries who had no financial assistance during the coverage gap (exposed) and 9,965 multivariate propensity score-matched comparators with financial assistance (unexposed). Multivariate logistic regressions compared drug adherence (≤80% versus >80% of days covered). Beneficiaries reached the gap spending threshold after an average of 222 ± 79 d. At the drug level, exposed beneficiaries were twice as likely to discontinue (hazard ratio [HR] = 2.00, 95% confidence interval [CI] 1.64-2.43) but less likely to switch a drug (HR = 0.60, 0.46-0.78) after reaching the threshold. Gap-exposed beneficiaries were slightly more likely to have reduced adherence (OR = 1.07, 0.98-1.18). A lack of financial assistance after reaching the gap spending threshold was associated with a doubling of the rate of discontinuing essential medications, but not with switching drugs, in 2006 and 2007. Blunt cost-containment features such as the coverage gap have an adverse impact on drug utilization that may conceivably affect health outcomes.
Murphy, Colin T; Galloway, Thomas J; Handorf, Elizabeth A; Egleston, Brian L; Wang, Lora S; Mehra, Ranee; Flieder, Douglas B; Ridge, John A
2016-01-10
To estimate the overall survival (OS) impact from increasing time to treatment initiation (TTI) for patients with head and neck squamous cell carcinoma (HNSCC). Using the National Cancer Data Base (NCDB), we examined patients who received curative therapy for the following sites: oral tongue, oropharynx, larynx, and hypopharynx. TTI was the number of days from diagnosis to initiation of curative treatment. The effect of TTI on OS was determined by using Cox regression models (MVA). Recursive partitioning analysis (RPA) identified TTI thresholds via conditional inference trees to estimate the greatest differences in OS on the basis of randomly selected training and validation sets, and repeated this 1,000 times to ensure robustness of TTI thresholds. A total of 51,655 patients were included. On MVA, TTI of 61 to 90 days versus less than 30 days (hazard ratio [HR], 1.13; 95% CI, 1.08 to 1.19) independently increased mortality risk. TTI of 67 days appeared as the optimal threshold on the training RPA, statistical significance was confirmed in the validation set (P < .001), and the 67-day TTI was the optimal threshold in 54% of repeated simulations. Overall, 96% of simulations validated two optimal TTI thresholds, with ranges of 46 to 52 days and 62 to 67 days. The median OS for TTI of 46 to 52 days or fewer versus 53 to 67 days versus greater than 67 days was 71.9 months (95% CI, 70.3 to 73.5 months) versus 61 months (95% CI, 57 to 66.1 months) versus 46.6 months (95% CI, 42.8 to 50.7 months), respectively (P < .001). In the most recent year with available data (2011), 25% of patients had TTI of greater than 46 days. TTI independently affects survival. One in four patients experienced treatment delay. TTI of greater than 46 to 52 days introduced an increased risk of death that was most consistently detrimental beyond 60 days. Prolonged TTI is currently affecting survival. © 2015 by American Society of Clinical Oncology.
Murphy, Colin T.; Handorf, Elizabeth A.; Egleston, Brian L.; Wang, Lora S.; Mehra, Ranee; Flieder, Douglas B.; Ridge, John A.
2016-01-01
Purpose To estimate the overall survival (OS) impact from increasing time to treatment initiation (TTI) for patients with head and neck squamous cell carcinoma (HNSCC). Methods Using the National Cancer Data Base (NCDB), we examined patients who received curative therapy for the following sites: oral tongue, oropharynx, larynx, and hypopharynx. TTI was the number of days from diagnosis to initiation of curative treatment. The effect of TTI on OS was determined by using Cox regression models (MVA). Recursive partitioning analysis (RPA) identified TTI thresholds via conditional inference trees to estimate the greatest differences in OS on the basis of randomly selected training and validation sets, and repeated this 1,000 times to ensure robustness of TTI thresholds. Results A total of 51,655 patients were included. On MVA, TTI of 61 to 90 days versus less than 30 days (hazard ratio [HR], 1.13; 95% CI, 1.08 to 1.19) independently increased mortality risk. TTI of 67 days appeared as the optimal threshold on the training RPA, statistical significance was confirmed in the validation set (P < .001), and the 67-day TTI was the optimal threshold in 54% of repeated simulations. Overall, 96% of simulations validated two optimal TTI thresholds, with ranges of 46 to 52 days and 62 to 67 days. The median OS for TTI of 46 to 52 days or fewer versus 53 to 67 days versus greater than 67 days was 71.9 months (95% CI, 70.3 to 73.5 months) versus 61 months (95% CI, 57 to 66.1 months) versus 46.6 months (95% CI, 42.8 to 50.7 months), respectively (P < .001). In the most recent year with available data (2011), 25% of patients had TTI of greater than 46 days. Conclusion TTI independently affects survival. One in four patients experienced treatment delay. TTI of greater than 46 to 52 days introduced an increased risk of death that was most consistently detrimental beyond 60 days. Prolonged TTI is currently affecting survival. PMID:26628469
Xu, Yifang; Collins, Leslie M
2004-04-01
The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
Evaluation of Pressure Pain Threshold as a Measure of Perceived Stress and High Job Strain.
Hven, Lisbeth; Frost, Poul; Bonde, Jens Peter Ellekilde
2017-01-01
To investigate whether pressure pain threshold (PPT), determined by pressure algometry, can be used as an objective measure of perceived stress and job strain. We used cross-sectional baseline data collected during 1994 to 1995 within the Project on Research and Intervention in Monotonous Work (PRIM), which included 3123 employees from a variety of Danish companies. Questionnaire data included 18 items on stress symptoms and 23 items from the Karasek scale on job strain; information on discomfort in specified anatomical regions was also collected. Clinical examinations included pressure pain algometry measurements of PPT on the trapezius and supraspinatus muscles and the tibia. Associations of stress symptoms and job strain with PPT at each site were analyzed for men and women separately, with adjustment for age, body mass index, and discomfort in the anatomical region closest to the point of pressure algometry, using multivariable linear regression. We found significant inverse associations between perceived stress and PPT in both genders in models adjusting for age and body mass index: the higher the level of perceived stress, the lower the threshold. For job strain, associations were weaker and only present in men. In men, all associations were attenuated when adjusting for reported discomfort in regions close to the site of pressure algometry. The distributions of PPT among stressed and non-stressed persons overlapped strongly. Despite significant associations between perceived stress and PPT, the discriminative capability of PPT to distinguish individuals with and without stress is low. PPT measured by pressure algometry therefore does not appear applicable as a diagnostic tool for a state of mental stress.
Tromboni, F; Dodds, W K
2017-07-01
Nutrient enrichment in streams due to land use is increasing globally, reducing water quality and causing eutrophication of downstream fresh and coastal waters. In temperate developed countries, the intensive use of fertilizers in agriculture is a main driver of increasing nutrient concentrations, but high levels and fast rates of urbanization can be the predominant issue in some areas of the developing world. We investigated land use in the highly urbanized tropical State of Rio de Janeiro, Brazil. We collected total nitrogen, total phosphorus, and inorganic nutrient data from 35 independent watersheds distributed across the State and characterized land use at riparian and entire-watershed scales upstream from each sample station, using ArcGIS. We used regression models to explain land use influences on nutrient concentrations and to assess the relationship of riparian protection to water quality. We found that urban land use was the primary driver of nutrient concentration increases, independent of the scale of analysis, and that urban land use was more concentrated in the riparian buffer of streams than in the entire watersheds. We also found significant thresholds indicating strong increases in nutrient concentrations with modest increases in urbanization, with maximum nutrient concentrations reached between 10 and 46% urban cover. These thresholds influenced the calculation of reference nutrient concentrations, and ignoring them led to higher estimates of these concentrations. Lack of sewage treatment, in concert with urban development in riparian zones, apparently explains why modest increases in urban land use can cause large increases in nutrient concentrations.
Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas
2018-04-24
Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome with deafness duration and speech recognition at t0 improved from adjusted R2 = 0.30 to adjusted R2 = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model those phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification to benchmark extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying threshold distributions seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
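The reduced-form idea, that both yield and control cost are functions of the chosen threshold, can be made concrete with a toy optimization. The functional forms and coefficients below are hypothetical, purely to illustrate maximizing the gap between gross income and control cost; they are not the paper's estimated equations.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profit(threshold, price=2.0):
    """Hypothetical reduced-form relations: looser (higher) thresholds lower
    yield but also lower pest-control cost."""
    gross_income = price * (10.0 - 0.6 * threshold)  # yield falls with threshold
    control_cost = 8.0 * np.exp(-0.4 * threshold)    # stricter control costs more
    return gross_income - control_cost

res = minimize_scalar(lambda t: -profit(t), bounds=(0.0, 10.0), method="bounded")
print(f"profit-maximizing economic threshold: {res.x:.2f}")  # ~2.45 here
```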
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, providing an additional 13 years of peak-discharge data since a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high-elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. Across regions, the average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was made in that investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can also be transferred to ungaged sites on the same stream: peak discharge can be estimated at an ungaged site by direct application of the regional regression equation, or, at an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
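As a sketch of the frequency-analysis step, a log-Pearson Type III distribution can be fitted to annual peak discharges and queried for the 100-year quantile with SciPy. This is illustrative only: the USGS procedure handles the low-discharge threshold with a conditional-probability adjustment and uses weighted skew, rather than the simple censoring and sample skew shown here.

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks, aep=0.01, low_threshold=None):
    """Fit a log-Pearson Type III distribution to annual peak discharges and
    return the discharge with annual exceedance probability `aep`
    (aep=0.01 gives the 100-year flood)."""
    peaks = np.asarray(peaks, dtype=float)
    if low_threshold is not None:
        peaks = peaks[peaks >= low_threshold]  # crude stand-in for censoring
    logq = np.log10(peaks)
    skew, loc, scale = stats.pearson3.fit(logq)  # Pearson III on the logs
    return 10 ** stats.pearson3.ppf(1 - aep, skew, loc=loc, scale=scale)

# Example with synthetic peaks (cubic feet per second):
rng = np.random.default_rng(42)
q100 = lp3_quantile(rng.lognormal(7.0, 0.6, size=60), aep=0.01, low_threshold=200)
```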
The threshold of a stochastic delayed SIR epidemic model with temporary immunity
NASA Astrophysics Data System (ADS)
Liu, Qun; Chen, Qingmei; Jiang, Daqing
2016-05-01
This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
Denis, Cécile; Fatséas, Mélina; Auriacombe, Marc
2012-04-01
The DSM-5 Substance-Related Disorders Work Group proposed to include Pathological Gambling within the current Substance-Related Disorders section. The objective of the current report was to assess four possible sets of diagnostic criteria for Pathological Gambling. Gamblers (N=161) were classified as either Pathological or Non-Pathological according to four classification methods. (a) Option 1: the current DSM-IV criteria for Pathological Gambling; (b) Option 2: dropping the "Illegal Acts" criterion, while keeping the threshold at 5 required criteria endorsed; (c) Option 3: the proposed DSM-5 approach, i.e., deleting "Illegal Acts" and lowering the threshold of required criteria from 5 to 4; (d) Option 4: using a set of Pathological Gambling criteria modeled on the DSM-IV Substance Dependence criteria. Cronbach's alpha and eigenvalues were calculated for reliability; Phi coefficients, discriminant function analyses, correlations, and multivariate regression models were used for validity; and kappa coefficients were calculated for the diagnostic consistency of each option. All criteria sets were reliable and valid. Some criteria had higher discriminant properties than others. The proposed DSM-5 criteria in Options 2 and 3 performed well and did not appear to alter the meaning of the diagnosis of Pathological Gambling from DSM-IV. Future work should further explore whether Pathological Gambling might be assessed using the same criteria as those used for Substance Use Disorders. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Windisch, Stephanie; Seiberl, Wolfgang; Schwirtz, Ansgar; Hahn, Daniel
2017-01-01
The aim of this study was to quantify the physical demands of a simulated firefighting circuit and to establish the relationship between job performance and endurance and strength fitness measurements. On four separate days, 41 professional firefighters (39 ± 9 yr, 179.6 ± 2.3 cm, 84.4 ± 9.2 kg, BMI 26.1 ± 2.8 kg/m2) performed treadmill testing, fitness testing (strength, balance and flexibility) and a simulated firefighting exercise. The firefighting exercise included ladder climbing (20 m), treadmill walking (200 m), pulling a wire rope hoist (15 times) and crawling through an orientation section (50 m). Firefighting performance during the simulated exercise was evaluated by a simple time-strain-air depletion (TSA) model taking the sum of the z-transformed parameters of time to finish the exercise, strain in terms of mean heart rate, and air depletion from the breathing apparatus. Multiple regression analysis based on the TSA model served to identify the physiological determinants most relevant for professional firefighting. Three main factors with great influence on firefighting performance were identified (70.1% of total explained variance): VO2peak, the time firefighters exercised below their individual ventilatory threshold, and mean breathing frequency. Based on the identified main factors influencing firefighting performance, we recommend periodic preventive health screening for incumbents to monitor peak VO2 and individual ventilatory threshold. PMID:28303944
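The TSA composite follows directly from its stated definition: z-transform each of the three parameters across the cohort and sum them. A minimal sketch (variable names are illustrative; the equal weighting of the three z-scores follows the definition as described, though the authors' exact scoring conventions are not given):

```python
import numpy as np
from scipy.stats import zscore

def tsa_score(time_to_finish, mean_hr, air_depletion):
    """Time-strain-air depletion composite: the sum of z-transformed finish
    time, mean heart rate and air use; lower scores mean better performance."""
    return zscore(time_to_finish) + zscore(mean_hr) + zscore(air_depletion)

# Example with synthetic cohort data for 41 firefighters:
rng = np.random.default_rng(7)
scores = tsa_score(rng.normal(600, 60, 41),   # seconds to finish
                   rng.normal(160, 10, 41),   # mean heart rate (bpm)
                   rng.normal(1200, 150, 41)) # air used (litres)
```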
The Dubna-Mainz-Taipei Dynamical Model for πN Scattering and π Electromagnetic Production
NASA Astrophysics Data System (ADS)
Yang, Shin Nan
Some of the featured results of the Dubna-Mainz-Taipei (DMT) dynamical model for πN scattering and π electromagnetic production are summarized. These include results for threshold π0 production, deformation of the Δ(1232), and the extracted properties of higher resonances below 2 GeV. The excellent agreement of the DMT model's predictions with threshold π0 production data, including the recent precision measurements from MAMI, establishes the DMT model as a benchmark for experimentalists and theorists dealing with threshold pion production.
Yang, Kun; Yu, Zhenyu; Luo, Yi; Yang, Yang; Zhao, Lei; Zhou, Xiaolu
2018-05-15
Global warming and rapid urbanization in China have caused a series of ecological problems. One consequence has been the degradation of lake water environments. Lake surface water temperatures (LSWTs) significantly shape water ecological environments and are highly correlated with watershed ecosystem features and biodiversity levels. Analysing and predicting spatiotemporal changes in LSWT and exploring the corresponding impacts on water quality is essential for controlling and improving the ecological water environment of watersheds. In this study, Dianchi Lake was examined through an analysis of 54 water quality indicators from 10 water quality monitoring sites from 2005 to 2016. Support vector regression (SVR), Principal Component Analysis (PCA) and Back Propagation Artificial Neural Network (BPANN) methods were applied to form a hybrid forecasting model. A geospatial analysis was conducted to observe historical LSWT and water quality changes for Dianchi Lake from 2005 to 2016. Based on the constructed model, LSWTs and changes in water quality were simulated for 2017 to 2020. The relationship between LSWTs and water quality thresholds was studied. The results show limited errors and highly generalizable predictive performance. In addition, a spatial visualization analysis shows that from 2005 to 2020, chlorophyll-a (Chla), chemical oxygen demand (COD) and total nitrogen (TN) diffused from north to south and that ammonia nitrogen (NH3-N) and total phosphorus (TP) levels are increasing in the northern part of Dianchi Lake, where LSWT levels exceed 17°C. The LSWT threshold is 17.6-18.53°C, which falls within the threshold range for nutritional water quality, but COD and TN levels fall below Class V water quality standards. Transparency (Trans), COD, biochemical oxygen demand (BOD) and Chla levels show a close relationship with LSWT, and LSWTs are found to fundamentally affect lake cyanobacterial blooms. Copyright © 2017 Elsevier B.V. All rights reserved.
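As an illustration of one component of the hybrid model, an SVR regressor for LSWT from water quality indicators might be set up as follows in scikit-learn. The feature names, synthetic data, and hyperparameters are assumptions; the paper's full pipeline also involves PCA for feature reduction and a BPANN stage not shown here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: selected water quality indicators per site and month; y: lake surface
# water temperature in degrees C. Placeholder data stands in for the records.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = 17.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=300)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X, y)
lswt_pred = svr.predict(X[:12])  # e.g., forecast the next 12 months
```

Scaling the inputs before the RBF-kernel SVR matters in practice, since the kernel distance is otherwise dominated by whichever indicator has the largest numeric range.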
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
Pressure pain thresholds and musculoskeletal morbidity in automobile manufacturing workers.
Gold, Judith E; Punnett, Laura; Katz, Jeffrey N
2006-02-01
Reduced pressure pain thresholds (PPTs) have been reported in occupational groups with symptoms of upper extremity musculoskeletal disorders (UEMSDs). The purpose of this study was to determine whether automobile manufacturing workers (n=460) with signs and symptoms of UEMSDs had reduced PPTs (greater sensitivity to pain through pressure applied to the skin) when compared with unaffected members of the cohort, which served as the reference group. The association of PPTs with symptom severity and localization of PE findings was investigated, as was the hypothesis that reduced thresholds would be found on the affected side in those with unilateral physical examination (PE) findings. PPTs were measured during the workday at 12 upper extremity sites. A PE for signs of UEMSDs and symptom questionnaire was administered. After comparison of potential covariates using t tests, linear regression multivariable models were constructed with the average of 12 sites (avgPPT) as the outcome. Subjects with PE findings and/or symptoms had a statistically significant lower avgPPT than non-cases. AvgPPT was reduced in those with more widespread PE findings and in those with greater symptom severity (test for trend, P=0.05). No difference between side-specific avgPPT was found in those with unilateral PE findings. Reduced PPTs were associated with female gender, increasing age, and grip strength below the gender-adjusted mean. After adjusting for the above confounders, avgPPT was associated with muscle/tendon PE findings and symptom severity in multivariable models. PPTs were associated with signs and symptoms of UEMSDs, after adjusting for gender, age and grip strength. The utility of this noninvasive testing modality should be assessed on the basis of prospective large cohort studies to determine if low PPTs are predictive of UEMSDs in asymptomatic individuals or of progression and spread of UEMSDs from localized to more diffuse disorders.
Development and Current Status of the “Cambridge” Loudness Models
2014-01-01
This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below 1 or above 1 completely determines whether the disease goes extinct or prevails, for any size of the white noise. Besides, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
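A numerical companion to this kind of analysis: a stochastic SIR model with saturated incidence can be integrated with the Euler-Maruyama scheme and the long-run time mean of I(t) inspected to distinguish persistence from extinction. The parameter values and the exact placement of the noise term below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def simulate_sir(beta=0.5, gamma=0.1, mu=0.02, Lam=0.02, a=0.1, sigma=0.2,
                 S0=0.9, I0=0.1, T=500.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of an SIR model whose saturated incidence
    beta*S*I/(1+a*I) is perturbed by white noise of intensity sigma."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    S, I = S0, I0
    path = np.empty(n)
    for k in range(n):
        inc = beta * S * I / (1 + a * I)
        dB = rng.normal(0.0, np.sqrt(dt))
        S += (Lam - mu * S - inc) * dt - sigma * S * I / (1 + a * I) * dB
        I += (inc - (mu + gamma) * I) * dt + sigma * S * I / (1 + a * I) * dB
        S, I = max(S, 0.0), max(I, 0.0)
        path[k] = I
    return path

# A positive time-average of the tail of I(t) indicates persistence in the
# mean; decay toward zero indicates extinction.
tail_mean = simulate_sir()[-10000:].mean()
```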
NASA Astrophysics Data System (ADS)
Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris
2013-04-01
Near-shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance situated at the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for the joint probabilities. The considered variables are: wind velocity and direction, water level, and significant offshore wave height and wave period. In a first step, a univariate extreme value distribution has been determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the generalized Pareto distribution (GPD) family is selected. An optimal threshold for peak-over-threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure of the primary random variables is modeled by an extreme value copula. Eventually the multivariate domain of variables was stratified into different classes, each representing a combination of variable quantiles with a joint probability, which are used for model simulation. The main variable is the wind velocity, as in the area of concern extreme wave conditions are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is water level. In shallow waters extreme waves are directly affected by water depth. Hence the joint probability of occurrence of water level and wave height is of major importance for the design of coastal defense structures. Wind velocity and water levels are only dependent for some wind directions (wind-induced setup). Dependent directions are detected using Kendall and Spearman tests and appear to be those with the longest fetch. For these directions, the wind velocity and water level extreme value distributions are multivariately linked through a Gumbel copula. These distributions are stratified into classes for which the frequency of occurrence can be calculated. For the remaining directions the univariate extreme wind velocity distribution is stratified, with each class combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression on the extreme wind velocity at the offshore location. The regression line and the 95% confidence limits were combined with each class. Eventually the wave period is computed by a further regression on the significant wave height. In this way 1103 synthetic events were selected and simulated with the SWAN wave model, for each of which a frequency of occurrence is calculated. Hence near-shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near-shore wave heights is determined by sorting the model results in descending order and accumulating the corresponding frequencies. This approach allows determination of conditional return periods. For example, for the imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
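The univariate step, choosing a peak-over-threshold level via the mean excess function and fitting a GPD to the exceedances, can be sketched with SciPy. This is a minimal sketch only; the study's actual estimation, directional treatment, and copula fitting are considerably richer.

```python
import numpy as np
from scipy.stats import genpareto

def mean_excess(x, thresholds):
    """Mean excess function e(u) = E[X - u | X > u]; a region where e(u) is
    approximately linear in u suggests a valid GPD threshold range."""
    return np.array([(x[x > u] - u).mean() for u in thresholds])

def fit_pot(x, threshold):
    """Fit a generalized Pareto distribution to exceedances over `threshold`;
    returns (shape, scale, exceedance rate)."""
    exc = x[x > threshold] - threshold
    c, _, scale = genpareto.fit(exc, floc=0.0)
    return c, scale, exc.size / x.size

# Example with synthetic wind speeds (m/s):
rng = np.random.default_rng(3)
wind = rng.gumbel(10.0, 3.0, size=5000)
e_u = mean_excess(wind, np.linspace(12, 20, 9))  # inspect for linearity
shape, scale, rate = fit_pot(wind, threshold=16.0)
```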
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km2), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km2) was over five times greater than it was for supporting two or more species of colubrids (6.7 km2). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
Dantrolene Reduces the Threshold and Gain for Shivering
Lin, Chun-Ming; Neeru, Sharma; Doufas, Anthony G.; Liem, Edwin; Shah, Yunus Muneer; Wadhwa, Anupama; Lenhardt, Rainer; Bjorksten, Andrew; Kurz, Andrea
2005-01-01
Dantrolene is used for treatment of life-threatening hyperthermia, yet its thermoregulatory effects are unknown. We tested the hypothesis that dantrolene reduces the threshold (triggering core temperature) and gain (incremental increase) of shivering. With IRB approval and informed consent, healthy volunteers were evaluated on two random days: control and dantrolene (≈2.5 mg/kg plus a continuous infusion). In Study 1, 9 men were warmed until sweating was provoked and then cooled until arterio-venous shunt constriction and shivering occurred. Sweating was quantified on the chest using a ventilated capsule. Absolute right middle fingertip blood flow was quantified using venous-occlusion volume plethysmography. A sustained increase in oxygen consumption identified the shivering threshold. In Study 2, 9 men were given cold Ringer's solution IV to reduce core temperature ≈2°C/h. Cooling was stopped when shivering intensity no longer increased with further core cooling. The gain of shivering was the slope of the oxygen consumption vs. core temperature regression. In Study 1, sweating and vasoconstriction thresholds were similar on both days. In contrast, the shivering threshold decreased by 0.3±0.3°C (P=0.004) on the dantrolene day. In Study 2, dantrolene decreased the shivering threshold from 36.7±0.2 to 36.3±0.3°C (P=0.01) and the systemic gain from 353±144 to 211±93 ml·min−1·°C−1 (P=0.02). Thus, dantrolene substantially decreased the gain of shivering, but produced little central thermoregulatory inhibition. PMID:15105208
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
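A minimal NumPy sketch of the adaptively soft-thresholded singular value decomposition that yields the global solution: each singular value is shrunk by a weight that decreases as the singular value grows. The inverse-power weighting below mirrors the adaptive-lasso style of weight and is our assumption; the paper's general framework allows other decreasing weight choices.

```python
import numpy as np

def adaptive_svt(X, lam, gamma=2.0):
    """Adaptive singular value thresholding: shrink each singular value d_i by
    lam * d_i^(-gamma), so large singular values are shrunk less, then rebuild
    the low-rank approximation from the surviving components."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    w = (d + 1e-12) ** (-gamma)          # adaptive weights decrease with d
    d_shrunk = np.maximum(d - lam * w, 0.0)
    r = int((d_shrunk > 0).sum())        # estimated rank after shrinkage
    return (U[:, :r] * d_shrunk[:r]) @ Vt[:r]

# Example: denoise a noisy rank-2 matrix.
rng = np.random.default_rng(5)
low_rank = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
X_hat = adaptive_svt(low_rank + 0.1 * rng.normal(size=(50, 30)), lam=1.0)
```

The continuity of the solution path mentioned in the abstract follows from the soft (rather than hard) form of the thresholding.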
Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Pang; Yu, Yue
2017-05-01
This paper presents a position estimation model for a switched reluctance motor based on a single threshold angle. In view of the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold angle position (7.5° threshold angle, 12/8 SRM). The sensorless model is built in Matlab/Simulink, simulations are implemented under both steady-state and transient conditions, and the validity and feasibility of the method are verified.
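The flux-linkage comparison at the heart of the method can be sketched as follows: integrate the phase voltage equation to estimate flux linkage online, then compare it against the stored flux-linkage characteristic evaluated at the threshold angle for the present current. Signal names and the lookup table are placeholders standing in for the Simulink model, not the authors' implementation.

```python
import numpy as np

def estimate_flux_linkage(v, i, R, dt):
    """Running flux-linkage estimate from the phase voltage equation:
    psi(t) = integral of (v - R*i) dt, computed by cumulative summation."""
    return np.cumsum(v - R * i) * dt

def rotor_at_threshold(psi_est, i_now, psi_ref_current, psi_ref_values):
    """Compare the present flux-linkage estimate with the stored
    psi(threshold angle, current) characteristic; a crossing indicates the
    rotor passing the single threshold angle (7.5 deg for a 12/8 SRM)."""
    psi_ref = np.interp(i_now, psi_ref_current, psi_ref_values)
    return psi_est >= psi_ref
```

One commutation event per threshold crossing is enough to resynchronize an angle estimate between crossings, which is what makes a single threshold angle sufficient for sensorless operation.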
Yildizoglu, Tugce; Weislogel, Jan-Marek; Mohammad, Farhan; Chan, Edwin S-Y; Assam, Pryseley N; Claridge-Chang, Adam
2015-12-01
Genetic studies in Drosophila reveal that olfactory memory relies on a brain structure called the mushroom body. The mainstream view is that each of the three lobes of the mushroom body play specialized roles in short-term aversive olfactory memory, but a number of studies have made divergent conclusions based on their varying experimental findings. Like many fields, neurogenetics uses null hypothesis significance testing for data analysis. Critics of significance testing claim that this method promotes discrepancies by using arbitrary thresholds (α) to apply reject/accept dichotomies to continuous data, which is not reflective of the biological reality of quantitative phenotypes. We explored using estimation statistics, an alternative data analysis framework, to examine published fly short-term memory data. Systematic review was used to identify behavioral experiments examining the physiological basis of olfactory memory and meta-analytic approaches were applied to assess the role of lobular specialization. Multivariate meta-regression models revealed that short-term memory lobular specialization is not supported by the data; it identified the cellular extent of a transgenic driver as the major predictor of its effect on short-term memory. These findings demonstrate that effect sizes, meta-analysis, meta-regression, hierarchical models and estimation methods in general can be successfully harnessed to identify knowledge gaps, synthesize divergent results, accommodate heterogeneous experimental design and quantify genetic mechanisms.
Hospital Volume and 30-Day Mortality for Three Common Medical Conditions
Ross, Joseph S.; Normand, Sharon-Lise T.; Wang, Yun; Ko, Dennis T.; Chen, Jersey; Drye, Elizabeth E.; Keenan, Patricia S.; Lichtman, Judith H.; Bueno, Héctor; Schreiner, Geoffrey C.; Krumholz, Harlan M.
2010-01-01
Background The association between hospital volume and the death rate for patients who are hospitalized for acute myocardial infarction, heart failure, or pneumonia remains unclear. It is also not known whether a volume threshold for such an association exists. Methods We conducted cross-sectional analyses of data from Medicare administrative claims for all fee-for-service beneficiaries who were hospitalized between 2004 and 2006 in acute care hospitals in the United States for acute myocardial infarction, heart failure, or pneumonia. Using hierarchical logistic-regression models for each condition, we estimated the change in the odds of death within 30 days associated with an increase of 100 patients in the annual hospital volume. Analyses were adjusted for patients’ risk factors and hospital characteristics. Bootstrapping procedures were used to estimate 95% confidence intervals to identify the condition-specific volume thresholds above which an increased volume was not associated with reduced mortality. Results There were 734,972 hospitalizations for acute myocardial infarction in 4128 hospitals, 1,324,287 for heart failure in 4679 hospitals, and 1,418,252 for pneumonia in 4673 hospitals. An increased hospital volume was associated with reduced 30-day mortality for all conditions (P<0.001 for all comparisons). For each condition, the association between volume and outcome was attenuated as the hospital's volume increased. For acute myocardial infarction, once the annual volume reached 610 patients (95% confidence interval [CI], 539 to 679), an increase in the hospital volume by 100 patients was no longer significantly associated with reduced odds of death. The volume threshold was 500 patients (95% CI, 433 to 566) for heart failure and 210 patients (95% CI, 142 to 284) for pneumonia. Conclusions Admission to higher-volume hospitals was associated with a reduction in mortality for acute myocardial infarction, heart failure, and pneumonia, although there was a volume threshold above which an increased condition-specific hospital volume was no longer significantly associated with reduced mortality. PMID:20335587
Ryan, Andrew; Sutton, Matthew; Doran, Tim
2014-01-01
Objective To test whether receiving a financial bonus for quality in the Premier Hospital Quality Incentive Demonstration (HQID) stimulated subsequent quality improvement. Data Hospital-level data on process-of-care quality from Hospital Compare for the treatment of acute myocardial infarction (AMI), heart failure, and pneumonia for 260 hospitals participating in the HQID from 2004 to 2006; receipt of quality bonuses in the first 3 years of HQID from the Premier Inc. website; and hospital characteristics from the 2005 American Hospital Association Annual Survey. Study Design Under the HQID, hospitals received a 1 percent bonus on Medicare payments for scoring between the 80th and 90th percentiles on a composite quality measure, and a 2 percent bonus for scoring at the 90th percentile or above. We used a regression discontinuity design to evaluate whether hospitals with quality scores just above these payment thresholds improved more in the subsequent year than hospitals with quality scores just below the thresholds. In alternative specifications, we examined samples of hospitals scoring within 3, 5, and 10 percentage point “bandwidths” of the thresholds. We used a Generalized Linear Model to estimate whether the relationship between quality and lagged quality was discontinuous at the lagged thresholds required for quality bonuses. Principal Findings There were no statistically significant associations between receipt of a bonus and subsequent quality performance, with the exception of the 2 percent bonus for AMI in 2006 using the 5 percentage point bandwidth (0.8 percentage point increase, p < .01), and the 1 percent bonus for pneumonia in 2005 using all bandwidths (3.7 percentage point increase using the 3 percentage point bandwidth, p < .05). Conclusions We found little evidence that hospitals' receipt of quality bonuses was associated with subsequent improvement in performance. This raises questions about whether winning in pay-for-performance programs, such as Hospital Value-Based Purchasing, will lead to subsequent quality improvement. PMID:23909992
Spontaneous Trigeminal Allodynia in Rats: A Model of Primary Headache
Oshinsky, Michael L.; Sanghvi, Menka M.; Maxwell, Christina R.; Gonzalez, Dorian; Spangenberg, Rebecca J.; Cooper, Marnie; Silberstein, Stephen D.
2014-01-01
Animal models are essential for studying the pathophysiology of headache disorders and as a screening tool for new therapies. Most animal models modify a normal animal in an attempt to mimic migraine symptoms. They require manipulation to activate the trigeminal nerve or dural nociceptors. At best, they are models of secondary headache. No existing model can address the fundamental question: How is a primary headache spontaneously initiated? In the process of obtaining baseline periorbital von Frey thresholds in a wild-type Sprague-Dawley rat, we discovered a rat with spontaneous episodic trigeminal allodynia (manifested by episodically changing periorbital pain threshold). Subsequent mating showed that the trait is inherited. Animals with spontaneous trigeminal allodynia allow us to study the pathophysiology of primary recurrent headache disorders. To validate this as a model for migraine, we tested the effects of clinically proven acute and preventive migraine treatments on spontaneous changes in rat periorbital sensitivity. Sumatriptan, ketorolac, and dihydroergotamine temporarily reversed the low periorbital pain thresholds. Thirty days of chronic valproic acid treatment prevented spontaneous changes in trigeminal allodynia. After discontinuation, the rats returned to their baseline of spontaneous episodic threshold changes. We also tested the effects of known chemical human migraine triggers. On days when the rats did not have allodynia and showed normal periorbital von Frey thresholds, glycerol trinitrate and calcitonin gene-related peptide induced significant decreases in the periorbital pain threshold. This model can be used as a predictive model for drug development and for studies of putative biomarkers for headache diagnosis and treatment. PMID:22963523
Vukovic, N; Radovanovic, J; Milanovic, V; Boiko, D L
2016-11-14
We have obtained a closed-form expression for the threshold of Risken-Nummedal-Graham-Haken (RNGH) multimode instability in a Fabry-Pérot (FP) cavity quantum cascade laser (QCL). This simple analytical expression is a versatile tool that can easily be applied in practical situations which require analysis of QCL dynamic behavior and estimation of its RNGH multimode instability threshold. Our model for a FP cavity laser accounts for the carrier coherence grating and carrier population grating as well as their relaxation due to carrier diffusion. In the model, the RNGH instability threshold is analyzed using a second-order bi-orthogonal perturbation theory and we confirm our analytical solution by a comparison with the numerical simulations. In particular, the model predicts a low RNGH instability threshold in QCLs. This agrees very well with experimental data available in the literature.
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver-operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
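A common way to derive an "optimal" threshold from a receiver-operating characteristic curve is Youden's J statistic (sensitivity + specificity - 1). The Python sketch below illustrates that approach on synthetic voxel data standing in for Tmax maps and follow-up infarct masks; it is one plausible reading of the pooled analysis, not the authors' exact pipeline.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(2)

    # Synthetic voxels: Tmax (s) tends to be longer in infarcted tissue.
    n = 10_000
    infarcted = rng.binomial(1, 0.3, n)
    tmax = np.where(infarcted, rng.normal(18, 4, n), rng.normal(8, 4, n))

    # Youden's J at each candidate cutoff; the optimum maximizes J.
    fpr, tpr, thresholds = roc_curve(infarcted, tmax)
    j = tpr - fpr
    best = thresholds[np.argmax(j)]
    print(f"optimal Tmax threshold ≈ {best:.1f} s (J = {j.max():.2f})")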
Bottema-Beutel, Kristen
2016-10-01
Using a structured literature search and meta-regression procedures, this study sought to determine whether associations between joint attention and language are moderated by group (autism spectrum disorder [ASD] vs. typical development [TD]), joint attention type (responding to joint attention [RJA] vs. other), and other study design features and participant characteristics. Studies were located using database searches, hand searches, and electronic requests for data from experts in the field. This resulted in 71 reports or datasets and 605 effect sizes, representing 1,859 participants with ASD and 1,835 TD participants. Meta-regression was used to answer research questions regarding potential moderators of the effect sizes of interest, which were Pearson's r values quantifying the association between joint attention and language variables. In the final models, conducted separately for each language variable, effect sizes were significantly higher for the ASD group as compared to the TD group, and for RJA as compared to non-RJA joint attention types. Approximate mental age trended toward significance for the expressive language model. Joint attention may be more tightly tied to language in children with ASD as compared to TD children because TD children exhibit joint attention at sufficient thresholds so that language development becomes untethered to variations in joint attention. Conversely, children with ASD who exhibit deficits in joint attention develop language contingent upon their joint attention abilities. Because RJA was more strongly related to language than other types of joint attention, future research should involve careful consideration of the operationalization and measurement of joint attention constructs. Autism Res 2016, 9: 1021-1035. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
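The core estimation step can be approximated as an inverse-variance-weighted regression of Fisher-z-transformed correlations on study-level moderators. The Python sketch below is a simplified fixed-effect version on simulated data; the published models are more elaborate (e.g., they handle dependent effect sizes and additional covariates), and all values here are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    k = 100                                   # number of effect sizes
    n_i = rng.integers(15, 80, k)             # per-study sample sizes
    asd = rng.binomial(1, 0.5, k)             # 1 = ASD sample, 0 = TD
    rja = rng.binomial(1, 0.5, k)             # 1 = responding to joint attention
    # Assumed true correlations: stronger for ASD samples and for RJA.
    true_r = 0.15 + 0.20 * asd + 0.10 * rja
    # Fisher z with sampling SD 1/sqrt(n - 3).
    z_i = np.arctanh(true_r) + rng.normal(0, 1 / np.sqrt(n_i - 3))

    # Inverse-variance weights; note WLS standard errors are not the usual
    # fixed-effect meta-analytic ones, so treat this strictly as a sketch.
    X = sm.add_constant(np.column_stack([asd, rja]))
    fit = sm.WLS(z_i, X, weights=n_i - 3).fit()
    print(fit.params)              # intercept, ASD moderator, RJA moderator (z units)
    print(np.tanh(fit.params[0]))  # back-transform the intercept to the r scale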
Zellmer, Erik R; MacEwan, Matthew R; Moran, Daniel W
2018-04-01
Regenerated peripheral nervous tissue possesses different morphometric properties compared to undisrupted nerve. It is poorly understood how these morphometric differences alter the response of the regenerated nerve to electrical stimulation. In this work, we use computational modeling to explore the electrophysiological response of regenerated and undisrupted nerve axons to electrical stimulation delivered by macro-sieve electrodes (MSEs). A 3D finite element model of a peripheral nerve segment populated with mammalian myelinated axons and implanted with a macro-sieve electrode has been developed. Fiber diameters and morphometric characteristics representative of undisrupted or regenerated peripheral nervous tissue were assigned to core conductor models to simulate the two tissue types. Simulations were carried out to quantify differences in thresholds and chronaxie between undisrupted and regenerated fiber populations. The model was also used to determine the influence of axonal caliber on recruitment thresholds for the two tissue types. Model accuracy was assessed through comparisons with in vivo recruitment data from chronically implanted MSEs. Recruitment thresholds of individual regenerated fibers with diameters >2 µm were found to be lower compared to same-caliber undisrupted fibers at electrode-to-fiber distances of less than about 90-140 µm, but roughly equal or higher for larger distances. Caliber redistributions observed in regenerated nerve resulted in an overall increase in average recruitment thresholds and chronaxie during whole-nerve stimulation. Modeling results also suggest that large-diameter undisrupted fibers located close to a longitudinally restricted current source such as the MSE have higher average recruitment thresholds compared to small-diameter fibers. In contrast, large-diameter regenerated nerve fibers located in close proximity to MSE sites have, on average, lower recruitment thresholds compared to small fibers. Utilizing regenerated fiber morphometry and caliber distributions resulted in accurate predictions of in vivo recruitment data. Our work uses computational modeling to show how morphometric differences between regenerated and undisrupted tissue result in recruitment threshold discrepancies, quantifies these differences, and illustrates how large undisrupted nerve fibers close to longitudinally restricted current sources have higher recruitment thresholds compared to adjacently positioned smaller fibers, while the opposite is true for large regenerated fibers.
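Chronaxie itself is typically read off a strength-duration curve. As a hedged illustration of that step alone (the finite element fiber model is far beyond a snippet), the following Python sketch fits Lapicque's relation to synthetic threshold currents; the rheobase and chronaxie values are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def lapicque(pw_ms, i_rheobase, chronaxie_ms):
        """Threshold current at pulse width pw_ms (Lapicque strength-duration form)."""
        return i_rheobase * (1 + chronaxie_ms / pw_ms)

    # Synthetic "measured" thresholds (uA) at several pulse widths (ms),
    # generated from assumed rheobase 12 uA and chronaxie 0.15 ms plus noise.
    pw = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
    rng = np.random.default_rng(4)
    i_th = lapicque(pw, 12.0, 0.15) * rng.normal(1.0, 0.02, pw.size)

    popt, _ = curve_fit(lapicque, pw, i_th, p0=[10.0, 0.1])
    print(f"rheobase ≈ {popt[0]:.1f} uA, chronaxie ≈ {popt[1] * 1000:.0f} us")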
NASA Astrophysics Data System (ADS)
Zellmer, Erik R.; MacEwan, Matthew R.; Moran, Daniel W.
2018-04-01
Objective. Regenerated peripheral nervous tissue possesses different morphometric properties compared to undisrupted nerve. It is poorly understood how these morphometric differences alter the response of the regenerated nerve to electrical stimulation. In this work, we use computational modeling to explore the electrophysiological response of regenerated and undisrupted nerve axons to electrical stimulation delivered by macro-sieve electrodes (MSEs). Approach. A 3D finite element model of a peripheral nerve segment populated with mammalian myelinated axons and implanted with a macro-sieve electrode has been developed. Fiber diameters and morphometric characteristics representative of undisrupted or regenerated peripheral nervous tissue were assigned to core conductor models to simulate the two tissue types. Simulations were carried out to quantify differences in thresholds and chronaxie between undisrupted and regenerated fiber populations. The model was also used to determine the influence of axonal caliber on recruitment thresholds for the two tissue types. Model accuracy was assessed through comparisons with in vivo recruitment data from chronically implanted MSEs. Main results. Recruitment thresholds of individual regenerated fibers with diameters >2 µm were found to be lower compared to same-caliber undisrupted fibers at electrode-to-fiber distances of less than about 90-140 µm, but roughly equal or higher for larger distances. Caliber redistributions observed in regenerated nerve resulted in an overall increase in average recruitment thresholds and chronaxie during whole-nerve stimulation. Modeling results also suggest that large-diameter undisrupted fibers located close to a longitudinally restricted current source such as the MSE have higher average recruitment thresholds compared to small-diameter fibers. In contrast, large-diameter regenerated nerve fibers located in close proximity to MSE sites have, on average, lower recruitment thresholds compared to small fibers. Utilizing regenerated fiber morphometry and caliber distributions resulted in accurate predictions of in vivo recruitment data. Significance. Our work uses computational modeling to show how morphometric differences between regenerated and undisrupted tissue result in recruitment threshold discrepancies, quantifies these differences, and illustrates how large undisrupted nerve fibers close to longitudinally restricted current sources have higher recruitment thresholds compared to adjacently positioned smaller fibers, while the opposite is true for large regenerated fibers.
NASA Astrophysics Data System (ADS)
Svobodová, Eva; Trnka, Miroslav; Kopp, Radovan; Mareš, Jan; Dubrovský, Martin; Spurný, Petr; Žalud, Zděněk
2015-04-01
Freshwater fish production is significantly correlated with water temperature, which is expected to increase under climate change. This study estimates the change in water temperature in production ponds and its impact on fishery in the Czech Republic. A calculation of surface-water temperature based on the three-day mean of air temperature was developed and tested in several ponds in the three main fish production areas. Comparison of the model output with measured data showed that the lower bound of model accuracy is a surface-water temperature of 3°C; below this threshold, the model loses its predictive power. Above 3°C, the model showed good agreement between observed and modelled surface-water temperature (R = 0.79-0.96). The verified model was applied under climate change conditions determined by the pattern-scaling method, in which standardised scenarios were derived from five global circulation models: MPEH5, CSMK3, IPCM4, GFCM21, and HADGEM. Results were evaluated against thresholds characterising the water-temperature requirements of the fish species, namely the upper temperature threshold for fish survival and the tolerable number of days in a continual period above that threshold. Target fish species were common carp (Cyprinus carpio), maraena whitefish (Coregonus maraena), northern whitefish (Coregonus peled), and rainbow trout (Oncorhynchus mykiss). Results indicated limitations on Czech fish farming in terms of i) the increase in the length of continual periods with surface-water temperature above the threshold tolerated by a given fish species, ii) the increase in the number of such continual periods, and iii) the increase in the overall number of days within continual periods with temperature above the threshold tolerated by a given fish species. ACKNOWLEDGEMENTS: This study was funded by the project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
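The modelling chain described above can be sketched compactly: a three-day trailing mean of air temperature is mapped to surface-water temperature (treated as invalid below 3°C), and continual periods above a species-specific threshold are counted. The coefficients, the linear form, and the 24°C carp threshold in this Python sketch are assumptions for illustration, not the study's fitted values.

    import numpy as np

    def water_temp(air_temp_daily, a=0.9, b=2.0):
        """Surface-water temperature from a 3-day trailing mean of air temperature."""
        air3 = np.convolve(air_temp_daily, np.ones(3) / 3, mode="valid")
        wt = a * air3 + b
        # The model is stated to lose predictive power below 3 degC.
        return np.where(wt >= 3.0, wt, np.nan)

    def longest_run_above(series, threshold):
        """Longest continual period (days) with temperature above threshold."""
        run = best = 0
        for v in series:
            run = run + 1 if (not np.isnan(v) and v > threshold) else 0
            best = max(best, run)
        return best

    # Usage: a synthetic summer air-temperature trace; 24 degC is an assumed
    # tolerance threshold for common carp, purely for the example.
    rng = np.random.default_rng(5)
    air = 20 + 6 * np.sin(np.linspace(0, np.pi, 120)) + rng.normal(0, 2, 120)
    wt = water_temp(air)
    print(longest_run_above(wt, threshold=24.0), "days above 24 degC")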