Sample records for variable-ratio threshold prediction

  1. The hip strength:ankle proprioceptive threshold ratio predicts falls and injury in diabetic neuropathy

    PubMed Central

    Richardson, James K.; DeMott, Trina; Allet, Lara; Kim; Ashton-Miller, James A.

    2014-01-01

    Introduction: We determined lower limb neuromuscular capacities associated with falls and fall-related injuries in older people with declining peripheral nerve function. Methods: Thirty-two subjects (67.4 ± 13.4 years; 19 with type 2 diabetes), representing a spectrum of peripheral neurologic function, were evaluated with frontal plane proprioceptive thresholds at the ankle, frontal plane motor function at the ankle and hip, and prospective follow-up for 1 year. Results: Falls and fall-related injuries were reported by 20 (62.5%) and 14 (43.8%) subjects, respectively. The ratio of hip adductor rate of torque development to ankle proprioceptive threshold (HipSTR/AnkPRO) predicted falls (pseudo-R² = 0.726) and injury (pseudo-R² = 0.382). No other variable maintained significance in the presence of HipSTR/AnkPRO. Discussion: Fall and injury risk in the population studied is related inversely to HipSTR/AnkPRO. Increasing rapidly available hip strength in patients with neuropathic ankle sensory impairment may decrease risk of falls and related injuries. PMID:24282041

  2. Physical Screening Predictors for Success in Completing Air Force Phase II Air Liaison Officer Aptitude Assessment.

    PubMed

    McGee, John Christopher; Wilson, Eric; Barela, Haley; Blum, Sharon

    2017-03-01

    Air Liaison Officer Aptitude Assessment (AAA) attrition is often associated with a lack of candidate physical preparation. The Functional Movement Screen, Tactical Fitness Assessment, and fitness metrics were collected (n = 29 candidates) to determine what physical factors could predict a candidate's success in completing AAA. Between-group comparisons were made between candidates completing AAA versus those who did not (p < 0.05). Upper 50% thresholds were established for all variables with R² < 0.8 and the data were converted to a binary form (0 = did not attain threshold, 1 = attained threshold). Odds ratios, pre-/post-test probabilities, and positive likelihood ratios were computed and logistic regression applied to explain model variance. The following variables provided the most predictive value for AAA completion: Pull-ups (p = 0.01), Sit-ups (p = 0.002), Relative Powerball Toss (p = 0.017), and Pull-ups × Sit-ups interaction (p = 0.016). Minimum recommended guidelines for AAA screening are Pull-ups (10 maximum), Sit-ups (76/2 minutes), and a Relative Powerball Toss of 0.6980 ft × lb/BW. Associated benefits could be higher graduation rates and cost savings associated with temporary duty and possible injury care for nonselected candidates. Recommended guidelines should be validated in future class cycles. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
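
    A minimal sketch (Python, with invented counts rather than the study's data) of the screening arithmetic described in this record: a candidate variable is binarized at its threshold, and the odds ratio, sensitivity, specificity, positive likelihood ratio, and post-test probability of completing the assessment are computed from the resulting 2×2 table.

    ```python
    import numpy as np

    # Hypothetical 2x2 table: rows = attained threshold (1) / did not (0),
    # columns = completed AAA (1) / did not (0). Counts are illustrative only.
    table = np.array([[9, 3],    # attained threshold: 9 completed, 3 did not
                      [4, 13]])  # did not attain:     4 completed, 13 did not

    a, b = table[0]  # threshold attained: completers, non-completers
    c, d = table[1]  # threshold not attained: completers, non-completers

    odds_ratio = (a * d) / (b * c)
    sensitivity = a / (a + c)          # P(attained | completed)
    specificity = d / (b + d)          # P(not attained | did not complete)
    lr_plus = sensitivity / (1 - specificity)

    # Convert a pre-test probability into a post-test probability via LR+.
    pretest_p = (a + c) / table.sum()  # here, the observed completion rate
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr_plus
    posttest_p = posttest_odds / (1 + posttest_odds)

    print(f"OR={odds_ratio:.2f}  LR+={lr_plus:.2f}  "
          f"pre-test p={pretest_p:.2f}  post-test p={posttest_p:.2f}")
    ```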

  3. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking output. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397

  4. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results show that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability like the equatorial Pacific could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
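
    A rough sketch (Python, synthetic SST series) contrasting the conventional fixed rule, climatological maximum monthly mean plus 1 °C, with a variability-based threshold built from the standard deviation of the annual maximum SST, in the spirit of the first method described above. The exact form of the variability-based threshold and the multiplier k are assumptions for illustration, not the study's formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 30-year monthly SST climatology for one reef pixel (deg C).
    years, months = 30, 12
    seasonal = 27 + 1.5 * np.sin(2 * np.pi * (np.arange(months) - 3) / 12)
    sst = seasonal + rng.normal(0, 0.4, size=(years, months))

    # Fixed rule: bleaching predicted when SST exceeds the climatological
    # mean of the warmest month (maximum monthly mean, MMM) by 1 deg C.
    mmm = sst.mean(axis=0).max()
    fixed_threshold = mmm + 1.0

    # Variability-based rule (illustrative): threshold placed k standard
    # deviations of the annual maximum SST above the climatological maximum.
    annual_max = sst.max(axis=1)
    k = 2.0                                   # assumed multiplier
    variable_threshold = mmm + k * annual_max.std(ddof=1)

    print(f"MMM={mmm:.2f} C  fixed={fixed_threshold:.2f} C  "
          f"variability-based={variable_threshold:.2f} C")
    # A +1 C anomaly expressed in units of interannual variability of the max:
    print(f"1 C = {1.0 / annual_max.std(ddof=1):.2f} SD of annual max SST")
    ```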

  5. A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and Kappa

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen

    2008-01-01

    Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence–absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
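
    Two of the threshold criteria named in this record's title, matching the predicted prevalence to the observed prevalence and maximizing Cohen's kappa, can be sketched in a few lines. The data below are synthetic and the sketch is not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(1)

    # Synthetic presence/absence observations and predicted probabilities.
    y_true = rng.binomial(1, 0.3, size=500)
    y_prob = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, size=500), 0, 1)

    # Criterion 1: threshold at which predicted prevalence equals
    # observed prevalence.
    obs_prev = y_true.mean()
    t_prev = np.quantile(y_prob, 1 - obs_prev)

    # Criterion 2: threshold that maximizes Cohen's kappa.
    grid = np.linspace(0.01, 0.99, 99)
    kappas = [cohen_kappa_score(y_true, (y_prob >= t).astype(int)) for t in grid]
    t_kappa = grid[int(np.argmax(kappas))]

    print(f"observed prevalence = {obs_prev:.2f}")
    print(f"prevalence-matching threshold = {t_prev:.2f}")
    print(f"kappa-maximizing threshold = {t_kappa:.2f} (kappa = {max(kappas):.2f})")
    ```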

  6. Cost-effectiveness of different strategies for selecting and treating individuals at increased risk of osteoporosis or osteopenia: a systematic review.

    PubMed

    Müller, Dirk; Pulm, Jannis; Gandjour, Afschin

    2012-01-01

    To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Near-threshold fatigue behavior of copper alloys in air and aqueous environments: A high cyclic frequency study

    NASA Astrophysics Data System (ADS)

    Ahmed, Tawfik M.

    The near-threshold fatigue crack propagation behavior of alpha-phase copper alloys in desiccated air and several aqueous environments has been investigated. Three commercial alloys of nominal composition Cu-30Ni (Cu-Ni), Cu-30Zn (Cu-Zn) and 90Cu-7Al-3Fe (Cu-Al) were tested. Fatigue tests were conducted using standard prefatigued single-edge notched (SEN) specimens loaded in tension at a high frequency of ˜100 Hz. Different R-ratios were employed, mostly R = 0.5. Low loading levels were used that corresponded to the threshold and near-threshold regions where ΔKth ≤ ΔK ≤ 11 MPa√m. Fatigue tests in the aqueous solutions showed that the effect of different corrosive environments during high frequency testing (˜100 Hz) was not as pronounced as expected relative to air. Further testing revealed that environmental effects were present and fatigue crack growth rates were influenced by the fluid-induced closure effects which are generally reported in the fatigue literature to be operative only in viscous liquids, not in aqueous solutions. It was concluded that high frequency testing in aqueous environments consistently decreased crack growth rates in a manner similar to crack retardation effects in viscous fluids. Several theoretical models reported in the literature underestimate, or fail entirely to predict, the fluid-induced closure in aqueous solutions. Results from the desiccated air tests confirmed that, under closure-free conditions (high R-ratios), both threshold values and fatigue crack growth rate of stage II can be related to Young's modulus, in agreement with results from the literature. The roles of different mechanical and environmental variables in fatigue behavior become most visible in the low R-ratio regime, where they contribute to various closure processes.

  8. [Application of artificial neural networks on the prediction of surface ozone concentrations].

    PubMed

    Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei

    2011-08-01

    Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance based on the meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (Multi-Layer Perceptron) and a multiple linear regression model were used and compared. Model inputs are meteorological parameters (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) of the next day and hourly maximum ozone concentration of the previous day. The OBS (optimal brain surgeon) method was adopted to prune the neural network, to reduce its complexity and to improve its generalization ability. We find that the pruned neural network has the capacity to predict the peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m3, an R-square of 0.737 and a success index of threshold exceedance of 77.0% (at a threshold O3 mixing ratio of 0.20 mg/m3). When the neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Through comparison of the performance indices between the multiple linear regression model and the neural network model, we conclude that the neural network is the better choice for predicting peak ozone from meteorological forecasts, which may be applied to practical prediction of ozone concentration.
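
    A minimal sketch of the modelling setup described above, using synthetic data and scikit-learn's MLPRegressor. The optimal-brain-surgeon pruning step is not reproduced, and the "success index of threshold exceedance" is computed here as the fraction of observed exceedance days that the model also predicts to exceed 0.20 mg/m3, which is an assumed definition.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 600

    # Synthetic next-day meteorology plus previous-day peak ozone (mg/m3).
    X = np.column_stack([
        rng.normal(25, 5, n),      # air temperature
        rng.normal(60, 15, n),     # relative humidity
        rng.normal(2, 1, n),       # wind speed
        rng.normal(0.15, 0.05, n), # previous-day hourly max O3
    ])
    y = (0.005 * X[:, 0] - 0.0005 * X[:, 1] + 0.5 * X[:, 3]
         + rng.normal(0, 0.015, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
    y_hat = model.predict(X_te)

    threshold = 0.20  # mg/m3
    exceed_obs, exceed_pred = y_te >= threshold, y_hat >= threshold
    if exceed_obs.any():
        success_index = (exceed_obs & exceed_pred).sum() / exceed_obs.sum()
        print(f"success index of threshold exceedance: {success_index:.2f}")
    print(f"RMSE = {np.sqrt(np.mean((y_hat - y_te) ** 2)):.4f} mg/m3")
    ```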

  9. Audiometric Predictions Using SFOAE and Middle-Ear Measurements

    PubMed Central

    Ellison, John C.; Keefe, Douglas H.

    2006-01-01

    Objective: The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal accounts for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design: The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5–8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air- and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results: The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAE predictions of hearing loss except at 4 kHz. Conclusions: In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe cases of hearing loss, they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898

  10. Sometimes processes don't matter: the general effect of short term climate variability on erosional systems.

    NASA Astrophysics Data System (ADS)

    Deal, Eric; Braun, Jean

    2017-04-01

    Climatic forcing undoubtedly plays an important role in shaping the Earth's surface. However, precisely how climate affects erosion rates, landscape morphology and the sedimentary record is highly debated. Recently there has been a focus on the influence of short-term variability in rainfall and river discharge on the relationship between climate and erosion rates. Here, we present a simple probabilistic argument, backed by modelling, that demonstrates that the way the Earth's surface responds to short-term climatic forcing variability is primarily determined by the existence and magnitude of erosional thresholds. We find that it is the ratio between the threshold magnitude and the mean magnitude of climatic forcing that determines whether variability matters or not, and in which way. This is a fundamental result that applies regardless of the nature of the erosional process. This means, for example, that we can understand the role that discharge variability plays in determining fluvial erosion efficiency despite doubts about the processes involved in fluvial erosion. We can use this finding to reproduce the main conclusions of previous studies on the role of discharge variability in determining long-term fluvial erosion efficiency. Many aspects of the landscape known to influence discharge variability are affected by human activity, such as land use and river damming. Another important control on discharge variability, rainfall intensity, is also expected to increase with warmer temperatures. Among many other implications, our findings help provide a general framework to understand and predict the response of the Earth's surface to changes in mean and variability of rainfall and river discharge associated with anthropogenic activity. In addition, the process-independent nature of our findings suggests that previous work on river discharge variability and erosion thresholds can be applied to other erosional systems.
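
    The core probabilistic argument, that the ratio of the erosion threshold to the mean forcing controls whether variability raises or lowers long-term erosion, can be illustrated with a toy Monte Carlo experiment. The sublinear excess-threshold erosion law (exponent 0.5) and the gamma-distributed discharge used here are common simplifications assumed for illustration, not the authors' model; the sketch only shows that the sign and size of the variability effect change with the threshold-to-mean ratio.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def mean_erosion(threshold_ratio, cv, a=0.5, n=200_000):
        """Long-term mean erosion for a toy law E = max(q - q_c, 0)**a,
        with discharge q gamma-distributed (mean 1, coefficient of
        variation cv) and q_c = threshold_ratio * mean(q)."""
        shape = 1.0 / cv**2
        q = rng.gamma(shape, scale=cv**2, size=n)   # mean 1, CV = cv
        return np.mean(np.maximum(q - threshold_ratio, 0.0) ** a)

    for threshold_ratio in (0.0, 0.5, 3.0):
        low = mean_erosion(threshold_ratio, cv=0.3)   # steady discharge
        high = mean_erosion(threshold_ratio, cv=1.5)  # flashy discharge
        print(f"threshold/mean = {threshold_ratio:>3}: "
              f"low-variability erosion = {low:.3f}, "
              f"high-variability erosion = {high:.3f}")
    ```

    With a negligible threshold the flashier regime erodes less (the concave erosion law penalizes variability), while with a large threshold only the flashy regime erodes at all, which is the qualitative switch the abstract describes.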

  11. Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea

    PubMed Central

    Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert

    2017-01-01

    Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both models 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHI-REM (anatomy term), fraction of events that were hypopneas (arousal term), the ratio of AHI-REM to AHI-NREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that is more strongly associated with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154

  12. Metabolic Tumor Volume and Total Lesion Glycolysis in Oropharyngeal Cancer Treated With Definitive Radiotherapy: Which Threshold Is the Best Predictor of Local Control?

    PubMed

    Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O

    2017-06-01

    In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute thresholds between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio, 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patients with a high risk of recurrence who may benefit from treatment intensification.
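
    A compact sketch of the two segmentation rules compared above: MTV computed from an absolute SUV cut-off and from a percentage of SUVmax, plus total lesion glycolysis as MTV times the mean SUV of the segmented voxels. The 3-D SUV array and voxel size are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic PET sub-volume: background SUV ~1 with a "hot" lesion inside.
    suv = rng.normal(1.0, 0.2, size=(40, 40, 40))
    suv[15:25, 15:25, 15:25] += rng.normal(8.0, 1.5, size=(10, 10, 10))
    voxel_volume_ml = 0.4 ** 3  # assumed 4 mm isotropic voxels, in mL

    def mtv_absolute(suv, cutoff):
        """MTV (mL) with an absolute SUV threshold, e.g. 5.5-7 as in the study."""
        return (suv >= cutoff).sum() * voxel_volume_ml

    def mtv_relative(suv, fraction):
        """MTV (mL) with a threshold expressed as a fraction of SUVmax."""
        return (suv >= fraction * suv.max()).sum() * voxel_volume_ml

    # Total lesion glycolysis = MTV x mean SUV within the segmented volume.
    mask = suv >= 0.51 * suv.max()
    tlg = mask.sum() * voxel_volume_ml * suv[mask].mean()

    print(f"MTV (SUV >= 5.5):       {mtv_absolute(suv, 5.5):.1f} mL")
    print(f"MTV (>= 51% of SUVmax): {mtv_relative(suv, 0.51):.1f} mL")
    print(f"TLG at the 51% threshold: {tlg:.1f}")
    ```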

  13. Normal Lung Quantification in Usual Interstitial Pneumonia Pattern: The Impact of Threshold-based Volumetric CT Analysis for the Staging of Idiopathic Pulmonary Fibrosis.

    PubMed

    Ohkubo, Hirotsugu; Kanemitsu, Yoshihiro; Uemura, Takehiro; Takakuwa, Osamu; Takemura, Masaya; Maeno, Ken; Ito, Yutaka; Oguri, Tetsuya; Kazawa, Nobutaka; Mikami, Ryuji; Niimi, Akio

    2016-01-01

    Although several computer-aided computed tomography (CT) analysis methods have been reported to objectively assess the disease severity and progression of idiopathic pulmonary fibrosis (IPF), it is unclear which method is most practical. A universal severity classification system has not yet been adopted for IPF. The purpose of this study was to test the correlation between quantitative-CT indices and lung physiology variables and to determine the ability of such indices to predict disease severity in IPF. A total of 27 IPF patients showing radiological UIP pattern on high-resolution (HR) CT were retrospectively enrolled. Staging of IPF was performed according to two classification systems: the Japanese and GAP (gender, age, and physiology) staging systems. CT images were assessed using a commercially available CT imaging analysis workstation, and the whole-lung mean CT value (MCT), the normally attenuated lung volume, defined as the volume between −950 and −701 Hounsfield units (NL), the volume of the whole lung (WL), and the percentage of NL relative to WL (NL%) were calculated. CT indices (MCT, WL, and NL) closely correlated with lung physiology variables. Among them, NL strongly correlated with forced vital capacity (FVC) (r = 0.92, P < 0.0001). NL% showed a large area under the receiver operating characteristic curve for detecting patients in the moderate or advanced stages of IPF. Multivariable logistic regression analyses showed that NL% is significantly more useful than the percentages of predicted FVC and predicted diffusing capacity of the lungs for carbon monoxide (Japanese stage II/III/IV [odds ratio, 0.73; 95% confidence interval (CI), 0.48 to 0.92; P < 0.01]; III/IV [odds ratio, 0.80; 95% CI, 0.59 to 0.96; P < 0.01]; GAP stage II/III [odds ratio, 0.79; 95% CI, 0.56 to 0.97; P < 0.05]). The measurement of NL% by threshold-based volumetric CT analysis may help improve IPF staging.
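
    A short sketch of the threshold-based volumetric quantification described above, applied to a synthetic attenuation array: the normally attenuated lung volume (NL) is the volume of voxels between −950 and −701 HU, and NL% is its share of the whole-lung volume. The voxel size and HU distributions are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic whole-lung HU values inside a lung mask (placeholder data):
    # a mix of normally aerated voxels and denser, fibrotic voxels.
    normal = rng.normal(-850, 40, size=400_000)
    fibrotic = rng.normal(-550, 120, size=100_000)
    lung_hu = np.concatenate([normal, fibrotic])
    voxel_volume_ml = 0.000686  # assumed 0.7 x 0.7 x 1.4 mm voxels

    nl_mask = (lung_hu >= -950) & (lung_hu <= -701)
    nl_volume = nl_mask.sum() * voxel_volume_ml   # normally attenuated lung
    wl_volume = lung_hu.size * voxel_volume_ml    # whole lung
    mct = lung_hu.mean()                          # whole-lung mean CT value
    nl_percent = 100.0 * nl_volume / wl_volume

    print(f"MCT = {mct:.0f} HU, NL = {nl_volume:.0f} mL, "
          f"WL = {wl_volume:.0f} mL, NL% = {nl_percent:.1f}%")
    ```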

  14. Modeled summer background concentration nutrients and ...

    EPA Pesticide Factsheets

    We used regression models to predict background concentration of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators with land use and other stressor variables, we determined the concentration of the indicators when the land use and stressor variables were all set to zero (the y-intercept). Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentration from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one variable representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 µg l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 µg l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than the other ri
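
    A toy sketch of the intercept-as-background idea: regress an indicator concentration on land-use and stressor covariates and read off the predicted concentration when all covariates are zero. The data, predictors, and coefficients below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    n = 120

    # Invented site-level predictors: fraction agricultural land, fraction
    # developed land, and a generic stressor index (all zero = "background").
    X = np.column_stack([
        rng.uniform(0, 0.8, n),   # agriculture
        rng.uniform(0, 0.4, n),   # development
        rng.uniform(0, 1.0, n),   # other stressor
    ])
    # Invented total-N response (ug/L) with a true background near 350 ug/L.
    total_n = (350 + 1500 * X[:, 0] + 800 * X[:, 1] + 200 * X[:, 2]
               + rng.normal(0, 100, n))

    model = LinearRegression().fit(X, total_n)
    background = model.intercept_   # predicted value with all stressors at zero
    print(f"estimated background total N: {background:.0f} ug/L")
    ```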

  15. Dynamical predictors of an imminent phenotypic switch in bacteria

    NASA Astrophysics Data System (ADS)

    Wang, Huijing; Ray, J. Christian J.

    2017-08-01

    Single cells can stochastically switch across thresholds imposed by regulatory networks. Such thresholds can act as a tipping point, drastically changing global phenotypic states. In ecology and economics, imminent transitions across such tipping points can be predicted using dynamical early warning indicators. A typical example is ‘flickering’ of a fast variable, predicting a longer-lasting switch from a low to a high state or vice versa. Considering the different timescales between metabolite and protein fluctuations in bacteria, we hypothesized that metabolic early warning indicators predict imminent transitions across a network threshold caused by enzyme saturation. We used stochastic simulations to determine if flickering predicts phenotypic transitions, accounting for a variety of molecular physiological parameters, including enzyme affinity, burstiness of enzyme gene expression, homeostatic feedback, and rates of metabolic precursor influx. In most cases, we found that metabolic flickering rates are robustly peaked near the enzyme saturation threshold. The degree of fluctuation was amplified by product inhibition of the enzyme. We conclude that sensitivity to flickering in fast variables may be a possible natural or synthetic strategy to prepare physiological states for an imminent transition.

  16. Higher-than-predicted saltation threshold wind speeds on Titan.

    PubMed

    Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P

    2015-01-01

    Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.

  17. Percolation in suspensions of hard nanoparticles: From spheres to needles

    NASA Astrophysics Data System (ADS)

    Schilling, Tanja; Miller, Mark A.; van der Schoot, Paul

    2015-09-01

    We investigate geometric percolation and scaling relations in suspensions of nanorods, covering the entire range of aspect ratios from spheres to extremely slender needles. A new version of connectedness percolation theory is introduced and tested against specialised Monte Carlo simulations. The theory accurately predicts percolation thresholds for aspect ratios of rod length to width as low as 10. The percolation threshold for rod-like particles of aspect ratios below 1000 deviates significantly from the inverse aspect ratio scaling prediction, thought to be valid in the limit of infinitely slender rods and often used as a rule of thumb for nanofibres in composite materials. Hence, most fibres that are currently used as fillers in composite materials cannot be regarded as practically infinitely slender for the purposes of percolation theory. Comparing percolation thresholds of hard rods and new benchmark results for ideal rods, we find that i) for large aspect ratios, they differ by a factor that is inversely proportional to the connectivity distance between the hard cores, and ii) they approach the slender rod limit differently.

  18. Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.

    PubMed

    Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert

    2015-09-01

    The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma for different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimum z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy. When interpreting spirometric results, the pretest probability is an important consideration in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
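
    A small sketch of the z-score arithmetic behind the thresholds discussed above: the measured FEV1/FVC ratio is compared with a predicted ratio and a between-subject standard deviation, and the resulting z score is judged against -1.0 in a high pretest-probability setting or -2.0 in a screening setting. The predicted ratio and SD below are placeholders, not reference-equation values.

    ```python
    from scipy.stats import norm

    def fev1_fvc_z(measured_ratio, predicted_ratio, sd):
        """z score of the FEV1/FVC ratio relative to its predicted value.
        predicted_ratio and sd would normally come from reference equations
        (by age, sex, height); fixed placeholder values are used below."""
        return (measured_ratio - predicted_ratio) / sd

    # Placeholder reference values for one adult (illustrative only).
    z = fev1_fvc_z(measured_ratio=0.72, predicted_ratio=0.80, sd=0.06)

    thresholds = {"symptomatic (50% pretest prob.)": -1.0,   # ~16th percentile
                  "screening (5% pretest prob.)": -2.0}      # ~2nd percentile
    for setting, z_cut in thresholds.items():
        flagged = z < z_cut
        print(f"{setting}: z = {z:.2f}, percentile = {100 * norm.cdf(z):.0f}%, "
              f"suggests obstruction: {flagged}")
    ```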

  19. Individualized Prediction of Heat Stress in Firefighters: A Data-Driven Approach Using Classification and Regression Trees.

    PubMed

    Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit

    2015-01-01

    The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
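
    A minimal regression-tree sketch in the spirit of the approach above: predict end-of-scenario core temperature from a few baseline features and compare the prediction with the 38 °C hyperthermia threshold. The feature set mirrors the abstract's first-tier model, but the data are simulated and the coefficients are invented.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 300

    # Simulated first-tier predictors: age, baseline heart rate, baseline
    # core temperature, BMI, and scenario duration (minutes).
    X = np.column_stack([
        rng.uniform(22, 55, n),
        rng.normal(70, 10, n),
        rng.normal(37.0, 0.2, n),
        rng.normal(27, 4, n),
        rng.uniform(10, 25, n),
    ])
    # Simulated end-of-scenario core temperature (degrees C).
    core_end = (X[:, 2] + 0.035 * X[:, 4] + 0.01 * (X[:, 3] - 25)
                + 0.005 * (X[:, 1] - 70) + rng.normal(0, 0.15, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, core_end, random_state=0)
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)

    HYPERTHERMIA = 38.0
    pred_cross = tree.predict(X_te) >= HYPERTHERMIA
    obs_cross = y_te >= HYPERTHERMIA
    success_rate = (pred_cross == obs_cross).mean()
    print(f"correctly classified w.r.t. the 38 C threshold: {success_rate:.0%}")
    ```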

  20. Determining the Threshold for HbA1c as a Predictor for Adverse Outcomes After Total Joint Arthroplasty: A Multicenter, Retrospective Study.

    PubMed

    Tarabichi, Majd; Shohat, Noam; Kheir, Michael M; Adelani, Muyibat; Brigati, David; Kearns, Sean M; Patel, Pankajkumar; Clohisy, John C; Higuera, Carlos A; Levine, Brett R; Schwarzkopf, Ran; Parvizi, Javad; Jiranek, William A

    2017-09-01

    Although HbA1c is commonly used for assessing glycemic control before surgery, there is no consensus regarding its role and the appropriate threshold in predicting adverse outcomes. This study was designed to evaluate the potential link between HbA1c and subsequent periprosthetic joint infection (PJI), with the intention of determining the optimal threshold for HbA1c. This is a multicenter retrospective study, which identified 1645 diabetic patients who underwent primary total joint arthroplasty (1004 knees and 641 hips) between 2001 and 2015. All patients had an HbA1c measured within 3 months of surgery. The primary outcome of interest was a PJI at 1 year based on the Musculoskeletal Infection Society criteria. Secondary outcomes included orthopedic (wound and mechanical complications) and nonorthopedic complications (sepsis, thromboembolism, genitourinary, and cardiovascular complications). A regression analysis was performed to determine the independent influence of HbA1c for predicting PJI. Overall 22 cases of PJI occurred at 1 year (1.3%). HbA1c at a threshold of 7.7 was distinct for predicting PJI (area under the curve, 0.65; 95% confidence interval, 0.51-0.78). Using this threshold, PJI rates increased from 0.8% (11 of 1441) to 5.4% (11 of 204). In the stepwise logistic regression analysis, PJI remained the only variable associated with higher HbA1c (odds ratio, 1.5; confidence interval, 1.2-2.0; P = .0001). There was no association between high HbA1c levels and other complications assessed. High HbA1c levels are associated with an increased risk for PJI. A threshold of 7.7% seems to be more indicative of infection than the commonly used 7% and should perhaps be the goal in preoperative patient optimization. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    PubMed

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.

  2. Carbon deposition thresholds on nickel-based solid oxide fuel cell anodes II. Steam:carbon ratio and current density

    NASA Astrophysics Data System (ADS)

    Kuhn, J.; Kesler, O.

    2015-03-01

    In this second part of a two-part publication, coking thresholds with respect to molar steam:carbon ratio (SC) and current density in nickel-based solid oxide fuel cells were determined. Anode-supported button cell samples were exposed to 2-component and 5-component gas mixtures with 1 ≤ SC ≤ 2 and zero fuel utilization for 10 h, followed by measurement of the resulting carbon mass. The effect of current density was explored by measuring carbon mass under conditions known to be prone to coking while increasing the current density until the cell was carbon-free. The SC coking thresholds were measured to be ∼1.04 and ∼1.18 at 600 and 700 °C, respectively. Current density experiments validated the thresholds measured with respect to fuel utilization and steam:carbon ratio. Coking thresholds at 600 °C could be predicted with thermodynamic equilibrium calculations when the Gibbs free energy of carbon was appropriately modified. Here, the Gibbs free energy of carbon on nickel-based anode support cermets was measured to be -6.91 ± 0.08 kJ mol-1. The results of this two-part publication show that thermodynamic equilibrium calculations with appropriate modification to the Gibbs free energy of solid-phase carbon can be used to predict coking thresholds on nickel-based anodes at 600-700 °C.

  3. Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing

    NASA Astrophysics Data System (ADS)

    Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.

    2013-12-01

    The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months capturing this potential power is severely impacted by the meteorological conditions, in the form of icing. Predicting the expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, determination of wind turbine site locations and long-term energy estimates which are used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.

  4. Assessing conservation relevance of organism-environment relations using predicted changes in response variables

    USGS Publications Warehouse

    Gutzwiller, Kevin J.; Barrow, Wylie C.; White, Joseph D.; Johnson-Randall, Lori; Cade, Brian S.; Zygo, Lisa M.

    2010-01-01

    1. Organism–environment models are used widely in conservation. The degree to which they are useful for informing conservation decisions – the conservation relevance of these relations – is important because lack of relevance may lead to misapplication of scarce conservation resources or failure to resolve important conservation dilemmas. Even when models perform well based on model fit and predictive ability, conservation relevance of associations may not be clear without also knowing the magnitude and variability of predicted changes in response variables. 2. We introduce a method for evaluating the conservation relevance of organism–environment relations that employs confidence intervals for predicted changes in response variables. The confidence intervals are compared to a preselected magnitude of change that marks a threshold (trigger) for conservation action. To demonstrate the approach, we used a case study from the Chihuahuan Desert involving relations between avian richness and broad-scale patterns of shrubland. We considered relations for three winters and two spatial extents (1- and 2-km-radius areas) and compared predicted changes in richness to three thresholds (10%, 20% and 30% change). For each threshold, we examined 48 relations. 3. The method identified seven, four and zero conservation-relevant changes in mean richness for the 10%, 20% and 30% thresholds respectively. These changes were associated with major (20%) changes in shrubland cover, mean patch size, the coefficient of variation for patch size, or edge density but not with major changes in shrubland patch density. The relative rarity of conservation-relevant changes indicated that, overall, the relations had little practical value for informing conservation decisions about avian richness. 4. The approach we illustrate is appropriate for various response and predictor variables measured at any temporal or spatial scale. The method is broadly applicable across ecological environments, conservation objectives, types of statistical predictive models and levels of biological organization. By focusing on magnitudes of change that have practical significance, and by using the span of confidence intervals to incorporate uncertainty of predicted changes, the method can be used to help improve the effectiveness of conservation efforts.

  5. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy.

    PubMed

    Grebenstein, Patricia E; Burroughs, Danielle; Roiko, Samuel A; Pentel, Paul R; LeSage, Mark G

    2015-06-01

    The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy*

    PubMed Central

    Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.

    2015-01-01

    Background: The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods: The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results: Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions: These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231

  7. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    PubMed

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: Estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • Estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
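
    A direct sketch of how the fitted model above could be applied to a single nodule: the estimated 95th percentile APE (coefficients as quoted in the abstract) serves as an individualized growth threshold and is compared against the uniform 25% rule. The example nodule values are illustrative.

    ```python
    def estimated_95th_percentile_ape(svp, ap):
        """Individualized variability bound (%) from the abstract's model:
        37.82 * surface voxel proportion + 48.60 * attachment proportion - 10.87."""
        return 37.82 * svp + 48.60 * ap - 10.87

    def grew(volume_baseline_mm3, volume_followup_mm3, threshold_percent):
        """True if the volume change exceeds the given percentage threshold."""
        change = 100.0 * (volume_followup_mm3 - volume_baseline_mm3) / volume_baseline_mm3
        return change > threshold_percent

    # Example nodule (illustrative numbers): small, partly attached to a vessel.
    svp, ap = 0.55, 0.20       # surface voxel / attachment proportions
    v0, v1 = 120.0, 148.0      # mm3 at baseline and follow-up (+23.3%)

    individual_threshold = estimated_95th_percentile_ape(svp, ap)   # ~19.7%
    print(f"individualized threshold: {individual_threshold:.1f}%")
    print(f"growth call, individualized: {grew(v0, v1, individual_threshold)}")
    print(f"growth call, uniform 25%:    {grew(v0, v1, 25.0)}")
    ```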

  8. Use of Biotechnological Devices in the Quantification of Psychophysiological Workload of Professional Chess Players.

    PubMed

    Fuentes, Juan P; Villafaina, Santos; Collado-Mateo, Daniel; de la Vega, Ricardo; Gusi, Narcis; Clemente-Suárez, Vicente Javier

    2018-01-19

    Psychophysiological requirements of chess players are poorly understood, and periodization of training is often made without any empirical basis. For this reason, the aim of the present study was to investigate the psychophysiological response and quantify the player's internal load during and after playing a chess game. The participant was an elite 33-year-old male chess player ranked among the 300 best chess players in the world. Cortical arousal (critical flicker fusion threshold), electroencephalographic activity (theta Fz/alpha Pz ratio) and autonomic modulation (heart rate variability) were analyzed. Data revealed that cortical arousal by critical flicker fusion threshold and the theta Fz/alpha Pz ratio increased and heart rate variability decreased during the chess game. All these changes indicated that internal load increased during the chess game. In addition, pre-activation was detected in the pre-game measure, suggesting that the prefrontal cortex might be activated in preparation for the game. For these reasons, electroencephalogram, critical flicker fusion threshold and heart rate variability analysis may be highly applicable tools to control and monitor workload in chess players.

  9. Comparison of body mass index, waist circumference, and waist to height ratio in the prediction of hypertension and diabetes mellitus: Filipino-American women cardiovascular study.

    PubMed

    Battie, Cynthia A; Borja-Hart, Nancy; Ancheta, Irma B; Flores, Rene; Rao, Goutham; Palaniappan, Latha

    2016-12-01

    The relative ability of three obesity indices to predict hypertension (HTN) and diabetes (DM) and the validity of using Asian-specific thresholds of these indices were examined in Filipino-American women (FAW). Filipino-American women (n = 382), 40-65 years of age were screened for hypertension (HTN) and diabetes (DM) in four major US cities. Body mass index (BMI), waist circumference (WC) and waist circumference to height ratio (WHtR) were measured. ROC analyses determined that the three obesity measurements were similar in predicting HTN and DM (AUC: 0.6-0.7). The universal WC threshold of ≥ 35 in. missed 13% of the hypertensive patients and 12% of the diabetic patients. The Asian WC threshold of ≥ 31.5 in. increased detection of HTN and DM but with a high rate of false positives. The traditional BMI ≥ 25 kg/m² threshold missed 35% of those with hypertension and 24% of those with diabetes. The Asian BMI threshold improved detection but resulted in a high rate of false positives. The suggested WHtR cut-off of ≥ 0.5 missed only 1% of those with HTN and 0% of those with DM. The three obesity measurements had similar but modest ability to predict HTN and DM in FAW. Using Asian-specific thresholds increased accuracy but with a high rate of false positives. Whether FAW, especially at older ages, should be encouraged to reach these lower thresholds needs further investigation because of the high false positive rates.
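
    A brief sketch of the kind of ROC analysis used above: compute the AUC for waist-to-height ratio as a predictor of hypertension and pick a data-driven cut-off via the Youden index. The data are simulated, and the Youden criterion is one common choice rather than necessarily the authors' method.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(8)
    n = 400

    # Simulated waist-to-height ratios and hypertension status.
    whtr = rng.normal(0.55, 0.07, n)
    p_htn = 1 / (1 + np.exp(-(whtr - 0.55) * 12))   # risk rises with WHtR
    htn = rng.binomial(1, p_htn)

    auc = roc_auc_score(htn, whtr)
    fpr, tpr, cutoffs = roc_curve(htn, whtr)
    youden = tpr - fpr
    best_cutoff = cutoffs[np.argmax(youden)]

    print(f"AUC = {auc:.2f}")
    print(f"Youden-optimal WHtR cut-off = {best_cutoff:.2f} "
          f"(compare with the fixed 0.5 rule)")
    ```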

  10. Effect of MR Imaging Contrast Thresholds on Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes: A Subgroup Analysis of the ACRIN 6657/I-SPY 1 TRIAL

    PubMed Central

    Li, Wen; Arasu, Vignesh; Newitt, David C.; Jones, Ella F.; Wilmes, Lisa; Gibbs, Jessica; Kornak, John; Joe, Bonnie N.; Esserman, Laura J.; Hylton, Nola M.

    2016-01-01

    Functional tumor volume (FTV) measurements by dynamic contrast-enhanced magnetic resonance imaging can predict treatment outcomes for women receiving neoadjuvant chemotherapy for breast cancer. Here, we explore whether the contrast thresholds used to define FTV could be adjusted by breast cancer subtype to improve predictive performance. Absolute FTV and percent change in FTV (ΔFTV) at sequential time-points during treatment were calculated and investigated as predictors of pathologic complete response at surgery. Early percent enhancement threshold (PEt) and signal enhancement ratio threshold (SERt) were varied. The predictive performance of the resulting FTV predictors was evaluated using the area under the receiver operating characteristic curve. A total of 116 patients were studied, both as a full cohort and in the following groups defined by hormone receptor (HR) and HER2 receptor subtype: 45 HR+/HER2−, 39 HER2+, and 30 triple-negative. High AUCs were found at different ranges of PEt and SERt levels in different subtypes. Findings from this study suggest that the predictive performance of MRI for treatment response varies with contrast thresholds, and that pathologic complete response prediction may be improved through subtype-specific contrast enhancement thresholds. A validation study is underway with a larger patient population.
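
    The threshold-tuning idea can be sketched as a grid search over candidate PEt/SERt values, scoring each by the AUC for pathologic complete response; everything below (maps, outcomes, voxel size) is synthetic placeholder data, not the ACRIN 6657/I-SPY 1 pipeline.

      import numpy as np
      from itertools import product
      from sklearn.metrics import roc_auc_score

      def ftv(pe_map, ser_map, voxel_vol_ml, pe_thresh, ser_thresh):
          """Functional tumor volume: volume of voxels exceeding both the
          early-enhancement and SER thresholds (connectivity ignored here)."""
          mask = (pe_map >= pe_thresh) & (ser_map >= ser_thresh)
          return mask.sum() * voxel_vol_ml

      rng = np.random.default_rng(1)
      patients = [(rng.uniform(0, 250, (16, 16)), rng.uniform(0, 2.5, (16, 16)))
                  for _ in range(30)]                  # (PE map, SER map) per patient
      pcr = rng.integers(0, 2, size=30)                # hypothetical pCR outcomes

      best = None
      for pe_t, ser_t in product(range(50, 230, 20), np.arange(0.0, 2.1, 0.5)):
          vols = np.array([ftv(pe, ser, 0.001, pe_t, ser_t) for pe, ser in patients])
          auc = roc_auc_score(pcr, -vols)              # smaller FTV taken to favour response
          if best is None or auc > best[0]:
              best = (round(auc, 2), pe_t, float(ser_t))
      print("best (AUC, PEt, SERt):", best)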

  11. Evaluation of waist-to-height ratio to predict 5 year cardiometabolic risk in sub-Saharan African adults.

    PubMed

    Ware, L J; Rennie, K L; Kruger, H S; Kruger, I M; Greeff, M; Fourie, C M T; Huisman, H W; Scheepers, J D W; Uys, A S; Kruger, R; Van Rooyen, J M; Schutte, R; Schutte, A E

    2014-08-01

    Simple, low-cost central obesity measures may help identify individuals with increased cardiometabolic disease risk, although it is unclear which measures perform best in African adults. We aimed to: 1) cross-sectionally compare the accuracy of existing waist-to-height ratio (WHtR) and waist circumference (WC) thresholds to identify individuals with hypertension, pre-diabetes, or dyslipidaemia; 2) identify optimal WC and WHtR thresholds to detect CVD risk in this African population; and 3) assess which measure best predicts 5-year CVD risk. Black South Africans (577 men, 942 women, aged >30 years) were recruited by random household selection from four North West Province communities. Demographic and anthropometric measures were taken. Recommended diagnostic thresholds (WC > 80 cm for women, >94 cm for men; WHtR > 0.5) were evaluated to predict blood pressure, fasting blood glucose, lipids, and glycated haemoglobin measured at baseline and 5-year follow-up. Women were significantly more overweight than men at baseline (mean body mass index (BMI): women 27.3 ± 7.4 kg/m², men 20.9 ± 4.3 kg/m²; median WC: women 81.9 cm (interquartile range 61-103 cm), men 74.7 cm (63-87 cm); all P < 0.001). In women, both WC and WHtR significantly predicted all cardiometabolic risk factors after 5 years. In men, even after adjusting the WC threshold based on ROC analysis, WHtR better predicted overall 5-year risk. Neither measure predicted hypertension in men. The WHtR threshold of >0.5 appears to be more consistently supported and may provide a better predictor of future cardiometabolic risk in sub-Saharan Africa. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. What matters after sleeve gastrectomy: patient characteristics or surgical technique?

    PubMed

    Dhar, Vikrom K; Hanseman, Dennis J; Watkins, Brad M; Paquette, Ian M; Shah, Shimul A; Thompson, Jonathan R

    2018-03-01

    The impact of operative technique on outcomes in laparoscopic sleeve gastrectomy has been explored previously; however, the relative importance of patient characteristics remains unknown. Our aim was to characterize national variability in operative technique for laparoscopic sleeve gastrectomy and determine whether patient-specific factors are more critical to predicting outcomes. We queried the database of the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program for laparoscopic sleeve gastrectomies performed in 2015 (n = 88,845). Logistic regression models were used to determine predictors of postoperative outcomes. In 2015, >460 variations of laparoscopic sleeve gastrectomy were performed based on combinations of bougie size, distance from the pylorus, use of staple line reinforcement, and oversewing of the staple line. Despite such substantial variability, technique variants were not predictive of outcomes, including perioperative morbidity, leak, or bleeding (all P ≥ .05). Instead, preoperative patient characteristics were found to be more predictive of these outcomes after laparoscopic sleeve gastrectomy. Only a history of gastroesophageal reflux disease (odds ratio 1.44, 95% confidence interval 1.08-1.91, P < .01) was associated with leak. Considerable variability exists in technique among surgeons nationally, but patient characteristics are more predictive of adverse outcomes after laparoscopic sleeve gastrectomy. Bundled payments and reimbursement policies should account for patient-specific factors in addition to current accreditation and volume thresholds when deciding risk-adjustment strategies. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Optimized breast MRI functional tumor volume as a biomarker of recurrence-free survival following neoadjuvant chemotherapy.

    PubMed

    Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M

    2014-08-01

    To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72 years, median age 48.6 years) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent prechemotherapy (MRI1) and postchemotherapy (MRI4) breast MRI. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh), and FTV(PEthresh, SERthresh) was computed by summing all voxels meeting the threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazard model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak, with the maximum hazard ratio and highest significance occurring at a PE threshold of 70% and an SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.

  14. Crossing the Threshold From Porn Use to Porn Problem: Frequency and Modality of Porn Use as Predictors of Sexually Coercive Behaviors.

    PubMed

    Marshall, Ethan A; Miller, Holly A; Bouffard, Jeff A

    2017-11-01

    According to recent statistics, as many as one in five female college students are victims of sexual assault during their college career. To combat what has been called the "Campus Rape Crisis," researchers have attempted to understand what variables are associated with sexually coercive behaviors in college males. Although investigators have found support for the relationship between pornography consumption and sexually coercive behavior, researchers typically operationalize pornography use in terms of frequency of use. Furthermore, frequency of use has been assessed vaguely and inconsistently. The current study offered a more concrete assessment of frequency of use and an additional variable not yet included for pornography use: number of modalities. Beyond examining the relationship between pornography use and sexual coercion likelihood, the current study was the first to use pornography variables in a threshold analysis to test whether there is a cut point that is predictive of sexual coercion likelihood. Analyses were conducted with a sample of 463 college males. Results indicated that both pornography use variables were significantly related to a higher likelihood of sexually coercive behaviors. When both frequency of use and number of modalities were included in the model, modalities were significant and frequency was not. In addition, significant thresholds for both pornography variables that predicted sexual coercion likelihood were identified. These results imply that factors other than frequency of use, such as number of modalities, may be more important for the prediction of sexually coercive behaviors. Furthermore, threshold analyses revealed that the most significant increase in risk occurred between one modality and two, indicating that it is not pornography use in general that is related to sexual coercion likelihood, but rather, specific aspects of pornography use.

  15. Quantifying patterns of change in marine ecosystem response to multiple pressures.

    PubMed

    Large, Scott I; Fay, Gavin; Friedland, Kevin D; Link, Jason S

    2015-01-01

    The ability to understand and ultimately predict ecosystem response to multiple pressures is paramount to successfully implementing ecosystem-based management. Threshold shifts and nonlinear patterns in ecosystem responses can be used to determine reference points that identify levels of a pressure that may drastically alter ecosystem status, which can inform management action. However, quantifying ecosystem reference points has proven elusive due in large part to the multi-dimensional nature of both ecosystem pressures and ecosystem responses. We used ecological indicators, synthetic measures of ecosystem status and functioning, to enumerate important ecosystem attributes and to reduce the complexity of the Northeast Shelf Large Marine Ecosystem (NES LME). Random forests were used to quantify the importance of four environmental and four anthropogenic pressure variables to the value of ecological indicators, and to quantify shifts in aggregate ecological indicator response along pressure gradients. Anthropogenic pressure variables were critical defining features and were able to predict an average of 8-13% (up to 25-66% for individual ecological indicators) of the variation in ecological indicator values, whereas environmental pressures were able to predict an average of 1-5% (up to 9-26% for individual ecological indicators) of ecological indicator variation. Each pressure variable predicted variation in a different suite of ecological indicators, and the shapes of ecological indicator responses along pressure gradients were generally nonlinear. Threshold shifts in ecosystem response to exploitation, the most important pressure variable, occurred when commercial landings were 20 and 60% of total surveyed biomass. Although present, threshold shifts in ecosystem response to environmental pressures were much less important, which suggests that anthropogenic pressures have significantly altered the ecosystem structure and functioning of the NES LME. Gradient response curves provide ecologically informed transformations of pressure variables to explain patterns of ecosystem structure and functioning. By concurrently identifying thresholds for a suite of ecological indicator responses to multiple pressures, we demonstrate that ecosystem reference points can be evaluated and used to support ecosystem-based management.
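
    A minimal sketch of the kind of analysis described (random-forest importance of pressure variables for an ecological indicator, with out-of-bag variation explained); the pressure matrix and indicator below are simulated stand-ins, not NES LME data, and the paper's gradient-style threshold detection is not reproduced here.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)
      n = 200
      pressures = rng.uniform(size=(n, 8))   # 4 anthropogenic + 4 environmental (hypothetical)
      # Indicator with a threshold-like response to the first pressure gradient
      indicator = np.where(pressures[:, 0] > 0.4, 1.0, 0.2) + 0.1 * rng.standard_normal(n)

      rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
      rf.fit(pressures, indicator)
      print("OOB R^2 (variation explained):", round(rf.oob_score_, 2))
      print("importances:", np.round(rf.feature_importances_, 2))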

  16. Quantitative Sensory Testing Predicts Pregabalin Efficacy in Painful Chronic Pancreatitis

    PubMed Central

    Olesen, Søren S.; Graversen, Carina; Bouwense, Stefan A. W.; van Goor, Harry; Wilder-Smith, Oliver H. G.; Drewes, Asbjørn M.

    2013-01-01

    Background A major problem in pain medicine is the lack of knowledge about which treatment suits a specific patient. We tested the ability of quantitative sensory testing to predict the analgesic effect of pregabalin and placebo in patients with chronic pancreatitis. Methods Sixty-four patients with painful chronic pancreatitis received pregabalin (150–300 mg BID) or matching placebo for three consecutive weeks. Analgesic effect was documented in a pain diary based on a visual analogue scale. Responders were defined as patients with a reduction in clinical pain score of 30% or more after three weeks of study treatment compared to baseline recordings. Prior to study medication, pain thresholds to electric skin and pressure stimulation were measured in dermatomes T10 (pancreatic area) and C5 (control area). To eliminate inter-subject differences in absolute pain thresholds, an index of sensitivity between stimulation areas was determined (ratio of pain detection thresholds in pancreatic versus control area, ePDT ratio). Pain modulation was recorded by a conditioned pain modulation paradigm. A support vector machine was used to screen sensory parameters for their predictive power of pregabalin efficacy. Results The pregabalin responder group was hypersensitive to electric tetanic stimulation of the pancreatic area (ePDT ratio: 1.2 (0.9–1.3)) compared to the non-responder group (ePDT ratio: 1.6 (1.5–2.0)) (P = 0.001). The electrical pain detection ratio was predictive of the pregabalin effect with a classification accuracy of 83.9% (P = 0.007). The corresponding sensitivity was 87.5% and specificity was 80.0%. No other parameters were predictive of pregabalin or placebo efficacy. Conclusions The present study provides the first evidence that quantitative sensory testing predicts the analgesic effect of pregabalin in patients with painful chronic pancreatitis. The method can be used to tailor pain medication based on the patient's individual sensory profile and thus comprises a significant step towards personalized pain medicine. PMID:23469256
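
    A toy version of the classification step (a support vector machine separating responders from non-responders on the ePDT ratio) is sketched below; the simulated ratios only mimic the group means quoted above, and the cross-validation setup is an assumption, not the study's procedure.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      responders = rng.normal(1.2, 0.20, 30)        # hypersensitive: lower ePDT ratio
      non_responders = rng.normal(1.7, 0.25, 34)
      x = np.concatenate([responders, non_responders]).reshape(-1, 1)
      y = np.concatenate([np.ones(30), np.zeros(34)])

      clf = SVC(kernel="linear", C=1.0)
      acc = cross_val_score(clf, x, y, cv=5, scoring="accuracy")
      print("cross-validated accuracy:", round(acc.mean(), 2))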

  17. Predicting potentially toxigenic Pseudo-nitzschia blooms in the Chesapeake Bay

    USGS Publications Warehouse

    Anderson, C.R.; Sapiano, M.R.P.; Prasad, M.B.K.; Long, W.; Tango, P.J.; Brown, C.W.; Murtugudde, R.

    2010-01-01

    Harmful algal blooms are now recognized as a significant threat to the Chesapeake Bay as they can severely compromise the economic viability of important recreational and commercial fisheries in the largest estuary of the United States. This study describes the development of empirical models for the potentially domoic acid-producing Pseudo-nitzschia species complex present in the Bay, developed from a 22-year time series of cell abundance and concurrent measurements of hydrographic and chemical properties. Using a logistic Generalized Linear Model (GLM) approach, model parameters and performance were compared over a range of Pseudo-nitzschia bloom thresholds relevant to toxin production by different species. Small-threshold blooms (≥10 cells mL⁻¹) are explained by time of year, location, and variability in surface values of phosphate, temperature, nitrate plus nitrite, and freshwater discharge. Medium- (100 cells mL⁻¹) to large-threshold (1000 cells mL⁻¹) blooms are further explained by salinity, silicic acid, dissolved organic carbon, and light attenuation (Secchi) depth. These predictors are similar to other models for Pseudo-nitzschia blooms on the west coast, suggesting commonalities across ecosystems. Hindcasts of bloom probabilities at a 19% bloom prediction point yield a Heidke Skill Score of ~53%, a Probability of Detection of ~75%, a False Alarm Ratio of ~52%, and a Probability of False Detection of ~9%. The implication of possible future changes in Baywide nutrient stoichiometry on Pseudo-nitzschia blooms is discussed. © 2010 Elsevier B.V.
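
    The verification scores quoted above all come from a 2x2 bloom/no-bloom contingency table; the sketch below shows the standard formulas, with hypothetical counts chosen only to roughly reproduce the reported values.

      def forecast_skill(hits, misses, false_alarms, correct_negatives):
          """Standard 2x2 verification scores for a yes/no bloom forecast."""
          a, c, b, d = hits, misses, false_alarms, correct_negatives
          n = a + b + c + d
          pod = a / (a + c)                 # probability of detection
          far = b / (a + b)                 # false alarm ratio
          pofd = b / (b + d)                # probability of false detection
          expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
          hss = (a + d - expected) / (n - expected)   # Heidke Skill Score
          return pod, far, pofd, hss

      # Hypothetical counts giving POD ~0.75, FAR ~0.52, POFD ~0.09, HSS ~0.53
      print(forecast_skill(hits=45, misses=15, false_alarms=48, correct_negatives=492))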

  18. Identification of ecological thresholds from variations in phytoplankton communities among lakes: contribution to the definition of environmental standards.

    PubMed

    Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc

    2016-04-01

    In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes, which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables, including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared to be a powerful method, as a first exploratory step, for detecting ecological thresholds at a large spatial scale. The thresholds identified here must be reinforced by the separate analysis of other aquatic communities and may then be used to set protective environmental standards after consideration of natural variability among lakes.

  19. Blood pressure-to-height ratio for screening prehypertension and hypertension in Chinese children.

    PubMed

    Dong, B; Wang, Z; Wang, H-J; Ma, J

    2015-10-01

    The diagnosis of hypertension in children is complicated because of the multiple age-, sex- and height-specific thresholds. To simplify the process of diagnosis, blood pressure-to-height ratio (BPHR) was employed in this study. Data were obtained from a Chinese national survey conducted in 2010, and 197 191 children aged 7-17 years were included. High normal and elevated blood pressure (BP) were defined according to the National High Blood Pressure Education Program (NHBPEP) Working Group definition. The optimal thresholds were selected by Youden's index. Sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV) and area under the curve (AUC) were assessed for the performance of these thresholds. The systolic and diastolic BPHR thresholds for identifying high normal BP were 0.84/0.55, 0.78/0.50 and 0.75/0.46 for children aged 7-8 years, 9-11 years and 12-17 years, respectively. The corresponding thresholds for identifying elevated BP were 0.87/0.57, 0.81/0.53 and 0.76/0.49, respectively. These proposed thresholds revealed high sensitivity and NPVs, all above 0.96, moderate to high specificity and AUCs, and low PPVs. Our finding suggested the proposed BPHR thresholds were accurate for identifying children without high normal or elevated BP, and could be employed to simplify the procedure of screening prehypertension and hypertension in children.
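
    A small sketch of threshold selection by Youden's index, the criterion named above; the BPHR values and labels are made up for illustration and the procedure is generic rather than the survey's exact analysis.

      import numpy as np

      def youden_optimal_threshold(values, labels):
          """Return the cut-off maximizing sensitivity + specificity - 1."""
          values = np.asarray(values, dtype=float)
          labels = np.asarray(labels, dtype=bool)
          best_t, best_j = None, -1.0
          for t in np.unique(values):
              flagged = values >= t
              sens = np.mean(flagged[labels])
              spec = np.mean(~flagged[~labels])
              if sens + spec - 1.0 > best_j:
                  best_t, best_j = float(t), sens + spec - 1.0
          return best_t, best_j

      # Hypothetical systolic BP-to-height ratios and elevated-BP labels
      bphr = [0.70, 0.74, 0.78, 0.80, 0.83, 0.86, 0.72, 0.77]
      elevated = [0, 0, 0, 1, 1, 1, 0, 1]
      print(youden_optimal_threshold(bphr, elevated))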

  20. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  1. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions, and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
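
    The two stress-threshold choices can be illustrated with a minimal Degree-Heating-Month accumulator; the SST series, climatology and 3-month accumulation window below are assumptions for illustration, not values from the CMIP3 runs.

      import numpy as np

      def degree_heating_months(sst_monthly, clim_max, threshold, window=3):
          """Sum monthly SST exceedances above (clim_max + threshold) over a
          rolling window (window length is an assumption here)."""
          excess = np.clip(np.asarray(sst_monthly) - (clim_max + threshold), 0.0, None)
          return np.array([excess[max(0, i - window + 1): i + 1].sum()
                           for i in range(len(excess))])

      clim_max = 29.0                      # hypothetical climatological max SST (deg C)
      sigma_annual_maxima = 0.4            # hypothetical 1-sigma of annual SST maxima
      sst = [28.5, 29.2, 29.8, 30.4, 30.1, 29.3]

      print("fixed 1 deg C threshold:", degree_heating_months(sst, clim_max, 1.0))
      print("2-sigma threshold:", degree_heating_months(sst, clim_max, 2 * sigma_annual_maxima))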

  2. Detection in fixed and random noise in foveal and parafoveal vision explained by template learning

    NASA Technical Reports Server (NTRS)

    Beard, B. L.; Ahumada, A. J. Jr; Watson, A. B. (Principal Investigator)

    1999-01-01

    Foveal and parafoveal contrast detection thresholds for Gabor and checkerboard targets were measured in white noise by means of a two-interval forced-choice paradigm. Two white-noise conditions were used: fixed and twin. In the fixed noise condition a single noise sample was presented in both intervals of all the trials. In the twin noise condition the same noise sample was used in the two intervals of a trial, but a new sample was generated for each trial. Fixed noise conditions usually resulted in lower thresholds than twin noise. Template learning models are presented that attribute this advantage of fixed over twin noise either to fixed memory templates' reducing uncertainty by incorporation of the noise or to the introduction, by the learning process itself, of more variability in the twin noise condition. Quantitative predictions of the template learning process show that it contributes to the accelerating nonlinear increase in performance with signal amplitude at low signal-to-noise ratios.

  3. Poor outcome prediction by burst suppression ratio in adults with post-anoxic coma without hypothermia.

    PubMed

    Yang, Qinglin; Su, Yingying; Hussain, Mohammed; Chen, Weibi; Ye, Hong; Gao, Daiquan; Tian, Fei

    2014-05-01

    Burst suppression ratio (BSR) is a quantitative electroencephalography (qEEG) parameter. The purpose of our study was to compare the accuracy of BSR with that of other EEG parameters in predicting poor outcomes in adults who sustained post-anoxic coma while not being subjected to therapeutic hypothermia. EEG was recorded at least once within 7 days of post-anoxic coma onset. Electrodes were placed according to the international 10-20 system, using a 16-channel layout. Each EEG expert scored raw EEG using a grading scale adapted from Young and scored amplitude-integrated electroencephalography tracings, in addition to obtaining qEEG parameters defined as BSR with a defined threshold. Glasgow Outcome Scale scores of 1 and 2 at 3 months, determined by two blinded neurologists, were defined as poor outcome. Sixty patients with a Glasgow Coma Scale score of 8 or less after an anoxic event were included. The sensitivity (97.1%), specificity (73.3%), positive predictive value (82.5%), and negative predictive value (95.0%) of BSR in predicting poor outcome were higher than those of other EEG variables. BSR1 and BSR2 were reliable in predicting death (area under the curve > 0.8, P < 0.05), with the respective cutoff points being 39.8% and 61.6%. BSR1 was reliable in predicting poor outcome (area under the curve = 0.820, P < 0.05) with a cutoff point of 23.9%. BSR1 was also an independent predictor of increased risk of death (odds ratio = 1.042, 95% confidence interval: 1.012-1.073, P = 0.006). BSR may be a better predictor of poor outcome in patients with post-anoxic coma who do not undergo therapeutic hypothermia than other qEEG parameters.

  4. Variability in expression of anadromy by female Oncorhynchus mykiss within a river network

    USGS Publications Warehouse

    Mills, Justin S.; Dunham, Jason B.; Reeves, Gordon H.; McMillan, John R.; Zimmerman, Christian E.; Jordan, Chris E.

    2012-01-01

    We described and predicted spatial variation in marine migration (anadromy) of female Oncorhynchus mykiss in the John Day River watershed, Oregon. We collected 149 juvenile O. mykiss across 72 sites and identified locations used by anadromous females by assigning maternal origin (anadromous versus non-anadromous) to each juvenile. These assignments used comparisons of strontium to calcium ratios in otolith primordia and freshwater growth regions to indicate maternal origin. We used logistic regression to predict probability of anadromy in relation to mean annual stream runoff using data from a subset of individuals. This model correctly predicted anadromy in a second sample of individuals with a moderate level of accuracy (e.g., 68% correctly predicted with a 0.5 classification threshold). Residuals from the models were not spatially autocorrelated, suggesting that remaining variability in the expression of anadromy was due to localized influences, as opposed to broad-scale gradients unrelated to mean annual stream runoff. These results are important for the management of O. mykiss because anadromous individuals (steelhead) within the John Day River watershed are listed as a threatened species, and it is difficult to discern juvenile steelhead from non-anadromous individuals (rainbow trout) in the field. Our results provide a broad-scale description and prediction of locations supporting anadromy, and new insight for habitat restoration, monitoring, and research to better manage and understand the expression of anadromy in O. mykiss.

  5. A study of life prediction differences for a nickel-base Alloy 690 using a threshold and a non-threshold model

    NASA Astrophysics Data System (ADS)

    Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.

    2009-10-01

    In this paper we compare and contrast the crack growth rate of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio and stress-intensity range but did not compensate for the fatigue threshold of the material. As additional test data on both plate and tube product became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data that were above 1 × 10⁻¹¹ m/s. However, the test data at the lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, it predicted that there is no operating stress below which the material is effectively immune to fatigue crack growth. Because it over-predicts the growth rate below 1 × 10⁻¹¹ m/s, due to a combination of low stress, small crack size and long rise-time, the non-threshold model in general leads to an under-prediction of the total available life of the components.
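
    The qualitative difference between the two model forms can be shown with a generic Paris-type growth law, with and without a threshold term; the constants below are purely illustrative and are not Alloy 690 parameters.

      import numpy as np

      def growth_rate_no_threshold(delta_k, c=1e-12, m=3.0):
          """Paris-type law: da/dN = C * (dK)**m, i.e. growth at any dK."""
          return c * delta_k**m

      def growth_rate_with_threshold(delta_k, c=1e-12, m=3.0, dk_th=8.0):
          """One common threshold form: growth only from the part of dK above dK_th."""
          effective = np.clip(delta_k - dk_th, 0.0, None)
          return c * effective**m

      for dk in (6.0, 10.0, 20.0):   # illustrative stress-intensity ranges
          print(dk, growth_rate_no_threshold(dk), growth_rate_with_threshold(dk))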

  6. Mean Platelet Volume (MPV), Platelet Distribution Width (PDW), Platelet Count and Plateletcrit (PCT) as predictors of in-hospital paediatric mortality: a case-control Study.

    PubMed

    Golwala, Zainab Mohammedi; Shah, Hardik; Gupta, Neeraj; Sreenivas, V; Puliyel, Jacob M

    2016-06-01

    Thrombocytopenia has been shown to predict mortality. We hypothesize that platelet indices may be more useful prognostic indicators. Our study subjects were children one month to 14 years old admitted to our hospital. The aim was to determine whether platelet count, plateletcrit (PCT), mean platelet volume (MPV), platelet distribution width (PDW) and their ratios can predict mortality in hospitalised children. Children who died during hospital stay were the cases. Controls were age-matched children admitted contemporaneously. The first blood sample after admission was used for analysis. A receiver operating characteristic (ROC) curve was used to identify the best threshold for the measured variables and the ratios studied. Multiple regression analysis was done to identify independent predictors of mortality. Forty cases and forty controls were studied. Platelet count, PCT and the ratios of MPV/Platelet count, MPV/PCT, PDW/Platelet count, PDW/PCT and MPV × PDW/Platelet count × PCT were significantly different among children who survived compared to those who died. On multiple regression analysis, the ratios MPV/PCT, PDW/Platelet count and MPV/Platelet count were risk factors for mortality, with odds ratios of 4.31 (95% CI, 1.69-10.99), 3.86 (95% CI, 1.53-9.75) and 3.45 (95% CI, 1.38-8.64), respectively. In 67% of the patients who died, the MPV/PCT ratio was above 41.8 and the PDW/Platelet count was above 3.86. In 65% of patients who died, the MPV/Platelet count was above 3.45. The MPV/PCT, PDW/Platelet count and MPV/Platelet count, in the first sample after admission in this case-control study, were predictors of mortality and could predict 65% to 67% of deaths accurately.

  7. Physical Performance Measures Associated With Locomotive Syndrome in Middle-Aged and Older Japanese Women.

    PubMed

    Nakamura, Misa; Hashizume, Hiroshi; Oka, Hiroyuki; Okada, Morihiro; Takakura, Rie; Hisari, Ayako; Yoshida, Munehito; Utsunomiya, Hirotoshi

    2015-01-01

    The Japanese Orthopaedic Association proposed a concept called locomotive syndrome (LS) to identify middle-aged and older adults at high risk of requiring health care services because of problems with locomotion. It is important to identify factors associated with the development of LS. Physical performance measures such as walking speed and standing balance are highly predictive of subsequent disability and mortality in older adults. However, there is little evidence about the relationship between physical performance measures and LS. To determine the physical performance measures associated with LS, the threshold values for discriminating individuals with and without LS, and the odds ratio of LS according to performance greater than or less than these thresholds in middle-aged and older Japanese women. Participants were 126 Japanese women (mean age = 61.8 years). Locomotive syndrome was defined as a score of 16 or more on the 25-question Geriatric Locomotive Function Scale. Physical performance was evaluated using grip strength, unipedal stance time with eyes open, seated toe-touch, and normal and fast 6-m walk time (6 MWT). Variables were compared between LS and non-LS groups. Fourteen participants (11.1%) were classed as having LS. Unipedal stance time, normal 6 MWT, and fast 6 MWT were significantly different between the 2 groups. The LS group had a shorter unipedal stance time and a longer normal and fast 6 MWT than the non-LS group. For these 3 variables, the area under the receiver operating characteristic curve was greater than 0.7, and the threshold for discriminating the non-LS and LS groups was 15 s for unipedal stance time, 4.8 s for normal 6 MWT and 3.6 s for fast 6 MWT. These variables were entered into a multiple logistic regression analysis, which indicated that unipedal stance time less than 15 s was significantly related to LS (odds ratio = 8.46; P < .01). Unipedal stance time was the physical performance measure that was most strongly associated with LS. This measure may be useful for early detection of LS.

  8. Predicting potentially toxigenic Pseudo-nitzschia blooms in the Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Anderson, Clarissa R.; Sapiano, Mathew R. P.; Prasad, M. Bala Krishna; Long, Wen; Tango, Peter J.; Brown, Christopher W.; Murtugudde, Raghu

    2010-11-01

    Harmful algal blooms are now recognized as a significant threat to the Chesapeake Bay as they can severely compromise the economic viability of important recreational and commercial fisheries in the largest estuary of the United States. This study describes the development of empirical models for the potentially domoic acid-producing Pseudo-nitzschia species complex present in the Bay, developed from a 22-year time series of cell abundance and concurrent measurements of hydrographic and chemical properties. Using a logistic Generalized Linear Model (GLM) approach, model parameters and performance were compared over a range of Pseudo-nitzschia bloom thresholds relevant to toxin production by different species. Small-threshold blooms (≥10 cells mL⁻¹) are explained by time of year, location, and variability in surface values of phosphate, temperature, nitrate plus nitrite, and freshwater discharge. Medium- (100 cells mL⁻¹) to large-threshold (1000 cells mL⁻¹) blooms are further explained by salinity, silicic acid, dissolved organic carbon, and light attenuation (Secchi) depth. These predictors are similar to other models for Pseudo-nitzschia blooms on the west coast, suggesting commonalities across ecosystems. Hindcasts of bloom probabilities at a 19% bloom prediction point yield a Heidke Skill Score of ~53%, a Probability of Detection of ~75%, a False Alarm Ratio of ~52%, and a Probability of False Detection of ~9%. The implication of possible future changes in Baywide nutrient stoichiometry on Pseudo-nitzschia blooms is discussed.

  9. A principled approach to setting optimal diagnostic thresholds: where ROC and indifference curves meet.

    PubMed

    Irwin, R John; Irwin, Timothy C

    2011-06-01

    Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
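
    One way to make the expected-utility argument concrete is to score every candidate threshold directly; the sketch below picks the operating point maximizing expected utility, which is equivalent to choosing the ROC point whose slope equals (1 - prevalence)/prevalence times (U_TN - U_FP)/(U_TP - U_FN). The scores, prevalence and utilities are hypothetical.

      import numpy as np

      def optimal_threshold(scores, disease, prevalence, u_tp, u_fn, u_tn, u_fp):
          """Pick the decision threshold on the test score that maximizes
          expected utility per patient."""
          scores = np.asarray(scores, dtype=float)
          disease = np.asarray(disease, dtype=bool)
          best = None
          for t in np.unique(scores):
              positive = scores >= t
              sens = np.mean(positive[disease])
              spec = np.mean(~positive[~disease])
              eu = (prevalence * (sens * u_tp + (1 - sens) * u_fn)
                    + (1 - prevalence) * (spec * u_tn + (1 - spec) * u_fp))
              if best is None or eu > best[0]:
                  best = (round(eu, 3), round(float(t), 2), round(sens, 2), round(spec, 2))
          return best   # (expected utility, threshold, sensitivity, specificity)

      rng = np.random.default_rng(3)
      scores = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 800)])
      status = np.concatenate([np.ones(200, dtype=bool), np.zeros(800, dtype=bool)])
      print(optimal_threshold(scores, status, prevalence=0.2,
                              u_tp=1.0, u_fn=-4.0, u_tn=0.5, u_fp=-0.5))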

  10. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to a Dutch growth data set and a baseball pitcher salary data set reveals interesting insights. The proposed method is implemented in the R package cthreshER.
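
    A bare-bones sketch of the model class (a continuous, bent-line threshold with an asymmetric squared "expectile" loss and a grid search over the threshold) is given below; the estimation details differ from the paper's procedure and the data are simulated.

      import numpy as np
      from scipy.optimize import minimize

      def expectile_loss(params, x, y, t, tau):
          """Asymmetric squared loss for y = b0 + b1*x + b2*max(x - t, 0)."""
          b0, b1, b2 = params
          resid = y - (b0 + b1 * x + b2 * np.maximum(x - t, 0.0))
          weights = np.where(resid < 0, 1.0 - tau, tau)
          return np.sum(weights * resid**2)

      def fit_threshold_expectile(x, y, tau=0.5, n_grid=40):
          """Grid search over the threshold; refit coefficients at each grid point."""
          grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), n_grid)
          best = None
          for t in grid:
              res = minimize(expectile_loss, x0=np.zeros(3), args=(x, y, t, tau))
              if best is None or res.fun < best[0]:
                  best = (res.fun, round(float(t), 2), np.round(res.x, 2))
          return best   # (loss, threshold, [b0, b1, b2])

      rng = np.random.default_rng(5)
      x = rng.uniform(0, 4, 300)
      y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0.0) + rng.normal(0, 0.3, 300)
      print(fit_threshold_expectile(x, y, tau=0.7))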

  11. Predicting the susceptibility to gully initiation in data-poor regions

    NASA Astrophysics Data System (ADS)

    Dewitte, Olivier; Daoudi, Mohamed; Bosco, Claudio; Van Den Eeckhaut, Miet

    2015-01-01

    Permanent gullies are common features in many landscapes and quite often they represent the dominant soil erosion process. Once a gully has initiated, field evidence shows that gully channel formation and headcut migration rapidly occur. In order to prevent the undesired effects of gullying, there is a need to predict the places where new gullies might initiate. From detailed field measurements, studies have demonstrated strong inverse relationships between slope gradient of the soil surface (S) and drainage area (A) at the point of channel initiation across catchments in different climatic and morphological environments. Such slope-area thresholds (S-A) can be used to predict locations in the landscape where gullies might initiate. However, acquiring S-A requires detailed field investigations and accurate, high-resolution digital elevation data, which are usually difficult to acquire. To circumvent this issue, we propose a two-step method that uses published S-A thresholds and a logistic regression analysis (LR). S-A thresholds from the literature are used as proxies for field measurements. The method is calibrated and validated on a watershed close to the town of Algiers, northern Algeria, where gully erosion affects most of the slopes. The gullies extend up to several kilometres in length and cover 16% of the study area. First, we reconstruct the initiation areas of the existing gullies by applying S-A thresholds for similar environments. Then, using the initiation area map as the dependent variable with combinations of topographic and lithological predictor variables, we calibrate several LR models. The approach provides relevant results in terms of statistical reliability, prediction performance, and geomorphological significance. This method, using S-A thresholds with data-driven assessment methods like LR, proves to be efficient when applied to common spatial data and establishes a methodology that will allow similar studies to be undertaken elsewhere.

  12. Clinical predictors of conversion to bipolar disorder in a prospective longitudinal familial high-risk sample: focus on depressive features.

    PubMed

    Frankland, Andrew; Roberts, Gloria; Holmes-Preston, Ellen; Perich, Tania; Levy, Florence; Lenroot, Rhoshel; Hadzi-Pavlovic, Dusan; Breakspear, Michael; Mitchell, Philip B

    2017-11-07

    Identifying clinical features that predict conversion to bipolar disorder (BD) in those at high familial risk (HR) would assist in identifying a more focused population for early intervention. In total 287 participants aged 12-30 (163 HR with a first-degree relative with BD and 124 controls (CONs)) were followed annually for a median of 5 years. We used the baseline presence of DSM-IV depressive, anxiety, behavioural and substance use disorders, as well as a constellation of specific depressive symptoms (as identified by the Probabilistic Approach to Bipolar Depression) to predict the subsequent development of hypo/manic episodes. At baseline, HR participants were significantly more likely to report ⩾4 Probabilistic features (40.4%) when depressed than CONs (6.7%; p < .05). Nineteen HR subjects later developed either threshold (n = 8; 4.9%) or subthreshold (n = 11; 6.7%) hypo/mania. The presence of ⩾4 Probabilistic features was associated with a seven-fold increase in the risk of 'conversion' to threshold BD (hazard ratio = 6.9, p < .05) above and beyond the fourteen-fold increase in risk related to major depressive episodes (MDEs) per se (hazard ratio = 13.9, p < .05). Individual depressive features predicting conversion were psychomotor retardation and ⩾5 MDEs. Behavioural disorders only predicted conversion to subthreshold BD (hazard ratio = 5.23, p < .01), while anxiety and substance disorders did not predict either threshold or subthreshold hypo/mania. This study suggests that specific depressive characteristics substantially increase the risk of young people at familial risk of BD going on to develop future hypo/manic episodes and may identify a more targeted HR population for the development of early intervention programs.

  13. Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.

    PubMed

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-02-01

    To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (range = 45-78) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to using audiometric testing, the authors also used such evaluation measures as the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.

  14. Experimental evidence of a pathogen invasion threshold

    PubMed Central

    Krkošek, Martin

    2018-01-01

    Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts l−1) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
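
    The core idea can be written down in a few lines for a generic environmentally transmitted pathogen: the basic reproduction number scales with host density, so R0 crosses 1 at a critical density. The rate constants below are invented for illustration and are not the fitted Daphnia-Metschnikowia parameters.

      def r0_environmental(host_density, beta, shedding, decay, recovery):
          """R0 for a simple environmental-transmission model: each infected host
          sheds spores at rate `shedding` for 1/recovery time units; each spore
          persists 1/decay and infects at rate beta * host_density."""
          return beta * host_density * shedding / (decay * recovery)

      def invasion_threshold(beta, shedding, decay, recovery):
          """Host density at which R0 = 1."""
          return decay * recovery / (beta * shedding)

      beta, shedding, decay, recovery = 1e-4, 50.0, 2.0, 0.5
      print("critical density (hosts per litre):",
            invasion_threshold(beta, shedding, decay, recovery))
      for n in (20, 80, 320, 640):        # the experimental density range
          print(n, "R0 =", round(r0_environmental(n, beta, shedding, decay, recovery), 2))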

  15. Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.

    PubMed

    Pires, J C M; Souza, A; Pavão, H G; Martins, F G

    2014-09-01

    The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on temporal variation of linear correlation and artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also presenting good predictive performance. Solar radiation, relative humidity and rainfall were considered significant for both O3 regimes; however, wind speed (dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation, low relative humidity and low wind speed. This model proved to be a powerful tool for interpreting O3 behaviour and is useful for defining policy strategies for human health protection regarding air pollution.

  16. Assessing the detection capability of a dense infrasound network in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Che, Il-Young; Le Pichon, Alexis; Kim, Kwangsu; Shin, In-Cheol

    2017-08-01

    The Korea Infrasound Network (KIN) is a dense seismoacoustic array network consisting of eight small-aperture arrays with an average interarray spacing of ∼100 km. The processing of the KIN historical recordings over 10 yr in the 0.05-5 Hz frequency band shows that the dominant sources of signals are microbaroms and human activities. The number of detections correlates well with the seasonal and daily variability of the stratospheric wind dynamics. The spatiotemporal variability of the KIN detection performance is quantified using a frequency-dependent semi-empirical propagation modelling technique. The average detection thresholds predicted for the region of interest by using both the KIN arrays and the International Monitoring System (IMS) infrasound station network at a given frequency of 1.6 Hz are estimated to be 5.6 and 10.0 Pa for two- and three-station coverage, respectively, which is about three times lower than the thresholds predicted by using only the IMS stations. The network performance is significantly enhanced from May to August, with detection thresholds being one order of magnitude lower than during the rest of the year due to prevailing steady stratospheric winds. To validate the simulations, the amplitudes of ground-truth repeated surface mining explosions at an open-pit limestone mine were measured over a 19-month period. Focusing on the spatiotemporal variability of the stratospheric winds, which control to first order where infrasound signals are expected to be detected, the predicted detectable signal amplitude at the mine and the detection capability at one KIN array located at a distance of 175 km are found to be in good agreement with the observations from the measurement campaign. The detection threshold in summer is ∼2 Pa and increases up to ∼300 Pa in winter. Compared with the low and stable thresholds in summer, the high temporal variability of the KIN performance is well predicted throughout the year. Simulations show that the performance of the global infrasound network of the IMS is significantly improved by adding KIN. This study shows the usefulness of dense regional networks to enhance detection capability in regions of interest in the context of future verification of the Comprehensive Nuclear-Test-Ban Treaty.

  17. Polarization asymmetry in two-electron photodetachment - A cogent test of the ionization threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.; Bhatia, A. K.

    1988-01-01

    A very sensitive test of the electron-atom ionization threshold law is suggested: for spin-aligned heavy negative ions it consists of measuring the polarization asymmetry A_PA coming from double detachment by left- versus right-circularly polarized light. The respective yields are worked out for the Te⁻ (5p)⁵ ²P₃/₂ ion. The Coulomb-dipole theory predicts A_PA to be the ratio of two oscillating functions, in sharp contrast to any power law (specifically that of Wannier, 1953), for which the ratio is expected to be a smooth function of energy.

  18. Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect

    NASA Technical Reports Server (NTRS)

    Strand, L. D.; Cohen, Norman S.

    1990-01-01

    Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.

  19. Lymph node ratio predicts disease-specific survival in melanoma patients.

    PubMed

    Xing, Yan; Badgwell, Brian D; Ross, Merrick I; Gershenwald, Jeffrey E; Lee, Jeffrey E; Mansfield, Paul F; Lucci, Anthony; Cormier, Janice N

    2009-06-01

    The objectives of this analysis were to compare various measures associated with lymph node (LN) dissection and to identify threshold values associated with disease-specific survival (DSS) outcomes in patients with melanoma. Patients with lymph node-positive melanoma who underwent therapeutic LN dissection of the neck, axilla, and inguinal region were identified from the SEER database (1988-2005). We performed Cox multivariate analyses to determine the impact of the total number of LNs removed, number of negative LNs removed, and LN ratio on DSS. Multivariate cut-point analyses were conducted for each anatomic region to identify the threshold values associated with the largest improvement in DSS. The LN ratio was significantly associated with DSS for all LN regions. The LN ratio thresholds resulting in the greatest difference in 5-year DSS were .07, .13, and .18 for neck, axillary, and inguinal regions, respectively, corresponding to 15, 8, and 6 LNs removed per positive lymph node. After adjustment for other clinicopathologic factors, the hazard ratios (HRs) were .53 (95% confidence interval [CI], .40 to .71) in the neck, .52 (95% CI, .42 to .65) in the axillary, and .47 (95% CI, .36 to .61) in the inguinal regions for patients who met the LN ratio threshold. Among the prognostic factors examined, LN ratio was the best indicator of the extent of LN dissection, regardless of anatomic nodal region. These data provide evidence-based guidelines for defining adequate LN dissections in melanoma patients. (c) 2009 American Cancer Society.

  20. Excitation-based and informational masking of a tonal signal in a four-tone masker.

    PubMed

    Leibold, Lori J; Hitchens, Jack J; Buss, Emily; Neff, Donna L

    2010-04-01

    This study examined contributions of peripheral excitation and informational masking to the variability in masking effectiveness observed across samples of multi-tonal maskers. Detection thresholds were measured for a 1000-Hz signal presented simultaneously with each of 25, four-tone masker samples. Using a two-interval, forced-choice adaptive task, thresholds were measured with each sample fixed throughout trial blocks for ten listeners. Average thresholds differed by as much as 26 dB across samples. An excitation-based model of partial loudness [Moore, B. C. J. et al. (1997). J. Audio Eng. Soc. 45, 224-237] was used to predict thresholds. These predictions accounted for a significant portion of variance in the data of several listeners, but no relation between the model and data was observed for many listeners. Moreover, substantial individual differences, on the order of 41 dB, were observed for some maskers. The largest individual differences were found for maskers predicted to produce minimal excitation-based masking. In subsequent conditions, one of five maskers was randomly presented in each interval. The difference in performance for samples with low versus high predicted thresholds was reduced in random compared to fixed conditions. These findings are consistent with a trading relation whereby informational masking is largest for conditions in which excitation-based masking is smallest.

  1. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR), so the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum search process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search over the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also yields a higher peak SNR.
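
    Since the abstract does not give the functional form of the two-variable threshold, the sketch below shows only the standard baseline it is designed to improve on: multi-level wavelet decomposition, soft thresholding at the universal threshold, and reconstruction (Python with PyWavelets; the synthetic A-scan is illustrative).

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=4):
          """Baseline soft-threshold denoising with the universal threshold."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
          denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[: len(signal)]

      def peak_snr_db(sig, ref):
          """Peak amplitude of `sig` relative to the RMS of its error vs. `ref`."""
          return 20 * np.log10(np.max(np.abs(sig)) / np.std(sig - ref))

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 2048)
      echo = np.exp(-((t - 0.4) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)   # defect echo
      noisy = echo + 0.3 * rng.standard_normal(t.size)
      clean = wavelet_denoise(noisy)
      print("peak SNR (dB) noisy:", round(peak_snr_db(noisy, echo), 1),
            "denoised:", round(peak_snr_db(clean, echo), 1))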

  2. Effect of difference in occlusal contact area of mandibular free-end edentulous area implants on periodontal mechanosensitive threshold of adjacent premolars.

    PubMed

    Terauchi, Rie; Arai, Korenori; Tanaka, Masahiro; Kawazoe, Takayoshi; Baba, Shunsuke

    2015-01-01

    Implant treatment is believed to cause minimal invasion of remaining teeth. However, few studies have examined teeth adjacent to an implant region. Therefore, this study investigated the effect of the occlusal contact size of implants on the periodontal mechanosensitive threshold of adjacent premolars. A cross-sectional study design was adopted. The setting was the Department of Oral Implantology, Osaka Dental University, where patients underwent implant treatment in the mandibular free-end edentulous area. The study population comprised 87 patients (109 teeth) who underwent follow-up observation for at least 3 years following implant superstructure placement. Age, sex, duration following superstructure placement, presence or absence of dental pulp, occlusal contact area, and periodontal mechanosensitive threshold were considered as variables. The occlusal contact area was measured using Blue Silicone® and Bite Eye BE-I®. Periodontal mechanosensitive thresholds were measured using von Frey hairs. As a quantitative variable for the periodontal mechanosensitive threshold, we divided subjects into two groups: normal (≤5 g) and high (≥5.1 g). For statistical analysis, we compared the sensation thresholds of the two groups using the chi-square test for categorical data and the Mann-Whitney U test for continuous data. For variables in which a significant difference was noted, we calculated the odds ratio (95 % confidence interval) and the effective dose. There were 93 teeth in the normal group and 16 teeth in the high group based on the periodontal mechanosensitive threshold. Comparison of the two groups indicated no significant differences associated with age, sex, duration following superstructure placement, or presence or absence of dental pulp. A significant difference was noted with regard to occlusal contact area, with several high-group subjects belonging to the small-contact group (odds ratio: 4.75 [1.42-15.87]; effective dose: 0.29). The results of this study suggest an association between implant occlusal contact area and the periodontal mechanosensitive threshold of adjacent premolars. Smaller occlusal contact areas resulted in an increased threshold. It appears that prosthodontic treatment should aim not only to improve occlusal function but also to maintain oromandibular function with regard to the preservation of remaining teeth.

  3. Use of high-frequency peak in spectral analysis of heart rate increment to improve screening of obstructive sleep apnoea.

    PubMed

    Poupard, Laurent; Court-Fortune, Isabelle; Pichot, Vincent; Chouchou, Florian; Barthélémy, Jean-Claude; Roche, Frédéric

    2011-12-01

    Several studies have correlated the ratio of the very low frequency power spectral density of heart rate increment (%VLFI) with obstructive sleep apnoea syndrome (OSAS). However, patients with impaired heart rate variability may exhibit large variations in the heart rate increment (HRI) spectral pattern, which can alter the screening accuracy of the method. To overcome this limitation, the present study uses the high-frequency increment (HFI) peak in the HRI spectrum, which corresponds to the respiratory influence on RR variations over the frequency range 0.2 to 0.4 Hz. We evaluated 288 consecutive patients referred for snoring, observed nocturnal breathing cessation and/or daytime sleepiness. Patients were classified as OSAS if their apnoea plus hypopnoea index (AHI) during polysomnography exceeded 15 events per hour. Synchronized electrocardiogram Holter monitoring allowed HRI analysis. Using a %VLFI threshold >2.4% for identifying the presence of OSAS, sensitivity for OSAS was 74.9%, specificity 51%, positive predictive value 54.9% and negative predictive value 71.7% (33 false negative subjects). Using both the %VLFI threshold >2.4% and an HFI peak position >0.4 Hz, negative predictive value increased to 78.2% while maintaining specificity at 50.6%. Among 11 subjects with %VLFI <2.4% and HFI peak >0.4 Hz, nine demonstrated moderate to severe OSAS (AHI >30). HFI represents a minimal physiological criterion for applying %VLFI by ensuring that heart rate variations are band frequency limited.

  4. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis; Negri, Jacquelyn; Kean, Jason

    2016-04-01

    Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations treat rainfall as a single independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, direct calculation of the rainfall rates that will result in a given likelihood, and calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially-explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flows.
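
    A minimal sketch of the combined idea described above, with hypothetical coefficients and variable names: rainfall intensity multiplies each terrain variable inside a logistic model, so the intensity that produces a chosen debris-flow likelihood can be solved for directly.

        # Hedged sketch of the combined approach described above: peak rainfall
        # intensity I multiplies each terrain variable inside a logistic model, so the
        # intensity that yields a chosen likelihood can be solved for directly.
        # Coefficients (b0..b3) and the variable names are hypothetical placeholders.
        import numpy as np

        def likelihood(I, steepness, burn_severity, soil, b=(-3.6, 0.4, 0.7, 0.2)):
            b0, b1, b2, b3 = b
            z = b0 + (b1 * steepness + b2 * burn_severity + b3 * soil) * I
            return 1.0 / (1.0 + np.exp(-z))

        def intensity_threshold(p, steepness, burn_severity, soil, b=(-3.6, 0.4, 0.7, 0.2)):
            # Invert the logistic model for the rainfall intensity giving likelihood p.
            b0, b1, b2, b3 = b
            return (np.log(p / (1.0 - p)) - b0) / (b1 * steepness + b2 * burn_severity + b3 * soil)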

  5. Monitoring Start of Season in Alaska

    NASA Astrophysics Data System (ADS)

    Robin, J.; Dubayah, R.; Sparrow, E.; Levine, E.

    2006-12-01

    In biomes that have distinct winter seasons, start of spring phenological events, specifically timing of budburst and green-up of leaves, coincides with transpiration. Seasons leave annual signatures that reflect the dynamic nature of the hydrologic cycle and link the different spheres of the Earth system. This paper evaluates whether continuity between AVHRR and MODIS normalized difference vegetation index (NDVI) is achievable for monitoring land surface phenology, specifically start of season (SOS), in Alaska. Additionally, two thresholds, one based on NDVI and the other on accumulated growing degree-days (GDD), are compared to determine which most accurately predicts SOS for Fairbanks. Ratio of maximum greenness at SOS was computed from biweekly AVHRR and MODIS composites for 2001 through 2004 for Anchorage and Fairbanks regions. SOS dates were determined from annual green-up observations made by GLOBE students. Results showed that different processing as well as spectral characteristics of each sensor restrict continuity between the two datasets. MODIS values were consistently higher and had less inter-annual variability during the height of the growing season than corresponding AVHRR values. Furthermore, a threshold of 131-175 accumulated GDD was a better predictor of SOS for Fairbanks than a NDVI threshold applied to AVHRR and MODIS datasets. The NDVI threshold was developed from biweekly AVHRR composites from 1982 through 2004 and corresponding annual green-up observations at University of Alaska-Fairbanks (UAF). The GDD threshold was developed from 20+ years of historic daily mean air temperature data and the same green-up observations. SOS dates computed with the GDD threshold most closely resembled actual green-up dates observed by GLOBE students and UAF researchers. Overall, biweekly composites and effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska.
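
    A minimal sketch of the accumulated growing degree-day (GDD) threshold described above; the 0 °C base temperature and the start of accumulation are assumptions, and the 131 GDD value is the lower end of the reported range.

        # Hedged sketch: accumulate growing degree-days (GDD) from daily mean air
        # temperature and report the first day the running total reaches the threshold.
        # The 0 deg C base temperature and the start date of accumulation are assumptions.
        def start_of_season(daily_mean_temp_c, gdd_threshold=131.0, base_c=0.0):
            total = 0.0
            for day, t in enumerate(daily_mean_temp_c, start=1):
                total += max(t - base_c, 0.0)
                if total >= gdd_threshold:
                    return day        # day-of-record when the threshold is reached
            return None               # threshold never reached in this record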

  6. Forecasting the probability of future groundwater levels declining below specified low thresholds in the conterminous U.S.

    USGS Publications Warehouse

    Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse

    2017-01-01

    We present a logistic regression approach for forecasting the probability of future groundwater levels declining or maintaining below specific groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.

  7. Linking Remotely Sensed Aerosol Types to Their Chemical Composition

    NASA Technical Reports Server (NTRS)

    Dawson, Kyle William; Kacenelenbogen, Meloe S.; Johnson, Matthew S.; Burton, Sharon P.; Hostetler, Chris A.; Meskhidze, Nicholas

    2016-01-01

    Aerosol types measured during the Ship-Aircraft Bio-Optical Research (SABOR) experiment are related to GEOS-Chem model chemical composition. The application of this procedure to link model chemical components to aerosol type is desirable for understanding aerosol evolution over time. The Mahalanobis distance (DM) statistic is used to cluster model groupings of five chemical components (organic carbon, black carbon, sea salt, dust and sulfate) in a way analogous to the methods used by Burton et al. [2012] and Russell et al. [2014]. First, model-to-measurement evaluation is performed by collocating vertically resolved aerosol extinction from SABOR High Spectral Resolution LiDAR (HSRL) to the GEOS-Chem nested high-resolution data. Comparisons of modeled-to-measured aerosol extinction are shown to be within 35% +/- 14%. Second, the model chemical components are combined into five variables used to calculate the DM and the cluster means and covariances for each HSRL-retrieved aerosol type. The layer variables from the model are aerosol optical depth (AOD) ratios of (i) sea salt and (ii) dust to total AOD, mass ratios of (iii) total carbon (i.e. sum of organic and black carbon) to the sum of total carbon and sulfate, (iv) organic carbon to black carbon, and (v) the natural log of the aerosol-to-molecular extinction ratio. Third, the layer variables and at most five out of twenty SABOR flights are used to form the pre-specified clusters for calculating DM and to assign an aerosol type. After determining the pre-specified clusters, model aerosol types are produced for the entire vertically resolved GEOS-Chem nested domain over the United States and the model chemical component distributions relating to each type are recorded. Resulting aerosol types are Dust/Dusty Mix, Maritime, Smoke, Urban and Fresh Smoke (separated into 'dark' and 'light' by a threshold of the organic to black carbon ratio). Model-calculated DM not belonging to a specific type (i.e. not meeting a threshold probability) is termed an outlier, and those DM values that can belong to multiple types (i.e. showing weak probability of belonging to a specific cluster) are termed Overlap. MODIS active fires are overlaid on the model domain to qualitatively evaluate the model-predicted Smoke aerosol types.

  8. Linking remotely sensed aerosol types to their chemical composition

    NASA Astrophysics Data System (ADS)

    Dawson, K. W.; Kacenelenbogen, M. S.; Johnson, M. S.; Burton, S. P.; Hostetler, C. A.; Meskhidze, N.

    2016-12-01

    Aerosol types measured during the Ship-Aircraft Bio-Optical Research (SABOR) experiment are related to GEOS-Chem model chemical composition. The application of this procedure to link model chemical components to aerosol type is desirable for understanding aerosol evolution over time. The Mahalanobis distance (DM) statistic is used to cluster model groupings of five chemical components (organic carbon, black carbon, sea salt, dust and sulfate) in a way analogous to the methods used by Burton et al. [2012] and Russell et al. [2014]. First, model-to-measurement evaluation is performed by collocating vertically resolved aerosol extinction from SABOR High Spectral Resolution LiDAR (HSRL) to the GEOS-Chem nested high-resolution data. Comparisons of modeled-to-measured aerosol extinction are shown to be within 35% ± 14%. Second, the model chemical components are combined into five variables used to calculate the DM and the cluster means and covariances for each HSRL-retrieved aerosol type. The layer variables from the model are aerosol optical depth (AOD) ratios of (i) sea salt and (ii) dust to total AOD, mass ratios of (iii) total carbon (i.e. sum of organic and black carbon) to the sum of total carbon and sulfate, (iv) organic carbon to black carbon, and (v) the natural log of the aerosol-to-molecular extinction ratio. Third, the layer variables and at most five out of twenty SABOR flights are used to form the pre-specified clusters for calculating DM and to assign an aerosol type. After determining the pre-specified clusters, model aerosol types are produced for the entire vertically resolved GEOS-Chem nested domain over the United States and the model chemical component distributions relating to each type are recorded. Resulting aerosol types are Dust/Dusty Mix, Maritime, Smoke, Urban and Fresh Smoke (separated into `dark' and `light' by a threshold of the organic to black carbon ratio). Model-calculated DM not belonging to a specific type (i.e. not meeting a threshold probability) is termed an outlier, and those DM values that can belong to multiple types (i.e. showing weak probability of belonging to a specific cluster) are termed Overlap. MODIS active fires are overlaid on the model domain to qualitatively evaluate the model-predicted Smoke aerosol types.
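
    The following minimal sketch (not the authors' code) illustrates Mahalanobis-distance typing against pre-specified clusters; the outlier cutoff and cluster inputs are placeholders.

        # Hedged sketch of Mahalanobis-distance typing: each pre-specified cluster is
        # summarized by the mean and covariance of its five layer variables, and a new
        # layer is assigned to the nearest cluster (or "Outlier" above a cutoff).
        # The cutoff value and the cluster construction here are illustrative only.
        import numpy as np

        def mahalanobis(x, mean, cov):
            d = x - mean
            return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

        def classify(x, clusters, outlier_cutoff=5.0):
            # clusters: {type_name: (mean_vector, covariance_matrix)} built from training layers
            dists = {name: mahalanobis(x, m, c) for name, (m, c) in clusters.items()}
            best = min(dists, key=dists.get)
            return "Outlier" if dists[best] > outlier_cutoff else best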

  9. Auditory sensitivity of seals and sea lions in complex listening scenarios.

    PubMed

    Cunningham, Kane A; Southall, Brandon L; Reichmuth, Colleen

    2014-12-01

    Standard audiometric data, such as audiograms and critical ratios, are often used to inform marine mammal noise-exposure criteria. However, these measurements are obtained using simple, artificial stimuli (i.e., pure tones and flat-spectrum noise), while natural sounds typically have more complex structure. In this study, detection thresholds for complex signals were measured in (I) quiet and (II) masked conditions for one California sea lion (Zalophus californianus) and one harbor seal (Phoca vitulina). In Experiment I, detection thresholds in quiet conditions were obtained for complex signals designed to isolate three common features of natural sounds: Frequency modulation, amplitude modulation, and harmonic structure. In Experiment II, detection thresholds were obtained for the same complex signals embedded in two types of masking noise: Synthetic flat-spectrum noise and recorded shipping noise. To evaluate how accurately standard hearing data predict detection of complex sounds, the results of Experiments I and II were compared to predictions based on subject audiograms and critical ratios combined with a basic hearing model. Both subjects exhibited greater-than-predicted sensitivity to harmonic signals in quiet and masked conditions, as well as to frequency-modulated signals in masked conditions. These differences indicate that the complex features of naturally occurring sounds enhance detectability relative to simple stimuli.

  10. Percolation, phase separation, and gelation in fluids and mixtures of spheres and rods

    NASA Astrophysics Data System (ADS)

    Jadrich, Ryan; Schweizer, Kenneth S.

    2011-12-01

    The relationship between kinetic arrest, connectivity percolation, structure and phase separation in protein, nanoparticle, and colloidal suspensions is a rich and complex problem. Using a combination of integral equation theory, connectivity percolation methods, naïve mode coupling theory, and the activated dynamics nonlinear Langevin equation approach, we study this problem for isotropic one-component fluids of spheres and variable aspect ratio rigid rods, and also percolation in rod-sphere mixtures. The key control parameters are interparticle attraction strength and its (short) spatial range, total packing fraction, and mixture composition. For spherical particles, formation of a homogeneous one-phase kinetically stable and percolated physical gel is predicted to be possible, but depends on non-universal factors. On the other hand, the dynamic crossover to activated dynamics and physical bond formation, which signals discrete cluster formation below the percolation threshold, almost always occurs in the one phase region. Rods more easily gel in the homogeneous isotropic regime, but whether a percolation or kinetic arrest boundary is reached first upon increasing interparticle attraction depends sensitively on packing fraction, rod aspect ratio and attraction range. Overall, the connectivity percolation threshold is much more sensitive to attraction range than either the kinetic arrest or phase separation boundaries. Our results appear to be qualitatively consistent with recent experiments on polymer-colloid depletion systems and brush mediated attractive nanoparticle suspensions.

  11. Female turtles from hot nests: is it duration of incubation or proportion of development at high temperatures that matters?

    PubMed

    Georges, Arthur

    1989-11-01

    Mean daily temperature in natural nests of freshwater turtles with temperature-dependent sex determination is known to be a poor predictor of hatchling sex ratios when nest temperatures fluctuate. To account for this, a model was developed on the assumption that females will emerge from eggs when more than half of embryonic development occurs above the threshold temperature for sex determination rather than from eggs that spend more than half their time above the threshold. The model is consistent with previously published data and in particular explains the phenomenon whereby the mean temperature that best distinguishes between male and female nests decreases with increasing variability in nest temperature. The model, if verified by controlled experiments, has important implications for our understanding of temperature-dependent sex determination in natural nests. Both mean nest temperature and "hours spent above the threshold" will be poor predictors of hatchling sex ratios. Studies designed to investigate latitudinal trends and inter-specific differences in the threshold temperature will need to consider latitudinal and inter-specific variation in the magnitude of diel fluctuations in nest temperature, and variation in factors influencing the magnitude of those fluctuations, such as nest depth. Furthermore, any factor that modifies the relationship between developmental rate and temperature can be expected to influence hatchling sex ratios in natural nests, especially when nest temperatures are close to the threshold.
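
    A small numerical illustration of the distinction drawn above, under assumed constants and a linear developmental-rate function: for a fluctuating nest, the fraction of development completed above the threshold exceeds the fraction of time spent above it.

        # Hedged numerical illustration of the distinction drawn above: for a
        # sinusoidally fluctuating nest, compare the fraction of *time* spent above the
        # sex-determining threshold with the fraction of *development* completed above
        # it. The linear developmental-rate function and all constants are assumptions.
        import numpy as np

        hours = np.arange(0, 24, 0.1)
        temp = 28.0 + 4.0 * np.sin(2 * np.pi * hours / 24)      # daily cycle, mean 28 C
        threshold_c = 29.0                                       # pivotal temperature
        dev_rate = np.clip(temp - 15.0, 0.0, None)               # rate ~ temp above developmental zero

        above = temp > threshold_c
        time_fraction = above.mean()
        dev_fraction = dev_rate[above].sum() / dev_rate.sum()
        print(f"time above threshold: {time_fraction:.2f}, development above: {dev_fraction:.2f}")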

  12. Prediction of Individual Social-Demographic Role Based on Travel Behavior Variability Using Long-Term GPS Data

    DOE PAGES

    Zhu, Lei; Gonder, Jeffrey; Lin, Lei

    2017-08-16

    With the development of and advances in smartphones and global positioning system (GPS) devices, travelers' long-term travel behaviors have become feasible to obtain. This study investigates the pattern of individual travel behavior and its correlation with social-demographic features. For different social-demographic groups (e.g., full-time employees and students), individual travel behavior may have specific temporal-spatial-mobile constraints. The study first extracts the home-based tours, including Home-to-Home and Home-to-Non-Home, from long-term raw GPS data. The travel behavior pattern is then delineated by home-based tour features, such as departure time, destination location entropy, travel time, and driving time ratio. The travel behavior variability describes the variances of travelers' activity behavior features over an extended period. After that, the variability pattern of an individual's travel behavior is used to estimate the individual's social-demographic information, such as social-demographic role, by a supervised learning approach, the support vector machine. In this study, a long-term (18-month) recorded GPS data set from the Puget Sound Regional Council is used. The experimental results are promising. In conclusion, the sensitivity analysis shows that as the number-of-tours threshold increases, the variability of most travel behavior features converges, while the prediction performance may not change for the fixed test data.
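
    A minimal sketch of the general supervised-learning step described above, with placeholder features and labels rather than the study's data: a support vector machine classifies a traveler's social-demographic role from the variability of home-based-tour features.

        # Hedged sketch: classify a traveler's social-demographic role from the
        # variability of home-based-tour features using a support vector machine.
        # Feature construction, labels, and all values are hypothetical placeholders.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # X: one row per traveler, e.g. the standard deviation of departure time,
        # destination location entropy, travel time, and driving-time ratio across tours.
        X = np.random.rand(60, 4)                 # placeholder feature matrix
        y = np.random.randint(0, 2, size=60)      # placeholder roles (0 = worker, 1 = student)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(clf, X, y, cv=5).mean())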

  13. Prediction of Individual Social-Demographic Role Based on Travel Behavior Variability Using Long-Term GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Gonder, Jeffrey; Lin, Lei

    With the development of and advances in smartphones and global positioning system (GPS) devices, travelers' long-term travel behaviors have become feasible to obtain. This study investigates the pattern of individual travel behavior and its correlation with social-demographic features. For different social-demographic groups (e.g., full-time employees and students), individual travel behavior may have specific temporal-spatial-mobile constraints. The study first extracts the home-based tours, including Home-to-Home and Home-to-Non-Home, from long-term raw GPS data. The travel behavior pattern is then delineated by home-based tour features, such as departure time, destination location entropy, travel time, and driving time ratio. The travel behavior variability describes the variances of travelers' activity behavior features over an extended period. After that, the variability pattern of an individual's travel behavior is used to estimate the individual's social-demographic information, such as social-demographic role, by a supervised learning approach, the support vector machine. In this study, a long-term (18-month) recorded GPS data set from the Puget Sound Regional Council is used. The experimental results are promising. In conclusion, the sensitivity analysis shows that as the number-of-tours threshold increases, the variability of most travel behavior features converges, while the prediction performance may not change for the fixed test data.

  14. Relations that affect the probability and prediction of nitrate concentration in private wells in the glacial aquifer system in the United States

    USGS Publications Warehouse

    Warner, Kelly L.; Arnold, Terri L.

    2010-01-01

    Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.

  15. Predicting reactivity threshold in children with anaphylaxis to peanut.

    PubMed

    Reier-Nilsen, T; Michelsen, M M; Lødrup Carlsen, K C; Carlsen, K-H; Mowinckel, P; Nygaard, U C; Namork, E; Borres, M P; Håland, G

    2018-04-01

    Peanut allergy necessitates dietary restrictions, preferably individualized by determining reactivity threshold through an oral food challenge (OFC). However, risk of systemic reactions often precludes OFC in children with severe peanut allergy. We aimed to determine whether clinical and/or immunological characteristics were associated with reactivity threshold in children with anaphylaxis to peanut and, secondarily, to investigate whether these characteristics were associated with severity of the allergic reaction during OFC. A double-blinded placebo-controlled food challenge (DBPCFC) with peanut was performed in 96 5- to 15-year-old children with a history of severe allergic reactions to peanut and/or sensitization to peanut (skin prick test [SPT] ≥3 mm or specific immunoglobulin E [s-IgE] ≥0.35 kUA/L). Investigations preceding the DBPCFC included a structured interview, SPT, lung function measurements, serological immunology assessment (IgE, IgG and IgG4), basophil activation test (BAT) and conjunctival allergen provocation test (CAPT). International standards were used to define anaphylaxis and grade the allergic reaction during OFC. During DBPCFC, all 96 children (median age 9.3 years, range 5.1-15.2) reacted with anaphylaxis (moderate objective symptoms from at least two organ systems). Basophil activation (CD63+ basophils ≥15%), peanut SPT and the ratio of peanut s-IgE/total IgE were significantly associated with reactivity threshold and lowest observed adverse events level (LOAEL) (all P < .04). Basophil activation best predicted very low threshold level (<3 mg of peanut protein), with an optimal cut-off of 75.8% giving a 93.5% negative predictive value. None of the characteristics were significantly associated with the severity of allergic reaction. In children with anaphylaxis to peanut, basophil activation, peanut SPT and the ratio of peanut s-IgE/total IgE were associated with reactivity threshold and LOAEL, but not with allergic reaction severity. © 2017 John Wiley & Sons Ltd.

  16. Discussion about different cut-off values of conventional hamstring-to-quadriceps ratio used in hamstring injury prediction among professional male football players

    PubMed Central

    Grygorowicz, Monika; Michałowska, Martyna; Walczak, Tomasz; Owen, Adam; Grabski, Jakub Krzysztof; Pyda, Andrzej; Piontek, Tomasz; Kotwicki, Tomasz

    2017-01-01

    Objective To measure the sensitivity and specificity of different cut-off values for the isokinetic Hcon/Qcon ratio in order to improve the capacity to evaluate (retrospectively) the injury of hamstring muscles in professional soccer players screened with knee isokinetic tests. Design Retrospective study. Methods Medical and biomechanical data of professional football players playing for the same team for at least one season between 2010 and 2016 were analysed. Hamstring strain injury cases and the reports generated via isokinetic testing were investigated. Isokinetic concentric (con) hamstring (H) and quadriceps (Q) absolute strength, together with the Hcon/Qcon ratio, were examined for the injured versus uninjured limbs among injured players, and for the injured and non-injured players. A 2 × 2 contingency table was used for comparing variables: predicted injured or predicted uninjured with actual injured or actual uninjured. Sensitivity, specificity, accuracy, positive and negative predictive values, and positive and negative likelihood ratios were calculated for three different cut-off values (0.47 vs. 0.6 vs. 0.658) to compare the discriminative power of the isokinetic test, whilst examining which value of the Hcon/Qcon ratio best indicates a player's predisposition to injury. McNemar's chi2 test with Yates's correction was used to determine agreement between the tests. PQStat software was used for all statistical analysis, and an alpha level of p < 0.05 was used for all statistical comparisons. Results 340 isokinetic test reports on both limbs of 66 professional soccer players were analysed. Eleven players suffered hamstring injuries during the analysed period. None of these players sustained a recurrence of hamstring injury. One player sustained a hamstring strain injury on both legs, thus the total number of injuries was 12. Application of different cut-off values for Hcon/Qcon significantly affected the sensitivity and specificity of the isokinetic test used as a tool for muscle injury detection. The use of 0.47 of Hcon/Qcon as a discriminative value resulted in significantly lower sensitivity when compared to the 0.658 threshold (sensitivity of 16.7% vs. 91.7%, respectively; t = 6.125, p = 0.0133). Calculated values of specificity (when the three different cut-offs were applied) were also significantly different. A threshold of 0.6 for Hcon/Qcon resulted in significantly lower specificity compared to the 0.47 value (specificity of 46.9% vs. 94.5%, respectively; t = 153.0, p < 0.0001), and significantly higher specificity when compared to 0.658 (specificity of 46.9% vs. 24.1%, respectively; t = 229.0, p < 0.0001). Conclusion The use of different cut-off values for Hcon/Qcon significantly affected the sensitivity and specificity of isokinetic testing. The interpretation of the usefulness of the isokinetic test as a screening tool in a group of male professional football players to predict hamstring injury occurrence within the next 12 months might therefore be significantly biased by the different threshold values of Hcon/Qcon. Using one “normative” value as a cut-off (e.g. 0.47 or 0.60, or 0.658) to assign soccer players (or not) to the group with a higher risk of knee injury might result in biased outcomes due to the natural strength asymmetry that is observed within the group of soccer players. PMID:29216241

  17. Discussion about different cut-off values of conventional hamstring-to-quadriceps ratio used in hamstring injury prediction among professional male football players.

    PubMed

    Grygorowicz, Monika; Michałowska, Martyna; Walczak, Tomasz; Owen, Adam; Grabski, Jakub Krzysztof; Pyda, Andrzej; Piontek, Tomasz; Kotwicki, Tomasz

    2017-01-01

    To measure the sensitivity and specificity of different cut-off values for the isokinetic Hcon/Qcon ratio in order to improve the capacity to evaluate (retrospectively) the injury of hamstring muscles in professional soccer players screened with knee isokinetic tests. Retrospective study. Medical and biomechanical data of professional football players playing for the same team for at least one season between 2010 and 2016 were analysed. Hamstring strain injury cases and the reports generated via isokinetic testing were investigated. Isokinetic concentric (con) hamstring (H) and quadriceps (Q) absolute strength, together with the Hcon/Qcon ratio, were examined for the injured versus uninjured limbs among injured players, and for the injured and non-injured players. A 2 × 2 contingency table was used for comparing variables: predicted injured or predicted uninjured with actual injured or actual uninjured. Sensitivity, specificity, accuracy, positive and negative predictive values, and positive and negative likelihood ratios were calculated for three different cut-off values (0.47 vs. 0.6 vs. 0.658) to compare the discriminative power of the isokinetic test, whilst examining which value of the Hcon/Qcon ratio best indicates a player's predisposition to injury. McNemar's chi2 test with Yates's correction was used to determine agreement between the tests. PQStat software was used for all statistical analysis, and an alpha level of p < 0.05 was used for all statistical comparisons. 340 isokinetic test reports on both limbs of 66 professional soccer players were analysed. Eleven players suffered hamstring injuries during the analysed period. None of these players sustained a recurrence of hamstring injury. One player sustained a hamstring strain injury on both legs, thus the total number of injuries was 12. Application of different cut-off values for Hcon/Qcon significantly affected the sensitivity and specificity of the isokinetic test used as a tool for muscle injury detection. The use of 0.47 of Hcon/Qcon as a discriminative value resulted in significantly lower sensitivity when compared to the 0.658 threshold (sensitivity of 16.7% vs. 91.7%, respectively; t = 6.125, p = 0.0133). Calculated values of specificity (when the three different cut-offs were applied) were also significantly different. A threshold of 0.6 for Hcon/Qcon resulted in significantly lower specificity compared to the 0.47 value (specificity of 46.9% vs. 94.5%, respectively; t = 153.0, p < 0.0001), and significantly higher specificity when compared to 0.658 (specificity of 46.9% vs. 24.1%, respectively; t = 229.0, p < 0.0001). The use of different cut-off values for Hcon/Qcon significantly affected the sensitivity and specificity of isokinetic testing. The interpretation of the usefulness of the isokinetic test as a screening tool in a group of male professional football players to predict hamstring injury occurrence within the next 12 months might therefore be significantly biased by the different threshold values of Hcon/Qcon. Using one "normative" value as a cut-off (e.g. 0.47 or 0.60, or 0.658) to assign soccer players (or not) to the group with a higher risk of knee injury might result in biased outcomes due to the natural strength asymmetry that is observed within the group of soccer players.
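
    For reference, a minimal sketch of the screening metrics computed from a 2 × 2 contingency table of predicted versus actual injury; the counts below are placeholders, not the study's data.

        # Hedged sketch: screening metrics from a 2 x 2 contingency table of predicted
        # vs. actual injury. The counts passed in are illustrative placeholders only.
        def screening_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return {
                "sensitivity": sens,
                "specificity": spec,
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
                "lr_plus": sens / (1 - spec),
                "lr_minus": (1 - sens) / spec,
            }

        print(screening_metrics(tp=11, fp=60, fn=1, tn=268))   # illustrative counts only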

  18. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and the proportion of urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.

  19. Discharge variability and bedrock river incision on the Hawaiian island of Kaua'i

    NASA Astrophysics Data System (ADS)

    Huppert, K.; Deal, E.; Perron, J. T.; Ferrier, K.; Braun, J.

    2017-12-01

    Bedrock river incision occurs during floods that generate sufficient shear stress to strip riverbeds of sediment cover and erode underlying bedrock. Thresholds for incision can prevent erosion at low flows and slow down erosion at higher flows that do generate excess shear stress. Because discharge distributions typically display power-law tails, with non-negligible frequencies of floods much greater than the mean, models incorporating stochastic discharge and incision thresholds predict that discharge variability can sometimes have greater effects on long-term incision rates than mean discharge. This occurs when the commonly observed inverse scalings between mean discharge and discharge variability are weak or when incision thresholds are high. Because the effects of thresholds and discharge variability have only been documented in a few locations, their influence on long-term river incision rates remains uncertain. The Hawaiian island of Kaua'i provides an ideal natural laboratory to evaluate the effects of discharge variability and thresholds on bedrock river incision because it has one of Earth's steepest spatial gradients in mean annual rainfall and it also experiences dramatic spatial variations in rainfall and discharge variability, spanning a wide range of the conditions reported on Earth. Kaua'i otherwise has minimal variations in lithology, vertical motion, and other factors that can influence erosion. River incision rates averaged over 1.5 - 4.5 Myr timescales can be estimated along the lengths of Kauaian channels from the depths of river canyons and lava flow ages. We characterize rainfall and discharge variability on Kaua'i using records from an extensive network of rain and stream gauges spanning the past century. We use these characterizations to model long-term bedrock river incision along Kauaian channels with a threshold-dependent incision law, modulated by site-specific discharge-channel width scalings. Our comparisons between modeled and observed erosion rates suggest that variations in river incision rates on Kaua'i are dominated by variations in mean rainfall and discharge, rather than by differences in storminess across the island. We explore the implications of this result for the threshold dependence of river incision across Earth's varied climates.

  20. Long-term follow-up after near-infrared spectroscopy coronary imaging: Insights from the lipid cORe plaque association with CLinical events (ORACLE-NIRS) registry.

    PubMed

    Danek, Barbara Anna; Karatasakis, Aris; Karacsonyi, Judit; Alame, Aya; Resendes, Erica; Kalsaria, Pratik; Nguyen-Trong, Phuong-Khanh J; Rangan, Bavana V; Roesle, Michele; Abdullah, Shuaib; Banerjee, Subhash; Brilakis, Emmanouil S

    Coronary lipid core plaque may be associated with the incidence of subsequent cardiovascular events. We analyzed outcomes of 239 patients who underwent near-infrared spectroscopy (NIRS) coronary imaging between 2009 and 2011. Multivariable Cox regression was used to identify variables independently associated with the incidence of major adverse cardiovascular events (MACE; cardiac mortality, acute coronary syndromes (ACS), stroke, and unplanned revascularization) during follow-up. Mean patient age was 64 ± 9 years, 99% were men, and 50% were diabetic, presenting with stable coronary artery disease (61%) or an acute coronary syndrome (ACS, 39%). Target vessel pre-stenting median lipid core burden index (LCBI) was 88 [interquartile range, IQR 50-130]. Median LCBI in non-target vessels was 57 [IQR 26-94]. Median follow-up was 5.3 years. The 5-year MACE rate was 37.5% (cardiac mortality was 15.0%). On multivariable analysis the following variables were associated with MACE: diabetes mellitus, prior percutaneous coronary intervention performed at index angiography, and non-target vessel LCBI. A non-target vessel LCBI of 77 was determined using receiver-operating characteristic curve analysis to be a threshold for prediction of MACE in our cohort. The adjusted hazard ratio (HR) for non-target vessel LCBI ≥77 was 14.05 (95% confidence interval (CI) 2.47-133.51, p = 0.002). The 5-year cumulative incidence of events in the above-threshold group was 58.0% vs. 13.1% in the below-threshold group. During long-term follow-up of patients who underwent NIRS imaging, high LCBI in a non-PCI target vessel was associated with increased incidence of MACE. Published by Elsevier Inc.
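
    A minimal sketch of one common way to derive such a cut-off from a receiver-operating characteristic (ROC) curve by maximizing Youden's J; the data are synthetic, not the registry's.

        # Hedged sketch: picking an LCBI cut-off from a receiver-operating
        # characteristic (ROC) curve by maximizing Youden's J (sensitivity +
        # specificity - 1). The data below are synthetic, not the registry's.
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        lcbi = np.concatenate([rng.normal(60, 25, 150), rng.normal(110, 30, 50)])
        mace = np.concatenate([np.zeros(150, dtype=int), np.ones(50, dtype=int)])

        fpr, tpr, thresholds = roc_curve(mace, lcbi)
        best = np.argmax(tpr - fpr)                 # index maximizing Youden's J statistic
        print("optimal LCBI cut-off:", thresholds[best])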

  1. Considering the filler network as a third phase in polymer/CNT nanocomposites to predict the tensile modulus using Hashin-Hansen model

    NASA Astrophysics Data System (ADS)

    Kim, Sanghoon; Jamalzadeh, Navid; Zare, Yasser; Hui, David; Rhee, Kyong Yop

    2018-07-01

    In this paper, a conventional Hashin-Hansen model is developed to analyze the tensile modulus of polymer/CNT nanocomposites above the percolation threshold. This model for composites containing dispersed particles utilizes the aspect ratio of the nanofiller (α), the number of nanotubes per unit area (N), the percolation threshold (φp) and the modulus of the filler network (EN), assuming that the filler network constitutes a third phase in the nanocomposites. The experimental results and the predictions agree well, verifying the proposed relations between the modulus and the other parameters in the Hashin-Hansen model. Moreover, large values of "α", "N" and "EN" result in an improved modulus of the polymer/CNT nanocomposites, while a low percolation threshold results in a high modulus.

  2. Wide brick tunnel randomization - an unequal allocation procedure that limits the imbalance in treatment totals.

    PubMed

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-04-30

    In open-label studies, partial predictability of permuted block randomization provides potential for selection bias. To lessen the selection bias in two-arm studies with equal allocation, a number of allocation procedures that limit the imbalance in treatment totals at a pre-specified level but do not require the exact balance at the ends of the blocks were developed. In studies with unequal allocation, however, the task of designing a randomization procedure that sets a pre-specified limit on imbalance in group totals is not resolved. Existing allocation procedures either do not preserve the allocation ratio at every allocation or do not include all allocation sequences that comply with the pre-specified imbalance threshold. Kuznetsova and Tymofyeyev described the brick tunnel randomization for studies with unequal allocation that preserves the allocation ratio at every step and, in the two-arm case, includes all sequences that satisfy the smallest possible imbalance threshold. This article introduces wide brick tunnel randomization for studies with unequal allocation that allows all allocation sequences with imbalance not exceeding any pre-specified threshold while preserving the allocation ratio at every step. In open-label studies, allowing a larger imbalance in treatment totals lowers selection bias because of the predictability of treatment assignments. The applications of the technique in two-arm and multi-arm open-label studies with unequal allocation are described. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Gene Expression Ratios Lead to Accurate and Translatable Predictors of DR5 Agonism across Multiple Tumor Lineages.

    PubMed

    Reddy, Anupama; Growney, Joseph D; Wilson, Nick S; Emery, Caroline M; Johnson, Jennifer A; Ward, Rebecca; Monaco, Kelli A; Korn, Joshua; Monahan, John E; Stump, Mark D; Mapa, Felipa A; Wilson, Christopher J; Steiger, Janine; Ledell, Jebediah; Rickles, Richard J; Myer, Vic E; Ettenberg, Seth A; Schlegel, Robert; Sellers, William R; Huet, Heather A; Lehár, Joseph

    2015-01-01

    Death Receptor 5 (DR5) agonists demonstrate anti-tumor activity in preclinical models but have yet to demonstrate robust clinical responses. A key limitation may be the lack of patient selection strategies to identify those most likely to respond to treatment. To overcome this limitation, we screened a DR5 agonist Nanobody across >600 cell lines representing 21 tumor lineages and assessed molecular features associated with response. High expression of DR5 and Casp8 were significantly associated with sensitivity, but their expression thresholds were difficult to translate due to low dynamic ranges. To address the translational challenge of establishing thresholds of gene expression, we developed a classifier based on ratios of genes that predicted response across lineages. The ratio classifier outperformed the DR5+Casp8 classifier, as well as standard approaches for feature selection and classification using genes, instead of ratios. This classifier was independently validated using 11 primary patient-derived pancreatic xenograft models showing perfect predictions as well as a striking linearity between prediction probability and anti-tumor response. A network analysis of the genes in the ratio classifier captured important biological relationships mediating drug response, specifically identifying key positive and negative regulators of DR5 mediated apoptosis, including DR5, CASP8, BID, cFLIP, XIAP and PEA15. Importantly, the ratio classifier shows translatability across gene expression platforms (from Affymetrix microarrays to RNA-seq) and across model systems (in vitro to in vivo). Our approach of using gene expression ratios presents a robust and novel method for constructing translatable biomarkers of compound response, which can also probe the underlying biology of treatment response.
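
    A minimal sketch of the ratio-feature idea described above, with hypothetical gene names, synthetic data, and a generic logistic-regression learner standing in for the authors' classifier: features are log-ratios of gene pairs rather than single-gene values.

        # Hedged sketch of a ratio-based classifier: candidate features are log-ratios
        # of gene pairs rather than single-gene expression values, which removes
        # platform-dependent scale and makes thresholds easier to transfer.
        # Gene names, data, and the logistic-regression learner are illustrative only.
        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        genes = ["DR5", "CASP8", "BID", "cFLIP", "XIAP", "PEA15"]
        rng = np.random.default_rng(1)
        expr = rng.lognormal(mean=2.0, sigma=0.5, size=(80, len(genes)))   # cell lines x genes
        response = rng.integers(0, 2, size=80)                             # sensitive vs. resistant

        pairs = list(combinations(range(len(genes)), 2))
        ratios = np.column_stack([np.log2(expr[:, i] / expr[:, j]) for i, j in pairs])
        print(cross_val_score(LogisticRegression(max_iter=1000), ratios, response, cv=5).mean())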

  4. Gene Expression Ratios Lead to Accurate and Translatable Predictors of DR5 Agonism across Multiple Tumor Lineages

    PubMed Central

    Reddy, Anupama; Growney, Joseph D.; Wilson, Nick S.; Emery, Caroline M.; Johnson, Jennifer A.; Ward, Rebecca; Monaco, Kelli A.; Korn, Joshua; Monahan, John E.; Stump, Mark D.; Mapa, Felipa A.; Wilson, Christopher J.; Steiger, Janine; Ledell, Jebediah; Rickles, Richard J.; Myer, Vic E.; Ettenberg, Seth A.; Schlegel, Robert; Sellers, William R.

    2015-01-01

    Death Receptor 5 (DR5) agonists demonstrate anti-tumor activity in preclinical models but have yet to demonstrate robust clinical responses. A key limitation may be the lack of patient selection strategies to identify those most likely to respond to treatment. To overcome this limitation, we screened a DR5 agonist Nanobody across >600 cell lines representing 21 tumor lineages and assessed molecular features associated with response. High expression of DR5 and Casp8 were significantly associated with sensitivity, but their expression thresholds were difficult to translate due to low dynamic ranges. To address the translational challenge of establishing thresholds of gene expression, we developed a classifier based on ratios of genes that predicted response across lineages. The ratio classifier outperformed the DR5+Casp8 classifier, as well as standard approaches for feature selection and classification using genes, instead of ratios. This classifier was independently validated using 11 primary patient-derived pancreatic xenograft models showing perfect predictions as well as a striking linearity between prediction probability and anti-tumor response. A network analysis of the genes in the ratio classifier captured important biological relationships mediating drug response, specifically identifying key positive and negative regulators of DR5 mediated apoptosis, including DR5, CASP8, BID, cFLIP, XIAP and PEA15. Importantly, the ratio classifier shows translatability across gene expression platforms (from Affymetrix microarrays to RNA-seq) and across model systems (in vitro to in vivo). Our approach of using gene expression ratios presents a robust and novel method for constructing translatable biomarkers of compound response, which can also probe the underlying biology of treatment response. PMID:26378449

  5. Predicting pain relief: Use of pre-surgical trigeminal nerve diffusion metrics in trigeminal neuralgia.

    PubMed

    Hung, Peter S-P; Chen, David Q; Davis, Karen D; Zhong, Jidan; Hodaie, Mojgan

    2017-01-01

    Trigeminal neuralgia (TN) is a chronic neuropathic facial pain disorder that commonly responds to surgery. A proportion of patients, however, do not benefit and suffer ongoing pain. There are currently no imaging tools that permit the prediction of treatment response. To address this paucity, we used diffusion tensor imaging (DTI) to determine whether pre-surgical trigeminal nerve microstructural diffusivities can prognosticate response to TN treatment. In 31 TN patients and 16 healthy controls, multi-tensor tractography was used to extract DTI-derived metrics (axial diffusivity [AD], radial diffusivity [RD], mean diffusivity [MD], and fractional anisotropy [FA]) from the cisternal segment, root entry zone and pontine segment of trigeminal nerves for false discovery rate-corrected Student's t-tests. Ipsilateral diffusivities were bootstrap resampled to visualize group-level diffusivity thresholds of long-term response. To obtain an individual-level statistical classifier of surgical response, we conducted discriminant function analysis (DFA) with the type of surgery chosen alongside ipsilateral measurements and ipsilateral/contralateral ratios of AD and RD from all regions of interest as prediction variables. Abnormal diffusivity in the trigeminal pontine fibers, demonstrated by increased AD, highlighted non-responders (n = 14) compared to controls. Bootstrap resampling revealed three ipsilateral diffusivity thresholds of response (pontine AD, MD, and cisternal FA) separating 85% of non-responders from responders. DFA produced an 83.9% (71.0% using leave-one-out cross-validation) accurate prognosticator of response that successfully identified 12/14 non-responders. Our study demonstrates that pre-surgical DTI metrics can serve as a highly predictive, individualized tool to prognosticate surgical response. We further highlight abnormal pontine segment diffusivities as key features of treatment non-response and confirm the axiom that central pain does not commonly benefit from peripheral treatments.
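
    A minimal sketch of a discriminant-function classifier with leave-one-out cross-validation, as used above; the features and labels are synthetic, not the study's diffusivities.

        # Hedged sketch: a linear discriminant classifier of surgical response from
        # pre-surgical nerve diffusivities, validated with leave-one-out
        # cross-validation as described above. Features and labels are synthetic.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(31, 6))            # e.g. ipsilateral AD/RD values and their ratios
        y = rng.integers(0, 2, size=31)         # 1 = responder, 0 = non-responder

        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.3f}")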

  6. EMG biofeedback: the effects of CRF, FR, VR, FI, and VI schedules of reinforcement on the acquisition and extinction of increases in forearm muscle tension.

    PubMed

    Cohen, S L; Richardson, J; Klebez, J; Febbo, S; Tucker, D

    2001-09-01

    Biofeedback was used to increase forearm-muscle tension. Feedback was delivered under continuous reinforcement (CRF), variable interval (VI), fixed interval (FI), variable ratio (VR), and fixed ratio (FR) schedules of reinforcement when college students increased their muscle tension (electromyograph, EMG) above a high threshold. There were three daily sessions of feedback, and Session 3 was immediately followed by a session without feedback (extinction). The CRF schedule resulted in the highest EMG, closely followed by the FR and VR schedules, and the lowest EMG scores were produced by the FI and VI schedules. Similarly, the CRF schedule resulted in the greatest amount of time-above-threshold and the VI and FI schedules produced the lowest time-above-threshold. The highest response rates were generated by the FR schedule, followed by the VR schedule. The CRF schedule produced relatively low response rates, comparable to the rates under the VI and FI schedules. Some of the data are consistent with the partial-reinforcement-extinction effect. The present data suggest that different schedules of feedback should be considered in muscle-strengthening contexts such as during the rehabilitation of muscles following brain damage or peripheral nervous-system injury.

  7. Compensation for red-green contrast loss in anomalous trichromats

    PubMed Central

    Boehm, A. E.; MacLeod, D. I. A.; Bosten, J. M.

    2014-01-01

    For anomalous trichromats, threshold contrasts for color differences captured by the L and M cones and their anomalous analogs are much higher than for normal trichromats. The greater spectral overlap of the cone sensitivities reduces chromatic contrast both at and above threshold. But above threshold, adaptively nonlinear processing might compensate for the chromatically impoverished photoreceptor inputs. Ratios of sensitivity for threshold variations and for color appearance along the two cardinal axes of MacLeod-Boynton chromaticity space were calculated for three groups: normals (N = 15), deuteranomals (N = 9), and protanomals (N = 5). Using a four-alternative forced choice (4AFC) task, threshold sensitivity was measured in four color-directions along the two cardinal axes. For the same participants, we reconstructed perceptual color spaces for the positions of 25 hues using multidimensional scaling (MDS). From the reconstructed color spaces we extracted “color difference ratios,” defined as ratios for the size of perceived color differences along the L/(L + M) axis relative to those along the S/(L + M) axis, analogous to “sensitivity ratios” extracted from the 4AFC task. In the 4AFC task, sensitivity ratios were 38% of normal for deuteranomals and 19% of normal for protanomals. Yet, in the MDS results, color difference ratios were 86% of normal for deuteranomals and 67% of normal for protanomals. Thus, the contraction along the L/(L + M) axis shown in the perceptual color spaces of anomalous trichromats is far smaller than predicted by their reduced sensitivity, suggesting that an adaptive adjustment of postreceptoral gain may magnify the cone signals of anomalous trichromats to exploit the range of available postreceptoral neural signals. PMID:25413625

  8. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is an essential step in real-time crash prediction: once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to determine an optimal threshold effectively. The few studies that address threshold choice do so only when discussing the predictive performance of their models, and they rely on subjective methods to choose the threshold. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgment. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. To account for the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and estimate crash risk. Cross-entropy, between-class variance and other criteria were then investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model performs well, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to these criteria. The minimum cross-entropy method can automatically identify thresholds for crash prediction by minimizing the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after applying the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
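
    As a rough illustration of the minimum cross-entropy idea, the sketch below sweeps candidate cut-offs and keeps the one that minimizes the cross-entropy between the model's continuous crash probabilities and the binary warnings produced by the cut-off. The beta-distributed probabilities and the grid of candidate thresholds are illustrative assumptions; the paper's exact formulation may differ.

      import numpy as np

      def cross_entropy_threshold(probs, candidate_thresholds=None):
          """Pick the cut-off minimizing cross-entropy between the continuous
          crash probabilities and the binary warnings produced by the cut-off."""
          probs = np.asarray(probs, dtype=float)
          if candidate_thresholds is None:
              candidate_thresholds = np.linspace(0.01, 0.99, 99)
          eps = 1e-12
          best_t, best_ce = None, np.inf
          for t in candidate_thresholds:
              warnings = (probs >= t).astype(float)          # binarized dataset
              # cross-entropy between the binary warnings and the probabilities
              ce = -np.mean(warnings * np.log(probs + eps)
                            + (1.0 - warnings) * np.log(1.0 - probs + eps))
              if ce < best_ce:
                  best_t, best_ce = t, ce
          return best_t

      # toy usage with simulated crash-risk probabilities
      rng = np.random.default_rng(0)
      print(cross_entropy_threshold(rng.beta(2, 8, size=1000)))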

  9. Development and evaluation of a regression-based model to predict cesium-137 concentration ratios for saltwater fish.

    PubMed

    Pinder, John E; Rowan, David J; Smith, Jim T

    2016-02-01

    Data from published studies and World Wide Web sources were combined to develop a regression model to predict (137)Cs concentration ratios for saltwater fish. Predictions were developed from 1) numeric trophic levels computed primarily from random resampling of known food items and 2) K concentrations in the saltwater for 65 samplings from 41 different species from both the Atlantic and Pacific Oceans. A number of different models were initially developed and evaluated for accuracy, which was assessed as the ratios of independently measured concentration ratios to those predicted by the model. In contrast to freshwater systems, where K concentrations are highly variable and are an important factor in affecting fish concentration ratios, the less variable K concentrations in saltwater were relatively unimportant in affecting concentration ratios. As a result, the simplest model, which used only trophic level as a predictor, had comparable accuracies to more complex models that also included K concentrations. A test of model accuracy involving comparisons of 56 published concentration ratios from 51 species of marine fish to those predicted by the model indicated that 52 of the predicted concentration ratios were within a factor of 2 of the observed concentration ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.
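
    The simplest model retained by the authors predicts the concentration ratio from trophic level alone. A minimal sketch of that kind of fit, and of the "within a factor of 2" accuracy check, is shown below; the trophic levels and concentration ratios are made-up illustrative numbers, not the paper's data.

      import numpy as np

      # Hypothetical trophic levels and 137Cs concentration ratios (illustrative only).
      trophic_level = np.array([2.0, 2.5, 3.0, 3.3, 3.8, 4.2, 4.5])
      conc_ratio    = np.array([15., 25., 40., 55., 80., 120., 150.])

      # Fit log(CR) as a linear function of trophic level (the simplest model form).
      slope, intercept = np.polyfit(trophic_level, np.log(conc_ratio), 1)

      def predict_cr(tl):
          return np.exp(intercept + slope * tl)

      # Accuracy as the ratio of observed to predicted CR; "within a factor of 2"
      # means 0.5 <= observed/predicted <= 2.
      obs, pred = conc_ratio, predict_cr(trophic_level)
      within_factor_2 = np.mean((obs / pred >= 0.5) & (obs / pred <= 2.0))
      print(f"fraction within a factor of 2: {within_factor_2:.2f}")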

  10. Remodeling characteristics and collagen distribution in synthetic mesh materials explanted from human subjects after abdominal wall reconstruction: an analysis of remodeling characteristics by patient risk factors and surgical site classifications

    PubMed Central

    Cavallo, Jaime A.; Roma, Andres A.; Jasielec, Mateusz S.; Ousley, Jenny; Creamer, Jennifer; Pichert, Matthew D.; Baalman, Sara; Frisella, Margaret M.; Matthews, Brent D.

    2014-01-01

    Background The purpose of this study was to evaluate the associations between patient characteristics or surgical site classifications and the histologic remodeling scores of synthetic meshes biopsied from their abdominal wall repair sites in the first attempt to generate a multivariable risk prediction model of non-constructive remodeling. Methods Biopsies of the synthetic meshes were obtained from the abdominal wall repair sites of 51 patients during a subsequent abdominal re-exploration. Biopsies were stained with hematoxylin and eosin, and evaluated according to a semi-quantitative scoring system for remodeling characteristics (cell infiltration, cell types, extracellular matrix deposition, inflammation, fibrous encapsulation, and neovascularization) and a mean composite score (CR). Biopsies were also stained with Sirius Red and Fast Green, and analyzed to determine the collagen I:III ratio. Based on univariate analyses between subject clinical characteristics or surgical site classification and the histologic remodeling scores, cohort variables were selected for multivariable regression models using a threshold p value of ≤0.200. Results The model selection process for the extracellular matrix score yielded two variables: subject age at time of mesh implantation, and mesh classification (c-statistic = 0.842). For CR score, the model selection process yielded two variables: subject age at time of mesh implantation and mesh classification (r2 = 0.464). The model selection process for the collagen III area yielded a model with two variables: subject body mass index at time of mesh explantation and pack-year history (r2 = 0.244). Conclusion Host characteristics and surgical site assessments may predict degree of remodeling for synthetic meshes used to reinforce abdominal wall repair sites. These preliminary results constitute the first steps in generating a risk prediction model that predicts the patients and clinical circumstances for which non-constructive remodeling of an abdominal wall repair site with synthetic mesh reinforcement is most likely to occur. PMID:24442681

  11. A ratiometric threshold for determining presence of cancer during fluorescence-guided surgery.

    PubMed

    Warram, Jason M; de Boer, Esther; Moore, Lindsay S; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L

    2015-07-01

    Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Open-field and closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n = 11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using mean fluorescence intensity (MFI) from punch biopsy normalized by MFI of patient-matched tissues. Ratios were compared to pathological assessment and a ratiometric threshold was established to predict presence of cancer. During open-field imaging using an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and specificity of 78.6% for delineating disease tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. © 2015 Wiley Periodicals, Inc.
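
    In practice the ratiometric rule reduces to normalizing each biopsy's mean fluorescence intensity by a patient-matched reference tissue and calling the sample tumor when the ratio exceeds the threshold. The helper functions below sketch that rule and the sensitivity/specificity calculation; the input values are illustrative, and only the 2.7 muscle-normalized threshold comes from the abstract.

      import numpy as np

      def classify_biopsies(tumor_mfi, reference_mfi, threshold=2.7):
          """Call a punch biopsy tumor when its MFI, normalized by a patient-matched
          reference tissue (e.g. muscle), exceeds the ratiometric threshold."""
          ratios = np.asarray(tumor_mfi, float) / np.asarray(reference_mfi, float)
          return ratios >= threshold

      def sensitivity_specificity(predicted, truth):
          predicted, truth = np.asarray(predicted, bool), np.asarray(truth, bool)
          sens = (predicted & truth).sum() / truth.sum()
          spec = (~predicted & ~truth).sum() / (~truth).sum()
          return sens, spec

      # toy usage with invented MFI values and pathology calls
      calls = classify_biopsies([5.1, 8.0, 2.2, 9.3], [1.8, 2.0, 1.9, 2.1])
      print(calls, sensitivity_specificity(calls, [True, True, False, True]))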

  12. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210

  13. Deriving the species richness distribution of Geotrupinae (Coleoptera: Scarabaeoidea) in Mexico from the overlap of individual model predictions.

    PubMed

    Trotta-Moreu, Nuria; Lobo, Jorge M

    2010-02-01

    Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 ± 29 versus 18 ± 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
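
    The stacking step itself is simple: each species' continuous suitability surface is binarized at a chosen threshold and the binary layers are summed into a richness map. A minimal sketch is given below, with a random suitability array standing in for the MaxEnt output.

      import numpy as np

      # suitability[s, i, j]: MaxEnt suitability (0-100) of species s in cell (i, j);
      # random values stand in for real model output.
      rng = np.random.default_rng(1)
      suitability = rng.uniform(0, 100, size=(30, 50, 50))   # 30 species, 50x50 grid

      def richness_map(suitability, threshold):
          """Convert continuous suitability to presence/absence with one threshold,
          then sum the binary layers to get predicted species richness per cell."""
          presence = suitability >= threshold
          return presence.sum(axis=0)

      # mean predicted richness under a few of the candidate thresholds
      for t in (10, 50, 90):
          print(t, richness_map(suitability, t).mean())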

  14. Symmetry breaking, phase separation and anomalous fluctuations in driven granular gas

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Pöschel, Thorsten; Sasorov, Pavel V.; Schwager, Thomas

    2003-03-01

    What is the role of noise, caused by the discrete nature of particles, in granular dynamics? We address this question by considering a simple driven granular system: an ensemble of nearly elastically colliding hard spheres in a rectangular box, driven by a rapidly vibrating side wall at zero gravity. The elementary state of this system is a strip of enhanced particle density away from the driving wall. Granular hydrodynamics (GHD) predicts a symmetry breaking instability of this state, when the aspect ratio of the confining box exceeds a threshold value, while the average density of the gas is within a "spinodal interval". At large aspect ratios this instability leads to phase separation similar to that in van der Waals gas. In the present work (see cond-mat/0208286) we focus on the system behavior around the threshold of the symmetry-breaking instability. We put GHD to a quantitative test by performing extensive event-driven molecular dynamics simulations in 2D. Please watch the movies of the simulations at http://summa.physik.hu-berlin.de/ kies/HD/. We found that the supercritical bifurcation curve, predicted by GHD, agrees with the simulations well below and well above the instability threshold. In a wide region of aspect ratios around the threshold the system is dominated by fluctuations. We checked that the fluctuation strength goes down when the number of particles increases. However, fluctuations remain strong (and the critical region wide) even for as many as 4 × 10^4 particles. We conclude by suggesting that fluctuations may put a severe limitation on the validity of continuum theories of granular flow in systems with a moderately large number of particles.

  15. N-terminal pro-B-type Natriuretic Peptides' Prognostic Utility Is Overestimated in Meta-analyses Using Study-specific Optimal Diagnostic Thresholds.

    PubMed

    Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N

    2015-08-01

    N-terminal fragment B-type natriuretic peptide (NT-proBNP) prognostic utility is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Individual patient-level data NT-proBNP thresholds were determined using two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients) a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84) as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
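
    The inflation the authors describe can be reproduced in a toy simulation: when each small study picks the cut-off that maximizes its own odds ratio, the pooled estimate drifts upward relative to a common, pre-specified cut-off. The sketch below is an illustrative simulation with invented effect sizes and study sizes, not a re-analysis of the NT-proBNP data.

      import numpy as np

      rng = np.random.default_rng(2)

      def study_or(biomarker, outcome, threshold):
          """2x2 odds ratio for outcome given biomarker >= threshold (0.5 added to
          each cell to avoid division by zero in small studies)."""
          high = biomarker >= threshold
          a = np.sum(high & outcome) + 0.5
          b = np.sum(high & ~outcome) + 0.5
          c = np.sum(~high & outcome) + 0.5
          d = np.sum(~high & ~outcome) + 0.5
          return (a * d) / (b * c)

      fixed_ors, optimal_ors = [], []
      for _ in range(200):                      # 200 simulated small studies
          n = 80
          biomarker = rng.normal(size=n)
          # weak true association between biomarker and outcome
          outcome = rng.random(n) < 1 / (1 + np.exp(-(0.4 * biomarker - 1.0)))
          fixed_ors.append(study_or(biomarker, outcome, threshold=0.0))
          # post hoc "optimal" threshold: the candidate maximizing this study's OR
          candidates = np.quantile(biomarker, np.linspace(0.2, 0.8, 13))
          optimal_ors.append(max(study_or(biomarker, outcome, t) for t in candidates))

      print("median OR, common threshold :", round(float(np.median(fixed_ors)), 2))
      print("median OR, study-specific   :", round(float(np.median(optimal_ors)), 2))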

  16. Chemical characterization of the acid alteration of diesel fuel: Non-targeted analysis by two-dimensional gas chromatography coupled with time-of-flight mass spectrometry with tile-based Fisher ratio and combinatorial threshold determination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Brendon A.; Pinkerton, David K.; Wright, Bob W.

    The illicit chemical alteration of petroleum fuels is of scientific interest, particularly to regulatory agencies which set fuel specifications, or excises based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC–TOFMS) is ideally suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprised of many samples. The tile-based Fisher-ratio (F-ratio) method reduces the abundance of data in a GC × GC–TOFMS experiment to only the peaks which significantly distinguish the unaltered and acid altered sample classes. Three samples of diesel fuel from different filling stations were each altered to discover chemical features, i.e., analyte peaks, which were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features which are likely to be robust to the variation present between fuel samples and which will consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features which are sufficiently class-distinguishing. When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3.

  17. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient which is an equivalence to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm based on two sets of physically realisable system input-output data details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when local minimum is present as the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and a gas turbine engine real system for model identification, respectively. Finally, the graphical validation of threshold on a two-dimensional plot is discussed.

  18. Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-12-23

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
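
    At the core of a threshold (probit) model is the link between a continuous latent scale and the ordered categories: fixed cutpoints partition the latent distribution, and the category probabilities are the probability masses between adjacent cutpoints. The snippet below illustrates that link only; the cutpoints and latent value are arbitrary, and it is not the TGBLUP model itself.

      import numpy as np
      from scipy.stats import norm

      def ordinal_probs(latent_mean, cutpoints):
          """Probability of each ordered category (e.g. disease scores 1-5) given a
          genomic value on the latent scale and increasing threshold cutpoints."""
          grid = np.concatenate(([-np.inf], cutpoints, [np.inf]))
          return np.diff(norm.cdf(grid, loc=latent_mean))

      # illustrative thresholds separating the five disease scores
      cutpoints = np.array([-1.5, -0.5, 0.5, 1.5])
      print(ordinal_probs(latent_mean=0.8, cutpoints=cutpoints))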

  19. Patient No-Show Predictive Model Development using Multiple Data Sources for an Effective Overbooking Approach

    PubMed Central

    Hanauer, D.A.

    2014-01-01

    Summary Background Patient no-shows in outpatient delivery systems remain problematic. The negative impacts include underutilized medical resources, increased healthcare costs, decreased access to care, and reduced clinic efficiency and provider productivity. Objective To develop an evidence-based predictive model for patient no-shows, and thus improve overbooking approaches in outpatient settings to reduce the negative impact of no-shows. Methods Ten years of retrospective data were extracted from a scheduling system and an electronic health record system from a single general pediatrics clinic, consisting of 7,988 distinct patients and 104,799 visits along with variables regarding appointment characteristics, patient demographics, and insurance information. Descriptive statistics were used to explore the impact of variables on show or no-show status. Logistic regression was used to develop a no-show predictive model, which was then used to construct an algorithm to determine the no-show threshold that calculates a predicted show/no-show status. This approach aims to overbook an appointment where a scheduled patient is predicted to be a no-show. The approach was compared with two commonly-used overbooking approaches to demonstrate the effectiveness in terms of patient wait time, physician idle time, overtime and total cost. Results From the training dataset, the optimal error rate is 10.6% with a no-show threshold being 0.74. This threshold successfully predicts the validation dataset with an error rate of 13.9%. The proposed overbooking approach demonstrated a significant reduction of at least 6% on patient waiting, 27% on overtime, and 3% on total costs compared to other common flat-overbooking methods. Conclusions This paper demonstrates an alternative way to accommodate overbooking, accounting for the prediction of an individual patient’s show/no-show status. The predictive no-show model leads to a dynamic overbooking policy that could improve patient waiting, overtime, and total costs in a clinic day while maintaining a full scheduling capacity. PMID:25298821
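
    A stripped-down version of this pipeline fits a logistic regression for the no-show probability and then sweeps candidate thresholds to find the one with the lowest classification error, which is then used to flag slots for overbooking. The sketch below uses synthetic features and outcomes; only the idea of tuning the cut-off (the abstract's 0.74) is taken from the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # X: appointment / demographic / insurance features; y: 1 = no-show.
      # Synthetic stand-in data for illustration.
      rng = np.random.default_rng(3)
      X = rng.normal(size=(5000, 6))
      y = (rng.random(5000) < 1 / (1 + np.exp(-(X[:, 0] - 1.5)))).astype(int)

      model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
      p_noshow = model.predict_proba(X[4000:])[:, 1]

      # sweep candidate thresholds and keep the one with the lowest error rate
      thresholds = np.linspace(0.05, 0.95, 91)
      errors = [np.mean((p_noshow >= t).astype(int) != y[4000:]) for t in thresholds]
      best = thresholds[int(np.argmin(errors))]
      print(f"selected threshold: {best:.2f}, error rate: {min(errors):.3f}")

      # appointments predicted as no-shows are candidates for overbooking
      overbook = p_noshow >= best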

  20. Resonance Scattering of Fe XVII X-ray and EUV Lines

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Saba, J. L. R.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Over the years a number of calculations have been carried out to derive intensities of various X-ray and EUV lines in Fe XVII to compare with observed spectra. The predicted intensities have not agreed with solar observations, particularly for the line at 15.02 Angstroms; resonance scattering has been suggested as the source for much of the disagreement. The atomic data calculated earlier used seven configurations having n=3 orbitals and the scattering calculations were carried out only for incident energies above the threshold of the highest fine-structure level. These calculations have now been extended to thirteen configurations having n=4 orbitals and the scattering calculations are carried out below as well as above the threshold of the highest fine-structure level. These improved calculations of Fe XVII change the intensity ratios compared to those obtained earlier, bringing the optically thin F(15.02)/F(16.78) ratio and several other ratios closer to the observed values. However, some disagreement with the solar observations still persists, even though the agreement of the presently calculated optically thin F(15.02)/F(15.26) ratio with the experimental results of Brown et al. (1998) and Laming et al. (2000) has improved. Some of the remaining discrepancy is still thought to be the effect of opacity, which is consistent with expected physical conditions for solar sources. EUV intensity ratios are also calculated and compared with observations. Level populations and intensity ratios are calculated, as a function of column density of Fe XVII, in the slab and cylindrical geometries. As found previously, the predicted intensities for the resonance lines at 15.02 and 15.26 Angstroms exhibit initial increases in flux relative to the forbidden line at 17.10 Angstroms and the resonance line at 16.78 Angstroms as optical thickness increases. The same behavior is predicted for the lines at 12.262 and 12.122 Angstroms. Predicted intensities for some of the allowed EUV lines are also affected by opacity.

  1. A method for managing re-identification risk from small geographic areas in Canada

    PubMed Central

    2010-01-01

    Background A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of unique individuals on the relevant variables (the quasi-identifiers) approaches zero. However, using a uniqueness value of zero is quite a stringent threshold, and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods We estimated uniqueness for urban Forward Sortation Areas (FSAs) by using the 2001 long form Canadian census data representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds considerably fewer records compared to the 0% threshold would be considered to be in small areas and therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the potential invasion of privacy if the data is disclosed, and the motives and capacity of the data recipient to re-identify the data. Conclusion The models we developed can be used to manage the re-identification risk from small geographic areas. Being able to choose among three possible thresholds, a data custodian can adjust the definition of "small geographic area" to the nature of the data and recipient. PMID:20361870
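
    The structure of the two models is a logistic regression of a binary "uniqueness above threshold" indicator on FSA population size and the number of equivalence classes, validated with 10-fold cross-validation. The sketch below mirrors that structure on synthetic data; the log transforms and the simulated relationship are assumptions for illustration, not the paper's fitted model.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # One row per FSA: population size and number of equivalence classes on the
      # quasi-identifiers; the response is whether uniqueness exceeds 5%.
      rng = np.random.default_rng(4)
      population = rng.integers(1_000, 60_000, size=800)
      equiv_classes = rng.integers(50, 5_000, size=800)
      uniqueness = equiv_classes / population + rng.normal(0, 0.01, 800)
      y = (uniqueness > 0.05).astype(int)

      X = np.column_stack([np.log(population), np.log(equiv_classes)])
      model = LogisticRegression(max_iter=1000)
      print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())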

  2. A Novel Approach to Prediction of Mild Obstructive Sleep Disordered Breathing in a Population-Based Sample: The Sleep Heart Health Study

    PubMed Central

    Caffo, Brian; Diener-West, Marie; Punjabi, Naresh M.; Samet, Jonathan

    2010-01-01

    This manuscript considers a data-mining approach for the prediction of mild obstructive sleep disordered breathing, defined as an elevated respiratory disturbance index (RDI), in 5,530 participants in a community-based study, the Sleep Heart Health Study. The prediction algorithm was built using modern ensemble learning algorithms, boosting in specific, which allowed for assessing potential high-dimensional interactions between predictor variables or classifiers. To evaluate the performance of the algorithm, the data were split into training and validation sets for varying thresholds for predicting the probability of a high RDI (≥ 7 events per hour in the given results). Based on a moderate classification threshold from the boosting algorithm, the estimated post-test odds of a high RDI were 2.20 times higher than the pre-test odds given a positive test, while the corresponding post-test odds were decreased by 52% given a negative test (sensitivity and specificity of 0.66 and 0.70, respectively). In rank order, the following variables had the largest impact on prediction performance: neck circumference, body mass index, age, snoring frequency, waist circumference, and snoring loudness. Citation: Caffo B; Diener-West M; Punjabi NM; Samet J. A novel approach to prediction of mild obstructive sleep disordered breathing in a population-based sample: the Sleep Heart Health Study. SLEEP 2010;33(12):1641-1648. PMID:21120126
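
    The reported post-test odds follow directly from the sensitivity and specificity via likelihood ratios, as the short check below shows: a positive likelihood ratio of about 2.2 and a negative likelihood ratio of about 0.49, i.e. the odds roughly halve after a negative test.

      # Worked check of the reported figures for the moderate classification threshold.
      sensitivity, specificity = 0.66, 0.70
      lr_pos = sensitivity / (1 - specificity)     # ~2.2, matching the abstract
      lr_neg = (1 - sensitivity) / specificity     # ~0.49, odds cut roughly in half
      print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")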

  3. Physiological Correlates of Endurance Time Variability during Constant-Workrate Cycling Exercise in Patients with COPD

    PubMed Central

    Vivodtzev, Isabelle; Gagnon, Philippe; Pepin, Véronique; Saey, Didier; Laviolette, Louis; Brouillard, Cynthia; Maltais, François

    2011-01-01

    Rationale The endurance time (Tend) during constant-workrate cycling exercise (CET) is highly variable in COPD. We investigated pulmonary and physiological variables that may contribute to these variations in Tend. Methods Ninety-two patients with COPD completed a CET performed at 80% of peak workrate capacity (Wpeak). Patients were divided into tertiles of Tend [Group 1: <4 min; Group 2: 4–6 min; Group 3: >6 min]. Disease severity (FEV1), aerobic fitness (Wpeak, peak oxygen consumption [VO2peak], ventilatory threshold [VT]), quadriceps strength (MVC), symptom scores at the end of CET and exercise intensity during CET (heart rate at the end of CET to heart rate at peak incremental exercise ratio [HRCET/HRpeak]) were analyzed as potential variables influencing Tend. Results Wpeak, VO2peak, VT, MVC, leg fatigue at end of CET, and HRCET/HRpeak were lower in group 1 than in group 2 or 3 (p≤0.05). VT and leg fatigue at end of CET independently predicted Tend in multiple regression analysis (r = 0.50, p = 0.001). Conclusion Tend was independently related to aerobic fitness and to tolerance to leg fatigue at the end of exercise. A large fraction of the variability in Tend was not explained by the physiological parameters assessed in the present study. Individualization of exercise intensity during CET should help in reducing variations in Tend among patients with COPD. PMID:21386991

  4. Effect of patient age on blood product transfusion after cardiac surgery.

    PubMed

    Ad, Niv; Massimiano, Paul S; Burton, Nelson A; Halpin, Linda; Pritchard, Graciela; Shuman, Deborah J; Holmes, Sari D

    2015-07-01

    Blood product transfusion after cardiac surgery is associated with increased morbidity and mortality. Transfusion thresholds are often lower for the elderly, despite the lack of clinical evidence for this practice. This study examined the role of age as a predictor for blood transfusion. A total of 1898 patients were identified who had nonemergent cardiac surgery, between January 2007 and August 2013, without intra-aortic balloon pumps or reoperations, and with short (<24 hours) intensive care unit stays (age ≥75 years; n = 239). Patients age ≥75 years were propensity-score matched to those age <75 years to balance covariates, resulting in 222 patients per group. Analyses of the matched sample examined age as a continuous variable, scaled in 5-year increments. After matching, covariates were balanced between older and younger patients. Older age significantly predicted postoperative (odds ratio = 1.39, P = .028), but not intraoperative (odds ratio = 0.96, P = .559), blood transfusion. Older age predicted longer length of stay (B = 0.21, P < .001), even after adjustment for blood product transfusion (B = 0.20, P < .001). As expected, older age was a significant predictor for poorer survival, even with multivariate adjustment (hazard ratio = 1.34, P = .042). In patients with a routine postoperative course, older age was associated with more postoperative blood transfusion. Older age was also predictive of longer length of stay and poorer survival, even after accounting for clinical factors. Continued study into effects of transfusion, particularly in the elderly, should be directed toward hospital transfusion protocols to optimize perioperative care. Copyright © 2015 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  5. Mechanosensing of stem bending and its interspecific variability in five neotropical rainforest species.

    PubMed

    Coutand, Catherine; Chevolot, Malia; Lacointe, André; Rowe, Nick; Scotti, Ivan

    2010-02-01

    In rain forests, sapling survival is highly dependent on the regulation of trunk slenderness (height/diameter ratio): shade-intolerant species have to grow in height as fast as possible to reach the canopy but also have to withstand mechanical loadings (wind and their own weight) to avoid buckling. Recent studies suggest that mechanosensing is essential to control tree dimensions and stability-related morphogenesis. Differences in species slenderness have been observed among rainforest trees; the present study thus investigates whether species with different slenderness and growth habits exhibit differences in mechanosensitivity. Recent studies have led to a model of mechanosensing (sum-of-strains model) that predicts a quantitative relationship between the applied sum of longitudinal strains and the plant's responses in the case of a single bending. Saplings of five different neotropical species (Eperua falcata, E. grandiflora, Tachigali melinonii, Symphonia globulifera and Bauhinia guianensis) were subjected to a regimen of controlled mechanical loading phases (bending) alternating with still phases over a period of 2 months. Mechanical loading was controlled in terms of strains and the five species were subjected to the same range of sum of strains. The application of the sum-of-strain model led to a dose-response curve for each species. Dose-response curves were then compared between tested species. The model of mechanosensing (sum-of-strain model) applied in the case of multiple bending as long as the bending frequency was low. A comparison of dose-response curves for each species demonstrated differences in the stimulus threshold, suggesting two groups of responses among the species. Interestingly, the liana species B. guianensis exhibited a higher threshold than other Leguminosae species tested. This study provides a conceptual framework to study variability in plant mechanosensing and demonstrated interspecific variability in mechanosensing.

  6. Tourism development and economic growth a nonlinear approach

    NASA Astrophysics Data System (ADS)

    Po, Wan-Chen; Huang, Bwo-Nung

    2008-09-01

    We use cross sectional data (1995-2005 yearly averages) for 88 countries to investigate the nonlinear relationship between tourism development and economic growth when a threshold variable is used. The degree of tourism specialization ( qi, defined as receipts from international tourism as a percentage of GDP) is used as the threshold variable. The results of the tests for nonlinearity indicate that the 88 countries’ data should be separated into three different groups or regimes to analyze the tourism-growth nexus. The results of the threshold regression show that when the qi is below 4.0488% (regime 1, 57 countries) or above 4.7337% (regime 3, 23 countries), there exists a significantly positive relationship between tourism growth and economic growth. However, when the qi is above 4.0488% and below 4.7337% (regime 2, 8 countries), we are unable to find evidence of such a significant relationship. Further in-depth analysis reveals that relatively low ratios of the value added of the service industry to GDP, and the forested area per country area are able to explain why we are unable to find a significant relationship between these two variables in regime 2’s countries.

  7. Re-assess Vector Indices Threshold as an Early Warning Tool for Predicting Dengue Epidemic in a Dengue Non-endemic Country

    PubMed Central

    Hsu, Pi-Shan; Chen, Chaur-Dong; Lian, Ie-Bin; Chao, Day-Yu

    2015-01-01

    Background Despite dengue dynamics being driven by complex interactions between human hosts, mosquito vectors and viruses that are influenced by climate factors, an operational model that will enable health authorities to anticipate the outbreak risk in a dengue non-endemic area has not been developed. The objectives of this study were to evaluate the temporal relationship between meteorological variables, entomological surveillance indices and confirmed dengue cases; and to establish the threshold for entomological surveillance indices including three mosquito larval indices [Breteau (BI), Container (CI) and House indices (HI)] and one adult index (AI) as an early warning tool for dengue epidemic. Methodology/Principal Findings Epidemiological, entomological and meteorological data were analyzed from 2005 to 2012 in Kaohsiung City, Taiwan. The successive waves of dengue outbreaks with different magnitudes were recorded in Kaohsiung City, and involved a dominant serotype during each epidemic. The annual indigenous dengue cases usually started from May to June and reached a peak in October to November. Vector data from 2005–2012 showed that the peak of the adult mosquito population was followed by a peak in the corresponding dengue activity with a lag period of 1–2 months. Therefore, we focused the analysis on the data from May to December and the high risk district, where the inspection of the immature and mature mosquitoes was carried out on a weekly basis and about 97.9% dengue cases occurred. The two-stage model was utilized here to estimate the risk and time-lag effect of annual dengue outbreaks in Taiwan. First, Poisson regression was used to select the optimal subset of variables and time-lags for predicting the number of dengue cases, and the final results of the multivariate analysis were selected based on the smallest AIC value. Next, each vector index models with selected variables were subjected to multiple logistic regression models to examine the accuracy of predicting the occurrence of dengue cases. The results suggested that Model-AI, BI, CI and HI predicted the occurrence of dengue cases with 83.8, 87.8, 88.3 and 88.4% accuracy, respectively. The predicting threshold based on individual Model-AI, BI, CI and HI was 0.97, 1.16, 1.79 and 0.997, respectively. Conclusion/Significance There was little evidence of quantifiable association among vector indices, meteorological factors and dengue transmission that could reliably be used for outbreak prediction. Our study here provided the proof-of-concept of how to search for the optimal model and determine the threshold for dengue epidemics. Since those factors used for prediction varied, depending on the ecology and herd immunity level under different geological areas, different thresholds may be developed for different countries using a similar structure of the two-stage model. PMID:26366874
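
    A heavily simplified version of the two-stage idea is sketched below: a Poisson regression relates lagged vector indices and weather covariates to weekly case counts, and a logistic regression on case occurrence yields the probability that is cut at a threshold for early warning. All data, lags and coefficients are synthetic placeholders, and the AIC-based variable selection step is omitted.

      import numpy as np
      from sklearn.linear_model import PoissonRegressor, LogisticRegression

      # Weekly synthetic data: lagged vector indices plus weather covariates.
      rng = np.random.default_rng(6)
      n = 300
      X = np.column_stack([rng.gamma(2.0, 1.0, n) for _ in range(6)])
      cases = rng.poisson(np.exp(0.2 * X[:, 1] - 0.5))

      # Stage 1: Poisson regression relating lagged predictors to case counts.
      stage1 = PoissonRegressor(alpha=1e-6, max_iter=1000).fit(X, cases)
      print("stage-1 rate coefficients:", np.round(stage1.coef_, 2))

      # Stage 2: logistic regression on occurrence (any case vs none), giving a
      # probability that can be cut at a threshold for early warning.
      occurrence = (cases > 0).astype(int)
      stage2 = LogisticRegression(max_iter=1000).fit(X, occurrence)
      p = stage2.predict_proba(X)[:, 1]
      print("predicted outbreak weeks at p>=0.5:", int((p >= 0.5).sum()))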

  8. Mathematical Model Relating Uniaxial Compressive Behavior of Manufactured Sand Mortar to MIP-Derived Pore Structure Parameters

    PubMed Central

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned using different water-cement ratios and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amounts of macropores, as well as shape and size of micropores are quantified by using the mercury intrusion porosimetry (MIP) technique. Test results indicate that strains at peak stress and compressive strength decreased with the increasing sand-cement ratio due to insufficient binder to wrap up the entire sand. A compression stress-strain model for normal concrete, extended to predict the stress-strain relationships of manufactured sand mortar, is verified and agrees well with experimental data. Furthermore, the stress-strain model constant is found to be influenced by threshold diameter, mean diameter, shape, and size of micropores. A mathematical model relating stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed. PMID:25133257

  9. Mathematical model relating uniaxial compressive behavior of manufactured sand mortar to MIP-derived pore structure parameters.

    PubMed

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned using different water-cement ratios and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amounts of macropores, as well as shape and size of micropores are quantified by using the mercury intrusion porosimetry (MIP) technique. Test results indicate that strains at peak stress and compressive strength decreased with the increasing sand-cement ratio due to insufficient binder to wrap up the entire sand. A compression stress-strain model for normal concrete, extended to predict the stress-strain relationships of manufactured sand mortar, is verified and agrees well with experimental data. Furthermore, the stress-strain model constant is found to be influenced by threshold diameter, mean diameter, shape, and size of micropores. A mathematical model relating stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed.

  10. Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies

    NASA Astrophysics Data System (ADS)

    Perez Hoyos, Isabel Cristina

    The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrate that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem to be groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. 
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
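
    Translating the probability surface into a GDE/NON-GDE map is a single thresholding step, and, as noted above, the chosen cut-off strongly affects the resulting classification. A minimal sketch, with a random probability surface standing in for the model output:

      import numpy as np

      def binary_gde_map(probability_map, threshold):
          """Translate a 1-km GDE probability surface into a two-class map
          (GDE / NON-GDE) with a chosen probability cut-off."""
          return np.where(probability_map >= threshold, "GDE", "NON-GDE")

      # toy probability surface; the classified area shifts sharply with the cut-off
      rng = np.random.default_rng(5)
      prob = rng.random((100, 100))
      for t in (0.3, 0.5, 0.7):
          frac = np.mean(binary_gde_map(prob, t) == "GDE")
          print(f"threshold {t}: {frac:.0%} of cells classified as GDE")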

  11. The fragmentation threshold and implications for explosive eruptions

    NASA Astrophysics Data System (ADS)

    Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.

    2003-04-01

    The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Aldibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation. Pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high temperature experiments form a curve with less variability than the low temperature experiments. For this reason, we have chosen to model the high temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample. For this reason, the threshold values can be more accurately predicted by % matrix × tensile strength / porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal contents are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or amount of overpressure necessary to produce an explosive eruption.
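
    Written out, the two empirical predictors described above are roughly: threshold ≈ tensile strength / porosity, refined to threshold ≈ matrix fraction × tensile strength / porosity. In LaTeX (the symbols are ours, not the authors'):

      % \Delta P_{\mathrm{th}}: fragmentation threshold, \sigma_t: tensile strength of
      % the glass (~140 MPa), \phi: porosity, f_m: matrix (crystal-free) fraction.
      \[
        \Delta P_{\mathrm{th}} \approx \frac{\sigma_t}{\phi}
        \qquad\text{and}\qquad
        \Delta P_{\mathrm{th}} \approx \frac{f_m\,\sigma_t}{\phi}
      \]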

  12. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

    Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, either using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and not equal to 1.0. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provided unbiased estimates of risk ratios and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.

  13. A comparison of South Asian specific and established BMI thresholds for determining obesity prevalence in pregnancy and predicting pregnancy complications: findings from the Born in Bradford cohort.

    PubMed

    Bryant, M; Santorelli, G; Lawlor, D A; Farrar, D; Tuffnell, D; Bhopal, R; Wright, J

    2014-03-01

    To describe how maternal obesity prevalence varies by established international and South Asian specific body mass index (BMI) cut-offs in women of Pakistani origin and investigate whether different BMI thresholds can help to identify women at risk of adverse pregnancy and birth outcomes. Prospective bi-ethnic birth cohort study (the Born in Bradford (BiB) cohort). Bradford, a deprived city in the North of the UK. A total of 8478 South Asian and White British pregnant women participated in the BiB cohort study. Maternal obesity prevalence; prevalence of known obesity-related adverse pregnancy outcomes: mode of birth, hypertensive disorders of pregnancy (HDP), gestational diabetes, macrosomia and pre-term births. Application of South Asian BMI cut-offs increased prevalence of obesity in Pakistani women from 18.8 (95% confidence interval (CI) 17.6-19.9) to 30.9% (95% CI 29.5-32.2). With the exception of pre-term births, there was a positive linear relationship between BMI and prevalence of adverse pregnancy and birth outcomes, across almost the whole BMI distribution. Risk of gestational diabetes and HDP increased more sharply in Pakistani women after a BMI threshold of at least 30 kg m(-2), but there was no evidence of a sharp increase in any risk factors at the new, lower thresholds suggested for use in South Asian women. BMI was a good single predictor of outcomes (area under the receiver operating curve: 0.596-0.685 for different outcomes); prediction was more discriminatory and accurate with BMI as a continuous variable than as a binary variable for any possible cut-off point. Applying the new South Asian threshold to pregnant women would markedly increase those who were referred for monitoring and lifestyle advice. However, our results suggest that lowering the BMI threshold in South Asian women would not improve the predictive ability for identifying those who were at risk of adverse pregnancy outcomes.

  14. A Ratiometric Threshold for Determining Presence of Cancer During Fluorescence-guided Surgery

    PubMed Central

    Warram, Jason M; de Boer, Esther; Moore, Lindsay S.; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L

    2015-01-01

    Background & Objective Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Methods Open-field and closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n=11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using mean fluorescence intensity (MFI) from punch biopsy normalized by MFI of patient-matched tissues. Ratios were compared to pathological assessment and a ratiometric threshold was established to predict presence of cancer. Results During open-field imaging using an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and specificity of 78.6% for delineating disease tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Conclusion Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. PMID:26074273

  15. Urinary Squamous Epithelial Cells Do Not Accurately Predict Urine Culture Contamination, but May Predict Urinalysis Performance in Predicting Bacteriuria.

    PubMed

    Mohr, Nicholas M; Harland, Karisa K; Crabb, Victoria; Mutnick, Rachel; Baumgartner, David; Spinosi, Stephanie; Haarstad, Michael; Ahmed, Azeemuddin; Schweizer, Marin; Faine, Brett

    2016-03-01

    The presence of squamous epithelial cells (SECs) has been advocated to identify urinary contamination despite a paucity of evidence supporting this practice. We sought to determine the value of using quantitative SECs as a predictor of urinalysis contamination. Retrospective cross-sectional study of adults (≥18 years old) presenting to a tertiary academic medical center who had urinalysis with microscopy and urine culture performed. Patients with missing or implausible demographic data were excluded (2.5% of total sample). The primary analysis aimed to determine an SEC threshold that predicted urine culture contamination using receiver operating characteristics (ROC) curve analysis. The a priori secondary analysis explored how demographic variables (age, sex, body mass index) may modify the SEC test performance and whether SECs impacted traditional urinalysis indicators of bacteriuria. A total of 19,328 records were included. ROC curve analysis demonstrated that SEC count was a poor predictor of urine culture contamination (area under the ROC curve = 0.680, 95% confidence interval [CI] = 0.671 to 0.689). In secondary analysis, the positive likelihood ratio (LR+) of predicting bacteriuria via urinalysis among noncontaminated specimens was 4.98 (95% CI = 4.59 to 5.40) in the absence of SECs, but the LR+ fell to 2.35 (95% CI = 2.17 to 2.54) for samples with more than 8 SECs/low-powered field (lpf). In an independent validation cohort, urinalysis samples with fewer than 8 SECs/lpf predicted bacteriuria better (sensitivity = 75%, specificity = 84%) than samples with more than 8 SECs/lpf (sensitivity = 86%, specificity = 70%; diagnostic odds ratio = 17.5 [14.9 to 20.7] vs. 8.7 [7.3 to 10.5]). Squamous epithelial cells are a poor predictor of urine culture contamination, but may predict poor predictive performance of traditional urinalysis measures. © 2016 by the Society for Academic Emergency Medicine.
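
    The likelihood ratios reported above follow from standard definitions; a small sketch using the rounded sensitivity and specificity values from the validation cohort (the published LR+ values of 4.98 and 2.35 were computed from raw counts, so figures derived from the rounded numbers below are only approximate):

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Return (LR+, LR-) for a binary test.
        LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return lr_pos, lr_neg

    # Validation-cohort urinalysis performance stratified by SECs per low-powered field
    print(likelihood_ratios(0.75, 0.84))  # fewer than 8 SECs/lpf
    print(likelihood_ratios(0.86, 0.70))  # more than 8 SECs/lpf
    ```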

  16. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    Stock price changes are one of the most important topics of interest to investors; investors with long-term goals are sensitive to stock prices and react to their changes. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing splines technique to predict stock prices. MARS is an adaptive nonparametric regression method that is well suited to high-dimensional problems with many variables; smoothing splines are likewise a nonparametric regression technique. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, four accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) were selected as the influential variables for predicting stock price with the MARS model. After fitting the semi-parametric splines technique, only four accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as effective in forecasting stock prices.

  17. Chemical characterization of the acid alteration of diesel fuel: Non-targeted analysis by two-dimensional gas chromatography coupled with time-of-flight mass spectrometry with tile-based Fisher ratio and combinatorial threshold determination.

    PubMed

    Parsons, Brendon A; Pinkerton, David K; Wright, Bob W; Synovec, Robert E

    2016-04-01

    The illicit chemical alteration of petroleum fuels is of keen interest, particularly to regulatory agencies that set fuel specifications, or taxes/credits based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS) is well suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprised of many samples. Tile-based Fisher-ratio (F-ratio) analysis reduces the abundance of data in a GC×GC-TOFMS experiment to only the peaks which significantly distinguish the unaltered and acid altered sample classes. Three samples of diesel fuel from differently branded filling stations were each altered to discover chemical features, i.e., analyte peaks, which were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features likely to be robust to the variation present between fuel samples and may consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features which are sufficiently class-distinguishing. When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3. Validation of the F-ratio analysis was performed using an additional three fuels. Copyright © 2016 Elsevier B.V. All rights reserved.
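
    A simplified, hedged illustration of per-feature Fisher-ratio screening with a fixed threshold (the tile-based and null-distribution machinery of the actual method is not reproduced here); the synthetic peak table and the injected class difference are invented for demonstration, while the 12.4 threshold is the per-hit value reported for the diesel data set:

    ```python
    import numpy as np

    def fisher_ratios(class_a, class_b):
        """Per-feature Fisher ratio (one-way ANOVA F statistic for two classes):
        between-class variance over pooled within-class variance."""
        a, b = np.asarray(class_a, float), np.asarray(class_b, float)
        n_a, n_b = len(a), len(b)
        grand_mean = np.vstack([a, b]).mean(axis=0)
        between = n_a * (a.mean(axis=0) - grand_mean) ** 2 + n_b * (b.mean(axis=0) - grand_mean) ** 2
        within = ((a - a.mean(axis=0)) ** 2).sum(axis=0) + ((b - b.mean(axis=0)) ** 2).sum(axis=0)
        return between / (within / (n_a + n_b - 2))

    rng = np.random.default_rng(0)
    unaltered = rng.normal(100, 5, size=(6, 500))   # 6 injections x 500 candidate peaks
    altered = rng.normal(100, 5, size=(6, 500))
    altered[:, :20] *= 0.4                          # simulate aromatic peaks depleted by the acid

    f = fisher_ratios(unaltered, altered)
    threshold = 12.4                                # per-hit threshold reported for the diesel data
    print("class-distinguishing hits:", int((f > threshold).sum()), "of", f.size)
    ```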

  18. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  19. Effects of oxygen on responses to heating in two lizard species sampled along an elevational gradient.

    PubMed

    DuBois, P Mason; Shea, Tanner K; Claunch, Natalie M; Taylor, Emily N

    2017-08-01

    Thermal tolerance is an important variable in predictive models about the effects of global climate change on species distributions, yet the physiological mechanisms responsible for reduced performance at high temperatures in air-breathing vertebrates are not clear. We conducted an experiment to examine how oxygen affects three variables exhibited by ectotherms as they heat (gaping threshold, panting threshold, and loss of righting response, the latter indicating the critical thermal maximum) in two lizard species along an elevational (and therefore environmental oxygen partial pressure) gradient. Oxygen partial pressure did not impact these variables in either species. We also exposed lizards at each elevation to severely hypoxic gas to evaluate their responses to hypoxia. Severely low oxygen partial pressure treatments significantly reduced the gaping threshold, panting threshold, and critical thermal maximum. Further, under these extreme hypoxic conditions, these variables were strongly and positively related to partial pressure of oxygen. At an elevation where both species overlapped, the thermal tolerance of the high elevation species was less affected by hypoxia than that of the low elevation species, suggesting the high elevation species may be adapted to lower oxygen partial pressures. In the high elevation species, female lizards had higher thermal tolerance than males. Our data suggest that oxygen impacts the thermal tolerance of lizards, but only under severely hypoxic conditions, possibly as a result of hypoxia-induced anapyrexia. Copyright © 2017. Published by Elsevier Ltd.

  20. Neurofeedback in three patients in the state of unresponsive wakefulness.

    PubMed

    Keller, Ingo; Garbacenkaite, Ruta

    2015-12-01

    Some severely brain injured patients remain unresponsive, only showing reflex movements without any response to command. This syndrome has been named unresponsive wakefulness syndrome (UWS). The objective of the present study was to determine whether UWS patients are able to alter their brain activity using neurofeedback (NFB) technique. A small sample of three patients received a daily session of NFB for 3 weeks. We applied the ratio of theta and beta amplitudes as a feedback variable. Using an automatic threshold function, patients heard their favourite music whenever their theta/beta ratio dropped below the threshold. Changes in awareness were assessed weekly with the JFK Coma Recovery Scale-Revised for each treatment week, as well as 3 weeks before and after NFB. Two patients showed a decrease in their theta/beta ratio and theta-amplitudes during this period. The third patient showed no systematic changes in his EEG activity. The results of our study provide the first evidence that NFB can be used in patients in a state of unresponsive wakefulness.
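
    A rough sketch of the feedback variable, assuming a single EEG channel and Welch power spectral estimation; the sampling rate, band limits and the fixed threshold are illustrative stand-ins for the study's automatic threshold function:

    ```python
    import numpy as np
    from scipy.signal import welch

    def theta_beta_ratio(eeg, fs, theta=(4.0, 8.0), beta=(13.0, 30.0)):
        """Theta/beta amplitude ratio from a single EEG channel using Welch's PSD."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        df = freqs[1] - freqs[0]
        def band_power(lo, hi):
            mask = (freqs >= lo) & (freqs < hi)
            return psd[mask].sum() * df
        # Amplitude is proportional to the square root of band power
        return np.sqrt(band_power(*theta)) / np.sqrt(band_power(*beta))

    fs = 256  # Hz, assumed sampling rate
    rng = np.random.default_rng(1)
    eeg = rng.normal(0, 1, 10 * fs)  # 10 s of synthetic signal standing in for one channel

    ratio = theta_beta_ratio(eeg, fs)
    threshold = 2.0  # illustrative value; the study adapted it automatically per session
    print("play reward music" if ratio < threshold else "withhold feedback")
    ```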

  1. Predictive Variables of Half-Marathon Performance for Male Runners.

    PubMed

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A; García-López, Juan

    2017-06-01

    The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight male half-marathon runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations, derived from these different approaches, showed high predictive accuracy for half-marathon performance in long-distance male runners. Furthermore, they improved on the predictive performance of previous studies, which makes them highly practical tools in the field of training and performance.
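
    The equations themselves are not reproduced in the abstract, so the following sketch only illustrates the general workflow (fit a multiple linear regression on an estimation sample, then correlate predicted and observed times in a validation sample); all predictor values and coefficients are synthetic:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Hypothetical stand-ins for the physiological predictors: VO2max (ml/kg/min),
    # speed at the anaerobic threshold (km/h) and peak treadmill speed (km/h).
    n = 48
    X_train = np.column_stack([
        rng.normal(60, 6, n),    # VO2max
        rng.normal(14, 1.5, n),  # anaerobic-threshold speed
        rng.normal(18, 1.5, n),  # peak speed
    ])
    # Synthetic half-marathon times in minutes (faster with better physiology)
    y_train = 220 - 0.8 * X_train[:, 0] - 2.5 * X_train[:, 1] - 1.5 * X_train[:, 2] + rng.normal(0, 3, n)

    model = LinearRegression().fit(X_train, y_train)
    print("R^2 on the estimation sample:", round(model.score(X_train, y_train), 3))

    # Phase-2-style validation: correlate predicted and observed times in a new sample
    X_val = np.column_stack([rng.normal(60, 6, 30), rng.normal(14, 1.5, 30), rng.normal(18, 1.5, 30)])
    y_val = 220 - 0.8 * X_val[:, 0] - 2.5 * X_val[:, 1] - 1.5 * X_val[:, 2] + rng.normal(0, 3, 30)
    r = np.corrcoef(model.predict(X_val), y_val)[0, 1]
    print("validation correlation r:", round(r, 3))
    ```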

  2. An analytical method for predicting postwildfire peak discharges

    USGS Publications Warehouse

    Moody, John A.

    2012-01-01

    The analytical method presented here predicts postwildfire peak discharge; it was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The threshold value was about 7.6 millimeters per hour for the year of the wildfire and the first year after the wildfire, and it was about 11.1 millimeters per hour for the second year after the wildfire.

  3. How to Assess the Value of Medicines?

    PubMed Central

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066

  4. How to assess the value of medicines?

    PubMed

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.

  5. High Mid-Flow to Vital Capacity Ratio and the Response to Exercise in Children With Congenital Heart Disease.

    PubMed

    Vilozni, Daphna; Alcaneses-Ofek, Maria Rosario; Reuveny, Ronen; Rosenblum, Omer; Inbar, Omri; Katz, Uriel; Ziv-Baran, Tomer; Dubnov-Raz, Gal

    2016-12-01

    Pulmonary mechanics may play a role in exercise intolerance in patients with congenital heart disease (CHD). A reduced FVC could increase the ratio between mid-flow (FEF25-75%) and FVC, which is termed high dysanapsis. The relationship between high dysanapsis and the response to maximum-intensity exercise in children with CHD had not yet been studied. The aim of this work was to examine whether high dysanapsis is related to the cardiopulmonary response to maximum-intensity exercise in pediatric subjects with CHD. We retrospectively collected data from 42 children and adolescents with CHD who had either high dysanapsis (ratio >1.2; n = 21) or normal dysanapsis (control) (n = 21) as measured by spirometry. Data extracted from cardiopulmonary exercise test reports included peak values of heart rate, workload, V̇O2, V̇CO2, and ventilation parameters, and submaximum values, including ventilatory threshold and ventilatory equivalents. There were no significant differences in demographic and clinical parameters between the groups. Participants with high dysanapsis differed from controls in lower median peak oxygen consumption (65.8% vs 83.0% of predicted, P = .02), peak oxygen pulse (78.6% vs 87.8% of predicted, P = .02), ventilatory threshold (73.8% vs 85.3% of predicted, P = .03), and maximum breathing frequency (106% vs 121% of predicted, P = .035). In the high dysanapsis group only, median peak ventilation and tidal volume were significantly lower than 80% of predicted values. In children and adolescents with corrected CHD, high dysanapsis was associated with a lower ventilatory capacity and reduced aerobic fitness, which may indicate respiratory muscle impairments. Copyright © 2016 by Daedalus Enterprises.

  6. Prediction of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.

    PubMed

    Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim

    2015-08-01

    Prediction of hearing loss in noisy workplaces is considered an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for predicting the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. Personal noise exposure was also measured with a noise dosimeter at the workers' workstations. Finally, data on five variables that can influence hearing loss were used to develop the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer containing between 5 and 15 neurons. The best-performing network, with one hidden layer of ten neurons, accurately predicted the hearing loss threshold with RMSE = 2.6 dB and R2 = 0.89. The results also confirmed that neural networks provided more accurate predictions than multiple regression. Since occupational hearing loss is frequently incurable, accurate predictions can be used by occupational health experts to modify and improve noise exposure conditions.
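
    A hedged sketch of the same idea using scikit-learn rather than MATLAB: a feed-forward network with one hidden layer of ten neurons fitted to synthetic exposure data (the five real input variables are not listed in the abstract, so the features below are hypothetical stand-ins):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score

    rng = np.random.default_rng(3)
    n = 210
    # Hypothetical inputs: age (y), exposure duration (y), daily noise exposure (dB),
    # smoking status and use of hearing protection
    X = np.column_stack([
        rng.integers(20, 60, n),
        rng.integers(1, 30, n),
        rng.normal(88, 5, n),
        rng.integers(0, 2, n),
        rng.integers(0, 2, n),
    ])
    # Synthetic hearing-threshold values in dB, loosely increasing with exposure
    y = 0.3 * X[:, 0] + 0.6 * X[:, 1] + 0.8 * (X[:, 2] - 80) + rng.normal(0, 3, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    net.fit(X_tr, y_tr)
    pred = net.predict(X_te)
    print("RMSE (dB):", round(mean_squared_error(y_te, pred) ** 0.5, 2))
    print("R^2:", round(r2_score(y_te, pred), 2))
    ```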

  7. Diagnostic accuracy of spot urinary protein and albumin to creatinine ratios for detection of significant proteinuria or adverse pregnancy outcome in patients with suspected pre-eclampsia: systematic review and meta-analysis

    PubMed Central

    Morris, R K; Riley, R D; Doug, M; Deeks, J J

    2012-01-01

    Objective To determine the diagnostic accuracy of two “spot urine” tests for significant proteinuria or adverse pregnancy outcome in pregnant women with suspected pre-eclampsia. Design Systematic review and meta-analysis. Data sources Searches of electronic databases 1980 to January 2011, reference list checking, hand searching of journals, and contact with experts. Inclusion criteria Diagnostic studies, in pregnant women with hypertension, that compared the urinary spot protein to creatinine ratio or albumin to creatinine ratio with urinary protein excretion over 24 hours or adverse pregnancy outcome. Study characteristics, design, and methodological and reporting quality were objectively assessed. Data extraction Study results relating to diagnostic accuracy were extracted and synthesised using multivariate random effects meta-analysis methods. Results Twenty studies, testing 2978 women (pregnancies), were included. Thirteen studies examining protein to creatinine ratio for the detection of significant proteinuria were included in the multivariate analysis. Threshold values for protein to creatinine ratio ranged between 0.13 and 0.5, with estimates of sensitivity ranging from 0.65 to 0.89 and estimates of specificity from 0.63 to 0.87; the area under the summary receiver operating characteristics curve was 0.69. On average, across all studies, the optimum threshold (that optimises sensitivity and specificity combined) seems to be between 0.30 and 0.35 inclusive. However, no threshold gave a summary estimate above 80% for both sensitivity and specificity, and considerable heterogeneity existed in diagnostic accuracy across studies at most thresholds. No studies looked at protein to creatinine ratio and adverse pregnancy outcome. For albumin to creatinine ratio, meta-analysis was not possible. Results from a single study suggested that the most predictive result, for significant proteinuria, was with the DCA 2000 quantitative analyser (>2 mg/mmol) with a summary sensitivity of 0.94 (95% confidence interval 0.86 to 0.98) and a specificity of 0.94 (0.87 to 0.98). In a single study of adverse pregnancy outcome, results for perinatal death were a sensitivity of 0.82 (0.48 to 0.98) and a specificity of 0.59 (0.51 to 0.67). Conclusion The maternal “spot urine” estimate of protein to creatinine ratio shows promising diagnostic value for significant proteinuria in suspected pre-eclampsia. The existing evidence is not, however, sufficient to determine how protein to creatinine ratio should be used in clinical practice, owing to the heterogeneity in test accuracy and prevalence across studies. Insufficient evidence is available on the use of albumin to creatinine ratio in this area. Insufficient evidence exists for either test to predict adverse pregnancy outcome. PMID:22777026

  8. Dose-response relationships for the onset of avoidance of sonar by free-ranging killer whales.

    PubMed

    Miller, Patrick J O; Antunes, Ricardo N; Wensveen, Paul J; Samarra, Filipa I P; Alves, Ana Catarina; Tyack, Peter L; Kvadsheim, Petter H; Kleivane, Lars; Lam, Frans-Peter A; Ainslie, Michael A; Thomas, Len

    2014-02-01

    Eight experimentally controlled exposures to 1-2 kHz or 6-7 kHz sonar signals were conducted with four killer whale groups. The source level and proximity of the source were increased during each exposure in order to reveal response thresholds. Detailed inspection of movements during each exposure session revealed sustained changes in speed and travel direction judged to be avoidance responses during six of eight sessions. Following methods developed for Phase-I clinical trials in human medicine, response thresholds ranging from 94 to 164 dB re 1 μPa received sound pressure level (SPL) were fitted to Bayesian dose-response functions. Thresholds did not consistently differ by sonar frequency or whether a group had previously been exposed, with a mean SPL response threshold of 142 ± 15 dB (mean ± s.d.). High levels of between- and within-individual variability were identified, indicating that thresholds depended upon other undefined contextual variables. The dose-response functions indicate that some killer whales started to avoid sonar at received SPL below thresholds assumed by the U.S. Navy. The predicted extent of habitat over which avoidance reactions occur depends upon whether whales responded to proximity or received SPL of the sonar or both, but was large enough to raise concerns about biological consequences to the whales.

  9. Predicting Vasovagal Syncope from Heart Rate and Blood Pressure: A Prospective Study in 140 Subjects.

    PubMed

    Virag, Nathalie; Erickson, Mark; Taraborrelli, Patricia; Vetter, Rolf; Lim, Phang Boon; Sutton, Richard

    2018-04-28

    We developed a vasovagal syncope (VVS) prediction algorithm for use during head-up tilt, based on simultaneous analysis of heart rate (HR) and systolic blood pressure (SBP). We previously tested this algorithm retrospectively in 1155 subjects, showing 95% sensitivity, 93% specificity and a median prediction time of 59 s. This prospective, single-center study of 140 subjects evaluated the VVS prediction algorithm and assessed whether the retrospective results were reproducible and clinically relevant. The primary endpoint was VVS prediction with sensitivity and specificity >80%. In subjects referred for 60° head-up tilt (Italian protocol), non-invasive HR and SBP were supplied to the VVS prediction algorithm: simultaneous analysis of RR intervals, SBP trends and their variability, represented by low-frequency power, generated a cumulative risk that was compared with a predetermined VVS risk threshold. When the cumulative risk exceeded the threshold, an alert was generated. Prediction time was the duration between the first alert and syncope. Of 140 subjects enrolled, data were usable for 134. Of 83 tilt-positive subjects (61.9%), 81 VVS events were correctly predicted, and of 51 tilt-negative subjects (38.1%), 45 were correctly identified as negative by the algorithm. The resulting algorithm performance was 97.6% sensitivity and 88.2% specificity, meeting the primary endpoint. Mean VVS prediction time was 2 min 26 s ± 3 min 16 s, with a median of 1 min 25 s. Using only HR and HR variability (without SBP), the mean prediction time decreased to 1 min 34 s ± 1 min 45 s, with a median of 1 min 13 s. The VVS prediction algorithm is a clinically relevant tool and could offer applications including a patient alarm, shortening of tilt-test time, or triggering of pacing intervention in implantable devices. Copyright © 2018. Published by Elsevier Inc.

  10. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  11. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
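
    A minimal sketch of the development/validation workflow described in the two records above, using synthetic HRT and HRLT values and a 70/30 split; the regression coefficients and the resulting fit statistics are illustrative only:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n = 220
    hrt = rng.normal(150, 12, n)                    # heart rate threshold (bpm), synthetic
    hrlt = 0.9 * hrt + 15 + rng.normal(0, 10, n)    # heart rate at lactate threshold, synthetic

    # 70/30 split mirroring the paper's randomization into development and validation sets
    idx = rng.permutation(n)
    dev, val = idx[:154], idx[154:]
    model = LinearRegression().fit(hrt[dev].reshape(-1, 1), hrlt[dev])

    pred = model.predict(hrt[val].reshape(-1, 1))
    see = np.sqrt(np.sum((hrlt[val] - pred) ** 2) / (len(val) - 2))  # standard error of estimate
    print("R^2 on development data:", round(model.score(hrt[dev].reshape(-1, 1), hrlt[dev]), 2))
    print("SEE on validation data (bpm):", round(see, 1))
    ```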

  12. Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations

    PubMed Central

    Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi

    2014-01-01

    A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610

  13. Coupled soil respiration and transpiration dynamics from tree-scale to catchment scale in dry Rocky Mountain pine forests and the role of snowpack

    NASA Astrophysics Data System (ADS)

    Berryman, E.; Barnard, H. R.; Brooks, P. D.; Adams, H.; Burns, M. A.; Wilson, W.; Stielstra, C. M.

    2013-12-01

    A current ecohydrological challenge is quantifying the exact nature of carbon (C) and water couplings across landscapes. An emerging framework of understanding places plant physiological processes as a central control over soil respiration, the largest source of CO2 to the atmosphere. In dry montane forests, spatial and temporal variability in forest physiological processes are governed by hydrological patterns. Critical feedbacks involving respiration, moisture supply and tree physiology are poorly understood and must be quantified at the landscape level to better predict carbon cycle implications of regional drought under future climate change. We present data from an experiment designed to capture landscape variability in key coupled hydrological and C processes in forests of Colorado's Front Range. Sites encompass three catchments within the Boulder Creek watershed, range from 1480 m to 3021 m above sea level and are co-located with the DOE Niwot Ridge Ameriflux site and the Boulder Creek Critical Zone Observatory. Key hydrological measurements (soil moisture, transpiration) are coupled with soil respiration measurements within each catchment at different landscape positions. This three-dimensional study design also allows for the examination of the role of water subsidies from uplands to lowlands in controlling respiration. Initial findings from 2012 reveal a moisture threshold response of the sensitivity of soil respiration to temperature. This threshold may derive from tree physiological responses to variation in moisture availability, which in turn is controlled by the persistence of snowpack. Using data collected in 2013, first, we determine whether respiration moisture thresholds represent triggers for transpiration at the individual tree level. Next, using stable isotope ratios of soil respiration and xylem and soil water, we compare the depths of respiration to depths of water uptake to assign tree vs. understory sources of respiration. This will help determine whether tree root-zone respiration exhibits a similar moisture threshold. Lastly, we examine whether moisture thresholds to temperature sensitivity are consistent across a range of snowpack persistence. Findings are compared to data collected from sites in Arizona and New Mexico to better establish the role of winter precipitation in governing growing season respiration rates. The outcome of this study will contribute to a better understanding of linkages among water, tree physiology, and soil respiration with the ultimate goal of scaling plot-level respiration fluxes to entire catchments.

  14. Monitoring and modeling to predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2006-01-01

    The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedence probabilities helped determine a threshold probability for each model, chosen such that the correct number of exceedences and nonexceedences was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedence probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard and a beach advisory or closing may need to be issued; computed exceedence probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.

  15. Predicting emergency department volume using forecasting methods to create a "surge response" for noncrisis events.

    PubMed

    Chase, Valerie J; Cohn, Amy E M; Peterson, Timothy A; Lavieri, Mariel S

    2012-05-01

    This study investigated whether emergency department (ED) variables could be used in mathematical models to predict a future surge in ED volume based on recent levels of use of physician capacity. The models may be used to guide decisions related to on-call staffing in non-crisis-related surges of patient volume. A retrospective analysis was conducted using information spanning July 2009 through June 2010 from a large urban teaching hospital with a Level I trauma center. A comparison of significance was used to assess the impact of multiple patient-specific variables on the state of the ED. Physician capacity was modeled based on historical physician treatment capacity and productivity. Binary logistic regression analysis was used to determine the probability that the available physician capacity would be sufficient to treat all patients forecasted to arrive in the next time period. The prediction horizons used were 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 8 hours, and 12 hours. Five consecutive months of patient data from July 2010 through November 2010, similar to the data used to generate the models, was used to validate the models. Positive predictive values, Type I and Type II errors, and real-time accuracy in predicting noncrisis surge events were used to evaluate the forecast accuracy of the models. The ratio of new patients requiring treatment over total physician capacity (termed the care utilization ratio [CUR]) was deemed a robust predictor of the state of the ED (with a CUR greater than 1 indicating that the physician capacity would not be sufficient to treat all patients forecasted to arrive). Prediction intervals of 30 minutes, 8 hours, and 12 hours performed best of all models analyzed, with deviances of 1.000, 0.951, and 0.864, respectively. A 95% significance was used to validate the models against the July 2010 through November 2010 data set. Positive predictive values ranged from 0.738 to 0.872, true positives ranged from 74% to 94%, and true negatives ranged from 70% to 90% depending on the threshold used to determine the state of the ED with the 30-minute prediction model. The CUR is a new and robust indicator of an ED system's performance. The study was able to model the tradeoff of longer time to response versus shorter but more accurate predictions, by investigating different prediction intervals. Current practice would have been improved by using the proposed models and would have identified the surge in patient volume earlier on noncrisis days. © 2012 by the Society for Academic Emergency Medicine.
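
    A schematic illustration of the care utilization ratio (CUR) and a logistic model of surge probability; the arrival and capacity series are synthetic, and the single-predictor model is only a stand-in for the paper's multivariable forecasting models:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 2000  # historical time periods (synthetic)

    arrivals = rng.poisson(6, n)                  # new patients forecast for the next period
    capacity = rng.normal(7, 1.5, n).clip(1)      # physician treatment capacity for that period
    cur = arrivals / capacity                     # care utilization ratio (CUR)

    # "Surge" label: capacity insufficient for the forecasted arrivals (CUR > 1)
    surge = (cur > 1.0).astype(int)

    # Predict the next period's surge from utilization in the preceding period
    recent_cur = np.roll(cur, 1)
    model = LogisticRegression().fit(recent_cur[1:].reshape(-1, 1), surge[1:])
    p = model.predict_proba([[1.1]])[0, 1]
    print(f"predicted surge probability when recent CUR is 1.1: {p:.2f}")
    ```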

  16. Predictive Models of the Hydrological Regime of Unregulated Streams in Arizona

    USGS Publications Warehouse

    Anning, David W.; Parker, John T.C.

    2009-01-01

    Three statistical models were developed by the U.S. Geological Survey in cooperation with the Arizona Department of Environmental Quality to improve the predictability of flow occurrence in unregulated streams throughout Arizona. The models can be used to predict the probabilities of the hydrological regime being one of four categories developed by this investigation: perennial, which has streamflow year-round; nearly perennial, which has streamflow 90 to 99.9 percent of the year; weakly perennial, which has streamflow 80 to 90 percent of the year; or nonperennial, which has streamflow less than 80 percent of the year. The models were developed to assist the Arizona Department of Environmental Quality in selecting sites for participation in the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program. One model was developed for each of the three hydrologic provinces in Arizona - the Plateau Uplands, the Central Highlands, and the Basin and Range Lowlands. The models for predicting the hydrological regime were calibrated using statistical methods and explanatory variables of discharge, drainage-area, altitude, and location data for selected U.S. Geological Survey streamflow-gaging stations and a climate index derived from annual precipitation data. Models were calibrated on the basis of streamflow data from 46 stations for the Plateau Uplands province, 82 stations for the Central Highlands province, and 90 stations for the Basin and Range Lowlands province. The models were developed using classification trees that facilitated the analysis of mixed numeric and factor variables. In all three models, a threshold stream discharge was the initial variable to be considered within the classification tree and was the single most important explanatory variable. If a stream discharge value at a station was below the threshold, then the station record was determined as being nonperennial. If, however, the stream discharge was above the threshold, subsequent decisions were made according to the classification tree and explanatory variables to determine the hydrological regime of the reach as being perennial, nearly perennial, weakly perennial, or nonperennial. Using model calibration data, misclassification rates for each model were 17 percent for the Plateau Uplands, 15 percent for the Central Highlands, and 14 percent for the Basin and Range Lowlands models. The actual misclassification rate may be higher; however, the model has not been field verified for a full error assessment. The calibrated models were used to classify stream reaches for which the Arizona Department of Environmental Quality had collected miscellaneous discharge measurements. A total of 5,080 measurements at 696 sites were routed through the appropriate classification tree to predict the hydrological regime of the reaches in which the measurements were made. The predictions resulted in classification of all stream reaches as perennial or nonperennial; no reaches were predicted as nearly perennial or weakly perennial. The percentages of sites predicted as being perennial and nonperennial, respectively, were 77 and 23 for the Plateau Uplands, 87 and 13 for the Central Highlands, and 76 and 24 for the Basin and Range Lowlands.
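
    A toy classification-tree sketch in the spirit of the models described, with a threshold-type discharge variable dominating the first split; the station attributes, labels and splitting rule are synthetic, and only two of the four hydrological-regime categories are represented:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(6)
    n = 90
    # Hypothetical station attributes: discharge (cfs), drainage area (mi^2),
    # altitude (ft) and a climate index derived from annual precipitation
    discharge = rng.lognormal(1.5, 1.0, n)
    drainage = rng.lognormal(3.0, 1.0, n)
    altitude = rng.normal(5000, 1500, n)
    climate = rng.normal(0, 1, n)
    X = np.column_stack([discharge, drainage, altitude, climate])
    y = np.where(discharge > 3.0, "perennial", "nonperennial")  # synthetic rule for illustration

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["discharge", "drainage_area", "altitude", "climate_index"]))
    ```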

  17. Development and internal validation of a side-specific, multiparametric magnetic resonance imaging-based nomogram for the prediction of extracapsular extension of prostate cancer.

    PubMed

    Martini, Alberto; Gupta, Akriti; Lewis, Sara C; Cumarasamy, Shivaram; Haines, Kenneth G; Briganti, Alberto; Montorsi, Francesco; Tewari, Ashutosh K

    2018-04-19

    To develop a nomogram for predicting side-specific extracapsular extension (ECE) for planning nerve-sparing radical prostatectomy. We retrospectively analysed data from 561 patients who underwent robot-assisted radical prostatectomy between February 2014 and October 2015. To develop a side-specific predictive model, we considered the prostatic lobes separately. Four variables were included: prostate-specific antigen; highest ipsilateral biopsy Gleason grade; highest ipsilateral percentage core involvement; and ECE on multiparametric magnetic resonance imaging (mpMRI). A multivariable logistic regression analysis was fitted to predict side-specific ECE. A nomogram was built based on the coefficients of the logit function. Internal validation was performed using 'leave-one-out' cross-validation. Calibration was graphically investigated. The decision curve analysis was used to evaluate the net clinical benefit. The study population consisted of 829 side-specific cases, after excluding negative biopsy observations (n = 293). ECE was reported on mpMRI and final pathology in 115 (14%) and 142 (17.1%) cases, respectively. Among these, mpMRI was able to predict ECE correctly in 57 (40.1%) cases. All variables in the model except highest percentage core involvement were predictors of ECE (all P ≤ 0.006). All variables were considered for inclusion in the nomogram. After internal validation, the area under the curve was 82.11%. The model demonstrated excellent calibration and improved clinical risk prediction, especially when compared with relying on mpMRI prediction of ECE alone. When retrospectively applying the nomogram-derived probability, using a 20% threshold for performing nerve-sparing, nine out of 14 positive surgical margins (PSMs) at the site of ECE resulted above the threshold. We developed an easy-to-use model for the prediction of side-specific ECE, and hope it serves as a tool for planning nerve-sparing radical prostatectomy and in the reduction of PSM in future series. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
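
    A hedged sketch of side-specific ECE prediction with the four predictors named above, leave-one-out cross-validation and a 20% decision threshold; the data-generating assumptions and coefficients are invented for illustration and do not reproduce the published nomogram:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n = 400  # side-specific observations (synthetic stand-ins)

    psa = rng.lognormal(1.8, 0.5, n)                    # ng/mL
    gleason = rng.integers(1, 6, n)                     # highest ipsilateral grade group
    core_pct = rng.uniform(5, 100, n)                   # highest % core involvement
    mri_ece = rng.integers(0, 2, n)                     # ECE seen on mpMRI (0/1)
    logit = -6 + 0.15 * psa + 0.6 * gleason + 0.02 * core_pct + 1.2 * mri_ece
    ece = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = np.column_stack([psa, gleason, core_pct, mri_ece])
    model = LogisticRegression(max_iter=1000)

    # Leave-one-out cross-validated probabilities, then AUC and a 20% decision threshold
    proba = cross_val_predict(model, X, ece, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("LOOCV AUC:", round(roc_auc_score(ece, proba), 3))
    print("cases above the 20% nerve-sparing threshold:", int((proba >= 0.20).sum()))
    ```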

  18. Dynamics of chromatic visual system processing differ in complexity between children and adults.

    PubMed

    Boon, Mei Ying; Suttle, Catherine M; Henry, Bruce I; Dain, Stephen J

    2009-06-30

    Measures of chromatic contrast sensitivity in children are lower than those of adults. This may be related to immaturities in signal processing at or near threshold. We have found that children's VEPs in response to low contrast supra-threshold chromatic stimuli are more intra-individually variable than those recorded from adults. Here, we report on linear and nonlinear analyses of chromatic VEPs recorded from children and adults. Two measures of signal-to-noise ratio are similar between the adults and children, suggesting that relatively high noise is unlikely to account for the poor clarity of negative and positive peak components in the children's VEPs. Nonlinear analysis indicates higher complexity of adults' than children's chromatic VEPs, at levels of chromatic contrast around and well above threshold.

  19. Can the biomass-ratio hypothesis predict mixed-species litter decomposition along a climatic gradient?

    PubMed Central

    Tardif, Antoine; Shipley, Bill; Bloor, Juliette M. G.; Soussana, Jean-François

    2014-01-01

    Background and Aims The biomass-ratio hypothesis states that ecosystem properties are driven by the characteristics of dominant species in the community. In this study, the hypothesis was operationalized as community-weighted means (CWMs) of monoculture values and tested for predicting the decomposition of multispecies litter mixtures along an abiotic gradient in the field. Methods Decomposition rates (mg g−1 d−1) of litter from four herb species were measured using litter-bed experiments with the same soil at three sites in central France along a correlated climatic gradient of temperature and precipitation. All possible combinations from one to four species mixtures were tested over 28 weeks of incubation. Observed mixture decomposition rates were compared with those predicted by the biomass-ratio hypothesis. Variability of the prediction errors was compared with the species richness of the mixtures, across sites, and within sites over time. Key Results Both positive and negative prediction errors occurred. Despite this, the biomass-ratio hypothesis was true as an average claim for all sites (r = 0·91) and for each site separately, except for the climatically intermediate site, which showed mainly synergistic deviations. Variability decreased with increasing species richness and in less favourable climatic conditions for decomposition. Conclusions Community-weighted mean values provided good predictions of mixed-species litter decomposition, converging to the predicted values with increasing species richness and in climates less favourable to decomposition. Under a context of climate change, abiotic variability would be important to take into account when predicting ecosystem processes. PMID:24482152
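
    Operationalizing the biomass-ratio hypothesis as a community-weighted mean is a one-line calculation; a small sketch with hypothetical monoculture rates and an equal-mass four-species mixture:

    ```python
    import numpy as np

    def cwm_decomposition(mass_fractions, monoculture_rates):
        """Biomass-ratio (community-weighted mean) prediction of mixture
        decomposition: monoculture rates weighted by initial mass fractions."""
        w = np.asarray(mass_fractions, float)
        k = np.asarray(monoculture_rates, float)
        return float(np.dot(w / w.sum(), k))

    # Monoculture decomposition rates for four herb species (mg g^-1 d^-1, illustrative)
    k_mono = [2.1, 3.4, 1.8, 2.7]
    # Equal-mass four-species mixture
    predicted = cwm_decomposition([1, 1, 1, 1], k_mono)
    observed = 2.9  # hypothetical measured mixture rate
    print(f"CWM prediction: {predicted:.2f}, observed: {observed:.2f}, "
          f"deviation: {observed - predicted:+.2f}")
    ```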

  20. Development of Thresholds and Exceedance Probabilities for Influent Water Quality to Meet Drinking Water Regulations

    NASA Astrophysics Data System (ADS)

    Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.

    2017-12-01

    Drinking water treatment utilities (DWTU) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC), and physical removal of pathogen microorganisms are achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment specific data (i.e. streamflow) and have difficulties predicting them under non-stationary climate. A modelling framework was developed to assist DWTUs with assessing their risk for future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that only requires water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted using widely available data via statistical methods, where temperature, precipitation, Palmer Drought Severity Index, and NDVI with various lags were shown to be important predictors of TOC, and TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
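
    A minimal peaks-over-threshold sketch of the extreme-value step, assuming a daily influent TOC series and a fixed treatability threshold; the data, threshold and exceedance level below are illustrative, not utility values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    # Synthetic daily influent TOC record (mg/L) standing in for utility data
    toc = rng.lognormal(mean=1.2, sigma=0.35, size=3000)

    threshold = 6.0  # mg/L, a treatability threshold of the kind derived from the WTP model
    exceedances = toc[toc > threshold] - threshold

    # Peaks-over-threshold: fit a generalized Pareto distribution to the excesses
    shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

    # Probability that a day exceeds some higher level, e.g. 8 mg/L
    rate = len(exceedances) / len(toc)                       # P(TOC > threshold)
    p_tail = stats.genpareto.sf(8.0 - threshold, shape, loc=loc, scale=scale)
    print("P(TOC > 8 mg/L) on a given day:", round(rate * p_tail, 4))
    ```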

  1. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
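
    A compact sketch of the event metrics described above (frequency, duration, area and magnitude of threshold exceedances) applied to a synthetic hourly temperature series, using the same 16-22 °C thresholds:

    ```python
    import numpy as np

    def event_metrics(temps_c, threshold_c):
        """Summarize exceedance events in an hourly temperature series:
        count, total duration (h), total area (degree-hours) and peak magnitude."""
        temps = np.asarray(temps_c, float)
        above = temps > threshold_c
        # Find contiguous runs of hours above the threshold
        edges = np.diff(above.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, len(temps)]
        excess = np.where(above, temps - threshold_c, 0.0)
        return {
            "frequency": len(starts),
            "duration_h": int(above.sum()),
            "area_degree_hours": float(excess.sum()),
            "max_magnitude": float(excess.max()) if above.any() else 0.0,
        }

    rng = np.random.default_rng(9)
    hours = np.arange(24 * 30)                                   # one synthetic summer month
    temps = 17 + 4 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
    for thr in (16, 18, 20, 22):
        print(thr, event_metrics(temps, thr))
    ```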

  2. Novel pre-therapeutic scoring system using patient and haematological data to predict facial palsy prognosis.

    PubMed

    Wasano, K; Ishikawa, T; Kawasaki, T; Yamamoto, S; Tomisato, S; Shinden, S; Minami, S; Wakabayashi, T; Ogawa, K

    2017-12-01

    We describe a novel scoring system, the facial Palsy Prognosis Prediction score (PPP score), which we test for reliability in predicting pre-therapeutic prognosis of facial palsy. We aimed to use readily available patient data that all clinicians have access to before starting treatment. Multicenter case series with chart review. Three tertiary care hospitals. We obtained haematological and demographic data from 468 facial palsy patients who were treated between 2010 and 2014 in three tertiary care hospitals. Patients were categorised as having Bell's palsy or Ramsay Hunt's palsy. We compared the data of recovered and unrecovered patients. PPP scores consisted of combinatorial threshold values of continuous patient data (eg platelet count) and categorical variables (eg gender) that best predicted recovery. We created separate PPP scores for Bell's palsy patients (PPP-B) and for Ramsay Hunt's palsy patients (PPP-H). The PPP-B score included age (≥65 years), gender (male) and neutrophil-to-lymphocyte ratio (≥2.9). The PPP-H score included age (≥50 years), monocyte rate (≥6.0%), mean corpuscular volume (≥95 fl) and platelet count (≤200 000 /μL). Patient recovery rate significantly decreased with increasing PPP scores (both PPP-B and PPP-H) in a step-wise manner. PPP scores (ie PPP-B score and PPP-H score) ≥2 were associated with worse than average prognosis. Palsy Prognosis Prediction scores are useful for predicting prognosis of facial palsy before beginning treatment. © 2017 John Wiley & Sons Ltd.
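
    The scores reduce to counting how many thresholds a patient meets; a direct transcription of the published cut-offs (example inputs are hypothetical, and clinical use of this sketch is not implied):

    ```python
    def ppp_b_score(age_years, male, neutrophil_lymphocyte_ratio):
        """PPP-B score for Bell's palsy: one point per threshold met
        (age >= 65 years, male sex, neutrophil-to-lymphocyte ratio >= 2.9)."""
        return int(age_years >= 65) + int(male) + int(neutrophil_lymphocyte_ratio >= 2.9)

    def ppp_h_score(age_years, monocyte_pct, mcv_fl, platelets_per_ul):
        """PPP-H score for Ramsay Hunt's palsy: age >= 50 years, monocyte rate >= 6.0%,
        MCV >= 95 fL, platelet count <= 200,000 /uL."""
        return (int(age_years >= 50) + int(monocyte_pct >= 6.0)
                + int(mcv_fl >= 95) + int(platelets_per_ul <= 200_000))

    # Scores of 2 or more were associated with worse-than-average prognosis
    print(ppp_b_score(70, male=True, neutrophil_lymphocyte_ratio=2.1))                  # 2
    print(ppp_h_score(45, monocyte_pct=6.5, mcv_fl=96, platelets_per_ul=230_000))       # 2
    ```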

  3. Prediction of Antibacterial Activity from Physicochemical Properties of Antimicrobial Peptides

    PubMed Central

    Melo, Manuel N.; Ferre, Rafael; Feliu, Lídia; Bardají, Eduard; Planas, Marta; Castanho, Miguel A. R. B.

    2011-01-01

    Consensus is gathering that antimicrobial peptides that exert their antibacterial action at the membrane level must reach a local concentration threshold to become active. Studies of peptide interaction with model membranes do identify such disruptive thresholds but demonstrations of the possible correlation of these with the in vivo onset of activity have only recently been proposed. In addition, such thresholds observed in model membranes occur at local peptide concentrations close to full membrane coverage. In this work we fully develop an interaction model of antimicrobial peptides with biological membranes; by exploring the consequences of the underlying partition formalism we arrive at a relationship that provides antibacterial activity prediction from two biophysical parameters: the affinity of the peptide to the membrane and the critical bound peptide to lipid ratio. A straightforward and robust method to implement this relationship, with potential application to high-throughput screening approaches, is presented and tested. In addition, disruptive thresholds in model membranes and the onset of antibacterial peptide activity are shown to occur over the same range of locally bound peptide concentrations (10 to 100 mM), which conciliates the two types of observations. PMID:22194847

  4. Behavior of motor units in human biceps brachii during a submaximal fatiguing contraction.

    PubMed

    Garland, S J; Enoka, R M; Serrano, L P; Robinson, G A

    1994-06-01

    The activity of 50 single motor units was recorded in the biceps brachii muscle of human subjects while they performed submaximal isometric elbow flexion contractions that were sustained to induce fatigue. The purposes of this study were to examine the influence of fatigue on motor unit threshold force and to determine the influence of the threshold force of recruitment and the initial interimpulse interval on the discharge rates of single motor units during a fatiguing contraction. The discharge rate of most motor units that were active from the beginning of the contraction declined during the fatiguing contraction, whereas the discharge rates of most newly recruited units were either constant or increased slightly. The absolute threshold forces of recruitment and derecruitment decreased, and the variability of interimpulse intervals increased after the fatigue task. The change in motor unit discharge rate during the fatigue task was related to the initial rate, but the direction of the change in discharge rate could not be predicted from the threshold force of recruitment or the variability in the interimpulse intervals. The discharge rate of most motor units declined despite an increase in the excitatory drive to the motoneuron pool during the fatigue task.

  5. The Patient Protection and Affordable Care Act's provisions regarding medical loss ratios and quality: evidence from Texas.

    PubMed

    Quast, Troy

    2013-01-01

    The Patient Protection and Affordable Care Act (PPACA) includes a provision that penalizes insurance companies if their Medical Loss Ratio (MLR) falls below a specified threshold. The MLR is roughly measured as the ratio of health care expenses to premiums paid by enrollees. I investigate whether there is a relationship between MLRs and the quality of care provided by insurance companies. I employ a ten-year sample of market-level financial data and quality variables for Texas insurers, as well as relevant control variables, in regression analyses that utilize insurer and market fixed effects. Of the 15 quality measures, only one has a statistically significant relationship with the MLR. For this measure, the relationship is negative. Although the MLR provision may provide incentives for insurance companies to lower premiums, this sample does not suggest that there is likely to be a beneficial effect on quality.

  6. Effects of urbanization on benthic macroinvertebrate communities in streams, Anchorage, Alaska

    USGS Publications Warehouse

    Ourso, Robert T.

    2001-01-01

    The effect of urbanization on stream macroinvertebrate communities was examined by using data gathered during a 1999 reconnaissance of 14 sites in the Municipality of Anchorage, Alaska. Data collected included macroinvertebrate abundance, water chemistry, and trace elements in bed sediments. Macroinvertebrate relative-abundance data were edited and used in metric and index calculations. Population density was used as a surrogate for urbanization. Cluster analysis (unweighted pair-group method using arithmetic averages, UPGMA) of macroinvertebrate presence-absence data showed a well-defined separation between urbanized and nonurbanized sites, and also identified sites that did not cleanly fall into either category. Water quality in Anchorage generally declined with increasing urbanization (population density). Of 59 variables examined, 31 correlated with urbanization. Local regression analysis extracted 11 variables that showed a significant impairment threshold response and 6 that showed a significant linear response. Significant biological variables for determining the impairment threshold in this study were the Margalef diversity index, Ephemeroptera-Plecoptera-Trichoptera taxa richness, and total taxa richness. Significant thresholds were observed in the water-chemistry variables conductivity, dissolved organic carbon, potassium, and total dissolved solids. Significant thresholds in trace elements in bed sediments included arsenic, iron, manganese, and lead. Results suggest that sites in Anchorage that have ratios of population density to road density greater than 70, storm-drain densities greater than 0.45 miles per square mile, road densities greater than 4 miles per square mile, or population densities greater than 125-150 persons per square mile may require further monitoring to determine if the stream has become impaired. This population density is far less than the 1,000 persons per square mile used by the U.S. Census Bureau to define an urban area.

  7. Impact of rainfall spatial variability on Flash Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Douinot, Audrey; Roux, Hélène; Garambois, Pierre-André; Larnier, Kevin

    2014-05-01

    According to the United States National Hazard Statistics database, flooding and flash flooding have caused the largest number of deaths of any weather-related phenomenon over the last 30 years (Flash Flood Guidance Improvement Team, 2003). Like the storms that cause them, flash floods are highly variable and non-linear phenomena in time and space, with the result that understanding and anticipating flash flood genesis is far from straightforward. In the U.S., the Flash Flood Guidance (FFG) estimates the average number of inches of rainfall for given durations required to produce flash flooding in the indicated county. In Europe, flash floods often occur on small catchments (approximately 100 km2), and it has been shown that the spatial variability of rainfall has a great impact on the catchment response (Le Lay and Saulnier, 2007). Therefore, in this study, based on the Flash Flood Guidance method, rainfall spatial variability information is introduced into the threshold estimation. As for FFG, the threshold is the number of millimeters of rainfall required to produce a discharge higher than the discharge corresponding to the first-level (yellow) warning of the French flood warning service (SCHAPI: Service Central d'Hydrométéorologie et d'Appui à la Prévision des Inondations). The indexes δ1 and δ2 of Zoccatelli et al. (2010), based on the spatial moments of catchment rainfall, are used to characterize the rainfall spatial distribution. The impacts of rainfall spatial variability on the warning threshold and on hydrological processes are then studied. The spatially distributed hydrological model MARINE (Roux et al., 2011), dedicated to flash flood prediction, is forced with synthetic rainfall patterns of different spatial distributions. This allows the determination of a warning threshold diagram: knowing the spatial distribution of the rainfall forecast, and therefore the two indexes δ1 and δ2, the threshold value can be read from the diagram. A warning threshold diagram is built for each studied catchment. The proposed methodology is applied to three Mediterranean catchments frequently subject to flash floods. The new forecasting method, as well as the Flash Flood Guidance method (uniform rainfall threshold), is tested on 25 flash flood events that occurred on those catchments. Results show a significant impact of rainfall spatial variability. Indeed, it appears that the uniform rainfall threshold (FFG threshold) always overestimates the observed rainfall threshold. The difference between the FFG threshold and the proposed threshold ranges from 8% to 30%. The proposed methodology allows the calculation of a threshold more representative of the observed one. However, results strongly depend on the related event duration and on the catchment properties. For instance, the impact of rainfall spatial variability seems to be correlated with catchment size. According to these results, it appears worthwhile to introduce information on catchment properties into the threshold calculation.
    References: Flash Flood Guidance Improvement Team, 2003. River Forecast Center (RFC) Development Management Team, Final Report. Office of Hydrologic Development (OHD), Silver Spring, Maryland. Le Lay, M. and Saulnier, G.-M., 2007. Exploring the signature of climate and landscape spatial variabilities in flash flood events: Case of the 8-9 September 2002 Cévennes-Vivarais catastrophic event. Geophysical Research Letters, 34(L13401), doi:10.1029/2007GL029746. Roux, H., Labat, D., Garambois, P.-A., Maubourguet, M.-M., Chorda, J. and Dartus, D., 2011. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments. Nat. Hazards Earth Syst. Sci., 11(9), 2567-2582. Zoccatelli, D., Borga, M., Zanon, F., Antonescu, B. and Stancalie, G., 2010. Which rainfall spatial information for flash flood response modelling? A numerical investigation based on data from the Carpathian range, Romania. Journal of Hydrology, 394(1-2), 148-161.
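    To illustrate how such rainfall-organisation indexes can be derived from gridded data, the sketch below computes δ1 and δ2 from a rainfall field and a flow-distance map. It assumes the commonly cited definitions (δ1 as the ratio of the rainfall-weighted mean flow distance to the catchment mean flow distance, δ2 as the corresponding ratio of variances); the function and variable names are illustrative, and the exact formulation used in the study should be taken from Zoccatelli et al. (2010).

```python
import numpy as np

def delta_indexes(rain: np.ndarray, flow_distance: np.ndarray):
    """Spatial rainfall-organisation indexes over a catchment grid.

    rain          : rainfall depth per cell (same shape as flow_distance)
    flow_distance : flow distance from each cell to the catchment outlet

    Assumed definitions (after Zoccatelli et al., 2010):
      delta1 = rainfall-weighted mean flow distance / catchment mean flow distance
      delta2 = rainfall-weighted variance of flow distance / catchment variance
    delta1 < 1 (> 1) indicates rainfall concentrated near (far from) the outlet;
    delta1 = delta2 = 1 corresponds to spatially uniform rainfall.
    """
    r = rain.ravel().astype(float)
    d = flow_distance.ravel().astype(float)
    g1, g2 = d.mean(), (d ** 2).mean()          # catchment moments
    m1 = np.average(d, weights=r)               # rainfall-weighted mean distance
    m2 = np.average(d ** 2, weights=r)          # rainfall-weighted second moment
    delta1 = m1 / g1
    delta2 = (m2 - m1 ** 2) / (g2 - g1 ** 2)
    return delta1, delta2

# Uniform rainfall over a toy 1-D "catchment" gives delta1 = delta2 = 1.
d = np.linspace(0.0, 10.0, 101)
print(delta_indexes(np.ones_like(d), d))
```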

  8. Simplified 4-item criteria for polycystic ovary syndrome: A bridge too far?

    PubMed

    Indran, Inthrani R; Huang, Zhongwei; Khin, Lay Wai; Chan, Jerry K Y; Viardot-Foucault, Veronique; Yong, Eu Leong

    2018-05-30

    Although the Rotterdam 2003 polycystic ovarian syndrome (PCOS) diagnostic criteria are widely used, the need to consider multiple variables makes them unwieldy in clinical practice. We propose simplified PCOS criteria wherein a diagnosis is made if two of the following three items are present: (i) oligomenorrhoea, (ii) anti-Müllerian hormone (AMH) above threshold and/or (iii) hyperandrogenism, defined as either testosterone above threshold and/or the presence of hirsutism. This prospective cross-sectional study consists of healthy women (n = 157) recruited at an annual hospital health screen for staff and volunteers from the university community, and a patient cohort (n = 174) comprising women referred for suspected PCOS. We used the healthy cohort to establish threshold values for serum testosterone, antral follicle counts (AFC), ovarian volume (OV) and AMH. Women from the patient cohort, classified as PCOS by the simplified PCOS criteria, AMH alone and Rotterdam 2003, were compared with respect to prevalence of oligomenorrhoea, hyperandrogenism and metabolic indices. In healthy women, testosterone ≥1.89 nmol/L, AFC ≥22 follicles and OV ≥8.44 mL best predicted oligomenorrhoea and were used as threshold values for the PCOS criteria. An AMH level ≥37.0 pmol/L best predicted polycystic ovarian morphology. AMH alone as a single biomarker demonstrated poor specificity (58.9%) for PCOS compared to Rotterdam 2003. In contrast, there was a 94% overlap in women selected as PCOS by the simplified PCOS criteria and Rotterdam 2003. The population characteristics of these two groups of PCOS women showed no significant mean differences in androgenic, ovarian, AMH and metabolic (BMI, HOMA-IR) variables. Our data support the simplified PCOS criteria with population-specific thresholds for the diagnosis of PCOS. Their ability to replace ovarian ultrasound biometry with the highly correlated variable AMH, and the use of testosterone as a single marker for hyperandrogenaemia alongside the key symptoms of oligomenorrhoea and hirsutism, confer significant clinical potential for the diagnosis of PCOS. © 2018 John Wiley & Sons Ltd.
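    The two-of-three rule with the thresholds reported above reduces to a short decision function. The sketch below is an illustration of that logic only, not a clinical tool; the function name and argument names are invented for the example.

```python
def simplified_pcos(oligomenorrhoea: bool,
                    amh_pmol_l: float,
                    testosterone_nmol_l: float,
                    hirsutism: bool) -> bool:
    """Simplified PCOS criteria: diagnosis if at least two of three items
    are present, using the population-specific thresholds reported above
    (AMH >= 37.0 pmol/L; testosterone >= 1.89 nmol/L)."""
    items = [
        oligomenorrhoea,
        amh_pmol_l >= 37.0,
        testosterone_nmol_l >= 1.89 or hirsutism,  # hyperandrogenism item
    ]
    return sum(items) >= 2

print(simplified_pcos(True, 40.0, 1.2, False))  # True: oligomenorrhoea + high AMH
```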

  9. Variable-Threshold Threshold Elements,

    DTIC Science & Technology

    A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of ... threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied, variable-threshold threshold ... elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the

  10. Emergency department blood transfusion: the first two units are free.

    PubMed

    Ley, Eric J; Liou, Douglas Z; Singer, Matthew B; Mirocha, James; Melo, Nicolas; Chung, Rex; Bukur, Marko; Salim, Ali

    2013-09-01

    Studies on blood product transfusions after trauma recommend targeting specific ratios to reduce mortality. Although crystalloid volumes as little as 1.5 L predict increased mortality after trauma, little data is available regarding the threshold of red blood cell (RBC) transfusion volume that predicts increased mortality. Data from a level I trauma center between January 2000 and December 2008 were reviewed. Trauma patients who received at least 100 mL RBC in the emergency department (ED) were included. Each unit of RBC was defined as 300 mL. Demographics, RBC transfusion volume, and mortality were analyzed in the nonelderly (<70 y) and elderly (≥70 y). Multivariate logistic regression was performed at various volume cutoffs to determine whether there was a threshold transfusion volume that independently predicted mortality. A total of 560 patients received ≥100 mL RBC in the ED. Overall mortality was 24.3%, with 22.5% (104 deaths) in the nonelderly and 32.7% (32 deaths) in the elderly. Multivariate logistic regression demonstrated that RBC transfusion of ≥900 mL was associated with increased mortality in both the nonelderly (adjusted odds ratio 2.06, P = 0.008) and elderly (adjusted odds ratio 5.08, P = 0.006). Although transfusion of greater than 2 units in the ED was an independent predictor of mortality, transfusion of 2 units or less was not. Interestingly, unlike crystalloid volume, stepwise increases in blood volume were not associated with stepwise increases in mortality. The underlying etiology for mortality discrepancies, such as transfusion ratios, hypothermia, or immunosuppression, needs to be better delineated. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. The impact of aging, hearing loss, and body weight on mouse hippocampal redox state, measured in brain slices using fluorescence imaging.

    PubMed

    Stebbings, Kevin A; Choi, Hyun W; Ravindra, Aditya; Llano, Daniel Adolfo

    2016-06-01

    The relationships between oxidative stress in the hippocampus and other aging-related changes such as hearing loss, cortical thinning, or changes in body weight are not yet known. We measured the redox ratio in a number of neural structures in brain slices taken from young and aged mice. Hearing thresholds, body weight, and cortical thickness were also measured. We found striking aging-related increases in the redox ratio that were isolated to the stratum pyramidale, while such changes were not observed in thalamus or cortex. These changes were driven primarily by changes in flavin adenine dinucleotide, not nicotinamide adenine dinucleotide hydride. Multiple regression analysis suggested that neither hearing threshold nor cortical thickness independently contributed to this change in hippocampal redox ratio. However, body weight did independently contribute to predicted changes in hippocampal redox ratio. These data suggest that aging-related changes in hippocampal redox ratio are not a general reflection of overall brain oxidative state but are highly localized, while still being related to at least one marker of late aging, weight loss at the end of life. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Is the Critical Shields Stress for Incipient Sediment Motion Dependent on Bed Slope in Natural Channels? No.

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Jerolmack, D. J.

    2017-12-01

    Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies have thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. These models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot of these problems is that the empirical relations' predictive capacity is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress and not channel slope. Additionally, using several recent datasets, we highlight the potential pitfalls that one can encounter when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.

  13. Accuracy of topographic index models at identifying ephemeral gully trajectories on agricultural fields

    NASA Astrophysics Data System (ADS)

    Sheshukov, Aleksey Y.; Sekaluvu, Lawrence; Hutchinson, Stacy L.

    2018-04-01

    Topographic index (TI) models have been widely used to predict trajectories and initiation points of ephemeral gullies (EGs) in agricultural landscapes. Prediction of EGs relies strongly on the selected value of the critical TI threshold, and the accuracy depends on topographic features, agricultural management, and datasets of observed EGs. This study statistically evaluated the predictions by TI models in two paired watersheds in Central Kansas that had different levels of structural disturbance due to implemented conservation practices. Four TI models depending solely on the topographic factors of slope, contributing area, and planform curvature were used in this study. The observed EGs were obtained by field reconnaissance and through the process of hydrological reconditioning of digital elevation models (DEMs). Kernel density estimation was used to evaluate the TI distribution within a 10-m buffer of the observed EG trajectories. EG occurrence within catchments was analyzed using kappa statistics of the error matrix approach, while the lengths of predicted EGs were compared with the observed dataset using the Nash-Sutcliffe Efficiency (NSE) statistic. The TI frequency analysis produced a bi-modal distribution of topographic indexes, with the pixels within the EG trajectories having a higher peak. The graphs of kappa and NSE versus critical TI threshold showed similar profiles for all four TI models and both watersheds, with the maximum value representing the best comparison with the observed data. The Compound Topographic Index (CTI) model presented the overall best accuracy, with an NSE of 0.55 and a kappa of 0.32. The statistics for the disturbed watershed showed higher best critical TI threshold values than for the undisturbed watershed. Structural conservation practices implemented in the disturbed watershed reduced ephemeral channels in headwater catchments, thus producing less variability in catchments with EGs. The variation in critical thresholds for all TI models suggested that TI models tend to predict EG occurrence and length over a range of thresholds rather than at a single best value.
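    The two evaluation statistics used above can be scanned across candidate critical TI thresholds to find the value that best matches observations. The sketch below shows one way to do this; the data structures (per-catchment maximum TI, observed EG occurrence) and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def cohens_kappa(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Cohen's kappa for binary EG occurrence (1 = EG present in catchment)."""
    po = np.mean(observed == predicted)                      # observed agreement
    pe = (observed.mean() * predicted.mean()
          + (1 - observed.mean()) * (1 - predicted.mean()))  # chance agreement
    return (po - pe) / (1 - pe)

def nse(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency for predicted vs observed EG lengths."""
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def best_threshold(ti_max_per_catchment, eg_observed, candidate_thresholds):
    """Pick the critical TI threshold maximising kappa for EG occurrence."""
    scores = [cohens_kappa(eg_observed, (ti_max_per_catchment >= t).astype(int))
              for t in candidate_thresholds]
    return candidate_thresholds[int(np.argmax(scores))], max(scores)
```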

  14. Prevalence and prediction of exercise-induced oxygen desaturation in patients with chronic obstructive pulmonary disease.

    PubMed

    van Gestel, A J R; Clarenbach, C F; Stöwhas, A C; Teschler, S; Russi, E W; Teschler, H; Kohler, M

    2012-01-01

    Previous studies with small sample sizes reported contradictory findings as to whether pulmonary function tests can predict exercise-induced oxygen desaturation (EID). We aimed to evaluate whether forced expiratory volume in one second (FEV1), resting oxygen saturation (SpO2) and diffusion capacity for carbon monoxide (DLCO) are predictors of EID in chronic obstructive pulmonary disease (COPD). We measured FEV1, DLCO and SpO2 at rest and during a 6-min walking test, as well as physical activity by an accelerometer. A drop in SpO2 of >4% to a value <90% was defined as EID. To evaluate associations between measures of lung function and EID, univariate and multivariate analyses were used and positive/negative predictive values were calculated. Receiver operating characteristic (ROC) curve analysis was performed to determine the most useful threshold for predicting/excluding EID. We included 154 patients with COPD (87 females). The mean FEV1 was 43.0% (19.2) predicted, and the prevalence of EID was 61.7%. The only independent predictor of EID was FEV1, and the optimal cutoff value of FEV1 was 50% predicted (area under the ROC curve, 0.85; p < 0.001). The positive predictive value of a threshold of FEV1 <50% was 0.83 with a likelihood ratio of 3.03, and the negative predictive value of a threshold of FEV1 ≥80% was 1.0. The severity of EID was correlated with daily physical activity (r = -0.31, p = 0.008). EID is highly prevalent among patients with COPD and can be predicted by FEV1. EID seems to be associated with impaired daily physical activity, which supports its clinical importance. Copyright © 2012 S. Karger AG, Basel.
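    The threshold analysis described above (ROC curve, cutoff selection, predictive values and likelihood ratio) can be reproduced with standard tools. The sketch below is an illustration under assumed inputs (binary EID outcome, FEV1 in % predicted); it uses Youden's J to pick the cutoff, which is a common but not necessarily identical choice to the paper's procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def summarize_fev1_threshold(eid: np.ndarray, fev1_pct: np.ndarray, cutoff: float):
    """AUC, Youden-optimal FEV1 cutoff, and test characteristics at `cutoff`.

    eid      : 1 if exercise-induced desaturation occurred, else 0
    fev1_pct : FEV1 in % predicted (lower values should predict EID)
    """
    score = -fev1_pct                               # lower FEV1 -> higher risk score
    fpr, tpr, thr = roc_curve(eid, score)
    youden_cutoff = -thr[np.argmax(tpr - fpr)]      # back to FEV1 units
    test_pos = fev1_pct < cutoff
    sens = (test_pos & (eid == 1)).sum() / (eid == 1).sum()
    spec = (~test_pos & (eid == 0)).sum() / (eid == 0).sum()
    ppv = (test_pos & (eid == 1)).sum() / test_pos.sum()
    return {"auc": roc_auc_score(eid, score),
            "youden_optimal_fev1": youden_cutoff,
            "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "lr_plus": sens / (1 - spec)}
```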

  15. Baseline Tumor Lipiodol Uptake after Transarterial Chemoembolization for Hepatocellular Carcinoma: Identification of a Threshold Value Predicting Tumor Recurrence.

    PubMed

    Matsui, Yusuke; Horikawa, Masahiro; Jahangiri Noudeh, Younes; Kaufman, John A; Kolbeck, Kenneth J; Farsad, Khashayar

    2017-12-01

    The aim of the study was to evaluate the association between baseline Lipiodol uptake in hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with early tumor recurrence, and to identify a threshold baseline uptake value predicting tumor response. A single-institution retrospective database of HCC treated with Lipiodol-TACE was reviewed. Forty-six tumors in 30 patients treated with a Lipiodol-chemotherapy emulsion and no additional particle embolization were included. Baseline Lipiodol uptake was measured as the mean Hounsfield units (HU) on a CT within one week after TACE. Washout rate was calculated dividing the difference in HU between the baseline CT and follow-up CT by time (HU/month). Cox proportional hazard models were used to correlate baseline Lipiodol uptake and other variables with tumor response. A receiver operating characteristic (ROC) curve was used to identify the optimal threshold for baseline Lipiodol uptake predicting tumor response. During the follow-up period (mean 5.6 months), 19 (41.3%) tumors recurred (mean time to recurrence = 3.6 months). In a multivariate model, low baseline Lipiodol uptake and higher washout rate were significant predictors of early tumor recurrence ( P = 0.001 and < 0.0001, respectively). On ROC analysis, a threshold Lipiodol uptake of 270.2 HU was significantly associated with tumor response (95% sensitivity, 93% specificity). Baseline Lipiodol uptake and washout rate on follow-up were independent predictors of early tumor recurrence. A threshold value of baseline Lipiodol uptake > 270.2 HU was highly sensitive and specific for tumor response. These findings may prove useful for determining subsequent treatment strategies after Lipiodol TACE.

  16. The Biological and Toxicological Activity of Gases and Vapors

    PubMed Central

    Sánchez-Moreno, Ricardo; Gil-Lostes, Javier; Acree, William E.; Cometto-Muñiz, J. Enrique; Cain, William S.

    2010-01-01

    A large amount of data on the biological and toxicological activity of gases and vapors has been collected from the literature. Processes include sensory irritation thresholds, the Alarie mouse test, inhalation anesthesia, etc. It is shown that a single equation using only five descriptors (properties of the gases and vapors) plus a set of indicator variables for the given processes can correlate 643 biological and non-lethal toxicological activities of ‘non-reactive’ compounds with a standard deviation of 0.36 log unit. The equation is scaled to sensory irritation thresholds obtained by the procedure of Cometto-Muñiz and Cain, and provides a general equation for the prediction of sensory irritation thresholds in man. It is suggested that differences in biological/toxicological activity arise primarily from transport from the gas phase to a receptor phase or area, except for odor detection thresholds where interaction with a receptor(s) is important. PMID:19913608

  17. Precipitation phase partitioning variability across the Northern Hemisphere

    NASA Astrophysics Data System (ADS)

    Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.

    2017-12-01

    Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
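    The phase-partitioning approach described above (a binary logistic regression on near-surface meteorology, from which a 50% rain-snow threshold temperature can be extracted) is easy to sketch. The example below uses synthetic data and invented coefficients purely for illustration; it is not the authors' trained model, and the humidity dependence of the toy generator is only qualitatively consistent with the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only: 1 = rain, 0 = snow.
rng = np.random.default_rng(0)
temp_c = rng.uniform(-6, 8, 5000)
rh_pct = rng.uniform(40, 100, 5000)
# Warmer and moister conditions favour rain in this toy generator.
p_rain = 1 / (1 + np.exp(-(1.6 * (temp_c - 1.0) - 0.04 * (90 - rh_pct))))
is_rain = rng.random(5000) < p_rain

model = LogisticRegression().fit(np.column_stack([temp_c, rh_pct]), is_rain)

def rain_snow_threshold(model, rh_pct: float) -> float:
    """Air temperature at which P(rain) = 0.5 for a given relative humidity."""
    b0 = model.intercept_[0]
    b_t, b_rh = model.coef_[0]
    return -(b0 + b_rh * rh_pct) / b_t

print(rain_snow_threshold(model, rh_pct=70.0))   # warmer threshold at low humidity
print(rain_snow_threshold(model, rh_pct=95.0))   # colder threshold at high humidity
```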

  18. How to interpret a small increase in AUC with an additional risk prediction marker: decision analysis comes through.

    PubMed

    Baker, Stuart G; Schuit, Ewoud; Steyerberg, Ewout W; Pencina, Michael J; Vickers, Andrew; Moons, Karel G M; Mol, Ben W J; Lindeman, Karen S

    2014-09-28

    An important question in the evaluation of an additional risk prediction marker is how to interpret a small increase in the area under the receiver operating characteristic curve (AUC). Many researchers believe that a change in AUC is a poor metric because it increases only slightly with the addition of a marker with a large odds ratio. Because it is not possible on purely statistical grounds to choose between the odds ratio and AUC, we invoke decision analysis, which incorporates costs and benefits. For example, a timely estimate of the risk of later non-elective operative delivery can help a woman in labor decide if she wants an early elective cesarean section to avoid greater complications from possible later non-elective operative delivery. A basic risk prediction model for later non-elective operative delivery involves only antepartum markers. Because adding intrapartum markers to this risk prediction model increases AUC by 0.02, we questioned whether this small improvement is worthwhile. A key decision-analytic quantity is the risk threshold, here the risk of later non-elective operative delivery at which a patient would be indifferent between an early elective cesarean section and usual care. For a range of risk thresholds, we found that an increase in the net benefit of risk prediction requires collecting intrapartum marker data on 68 to 124 women for every correct prediction of later non-elective operative delivery. Because data collection is non-invasive, this test tradeoff of 68 to 124 is clinically acceptable, indicating the value of adding intrapartum markers to the risk prediction model. Copyright © 2014 John Wiley & Sons, Ltd.
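    The decision-analytic quantities referred to above can be computed directly from predicted risks. The sketch below assumes the standard decision-curve definition of net benefit and takes the "test tradeoff" as the reciprocal of the gain in net benefit from adding the marker (markers measured per additional correct prediction); names and signatures are illustrative, not the authors' code.

```python
import numpy as np

def net_benefit(y: np.ndarray, risk: np.ndarray, threshold: float) -> float:
    """Net benefit of acting when predicted risk >= threshold
    (standard decision-curve definition)."""
    act = risk >= threshold
    n = len(y)
    tp = np.sum(act & (y == 1)) / n
    fp = np.sum(act & (y == 0)) / n
    return tp - fp * threshold / (1 - threshold)

def test_tradeoff(y, risk_base, risk_extended, threshold) -> float:
    """Markers that must be measured per additional correct prediction,
    i.e. the reciprocal of the gain in net benefit from adding the marker."""
    gain = (net_benefit(y, risk_extended, threshold)
            - net_benefit(y, risk_base, threshold))
    return np.inf if gain <= 0 else 1.0 / gain
```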

  19. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
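    The resampling design described above is straightforward to reproduce on any large labelled cohort. The sketch below repeatedly subsamples at several sample sizes and records the spread of sensitivity and specificity; array names and the set of sample sizes are illustrative.

```python
import numpy as np

def subsample_variability(y_true: np.ndarray, y_pred: np.ndarray,
                          sample_sizes=(100, 200, 400, 800, 1600),
                          n_repeats: int = 100, seed: int = 0):
    """Repeatedly subsample a large cohort and record the spread of
    sensitivity and specificity at each sample size."""
    rng = np.random.default_rng(seed)
    out = {}
    for n in sample_sizes:
        sens, spec = [], []
        for _ in range(n_repeats):
            idx = rng.choice(len(y_true), size=n, replace=False)
            yt, yp = y_true[idx], y_pred[idx]
            sens.append(np.sum((yp == 1) & (yt == 1)) / max(np.sum(yt == 1), 1))
            spec.append(np.sum((yp == 0) & (yt == 0)) / max(np.sum(yt == 0), 1))
        out[n] = {"sens_range": (min(sens), max(sens)),
                  "spec_range": (min(spec), max(spec))}
    return out
```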

  20. Evaluation of the stability indices for the thunderstorm forecasting in the region of Belgrade, Serbia

    NASA Astrophysics Data System (ADS)

    Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.

    2015-07-01

    The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters that can differentiate between thunderstorm and no-thunderstorm events. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as optimal. The thresholds were validated in two ways: through the control data set, and by comparing the calculated index thresholds with the values of the indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was the LI, followed by the SI, KI and TT. The BI had the poorest skill scores.

  1. Investigation of quartz diagenesis in mudstones of the Spraberry and Wolfcamp Formations

    NASA Astrophysics Data System (ADS)

    Eakin, A.; Reece, J. S.

    2016-12-01

    Here we present preliminary core analysis of the diagenetic variability existing within a siliceous mudstone facies of the Permian Spraberry and Wolfcamp Formations in the Midland Basin, Texas. Within this mudstone facies, the carbonate content varies from absent in several Wolfcamp Formation samples to >40 wt. % in the Spraberry Formation. A normalized ratio of quartz to clay content with carbonate removed reveals a systematic decrease in quartz content with increasing clay content. This relationship is typical of rocks with variable amounts of detrital quartz content. However, in this siliceous mudstone facies, the abundance of detrital quartz silt grains does not vary widely. Additionally, for the same clay content, the Wolfcamp Formation shows a higher concentration of quartz than the Spraberry Formation. Scanning electron microscopy (SEM) reveals the presence of microcrystalline quartz cement that likely accounts for the increased quartz content in the Wolfcamp Formation. This research tests the hypothesis that the increased quartz cement in the Wolfcamp Formation may occur at the expense of the carbonate cement present in the overlying Spraberry Formation. Furthermore, the deviation in quartz content for the same clay concentration only occurs once the ratio of quartz to clay content increases beyond 1.2. This ratio may represent a threshold of detrital quartz in the clay matrix required to have enough porosity and nucleation surface area for authigenic quartz growth. The presence of matrix cement may impact the mechanical properties to favor fracturing and cataclasis over more ductile deformation. This would enhance development of secondary porosity, while also increasing permeability through the connection of primary pores. Acquiring a fundamental understanding of diagenesis in the Spraberry and Wolfcamp Formations will aid in better prediction of mechanical behavior during drilling and optimized resource recovery.

  2. Numerical investigation of the inertial cavitation threshold by dual-frequency excitation in the fluid and tissue.

    PubMed

    Wang, Mingjun; Zhou, Yufeng

    2018-04-01

    Inertial cavitation thresholds, defined by the criterion of bubble growth to twice the equilibrium radius, were calculated for two types of ultrasonic excitation (the classical single-frequency mode and the dual-frequency mode). The effect of the dual-frequency excitation on the inertial cavitation threshold in the different surrounding media (fluid and tissue) was studied, and the paramount parameters (driving frequency, amplitude ratio, phase difference, and frequency ratio) were also optimized to maximize the inertial cavitation. The numerical prediction confirms the previous experimental results that the dual-frequency excitation is capable of reducing the inertial cavitation threshold in comparison to the single-frequency one at the same output power. The dual-frequency excitation at the high frequency (i.e., 3.1 + 3.5 MHz vs. 1.1 + 1.3 MHz) is preferred in this study. The simulation results suggest that the same amplitudes of individual components, zero phase difference, and large frequency difference are beneficial for enhancing the bubble cavitation. Overall, this work may provide a theoretical model for further investigation of dual-frequency excitation and guidance of its applications for a better outcome. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Pre-operative Thresholds for Achieving Meaningful Clinical Improvement after Arthroscopic Treatment of Femoroacetabular Impingement

    PubMed Central

    Nwachukwu, Benedict U.; Fields, Kara G.; Nawabi, Danyal H.; Kelly, Bryan T.; Ranawat, Anil S.

    2016-01-01

    Objectives: Knowledge of the thresholds and determinants for successful femoroacetabular impingement (FAI) treatment is evolving. The primary purpose of this study was to define pre-operative outcome score thresholds that can be used to predict patients most likely to achieve meaningful clinically important difference (MCID) after arthroscopic FAI treatment. Secondarily, determinants of achieving MCID were evaluated. Methods: A prospective institutional hip arthroscopy registry was reviewed to identify patients with FAI treated with arthroscopic labral surgery, acetabular rim trimming, and femoral osteochondroplasty. The modified Harris Hip Score (mHHS), the Hip Outcome Score (HOS) and the international Hip Outcome Tool (iHOT-33) were administered at baseline and at one year post-operatively. MCID was calculated using a distribution-based method. A receiver operating characteristic (ROC) analysis was used to calculate cohort-based threshold values predictive of achieving MCID. Area under the curve (AUC) was used to define predictive ability (strength of association), with AUC >0.7 considered acceptably predictive. Univariate and multivariable analyses were used to analyze demographic, radiographic and intra-operative factors associated with achieving MCID. Results: There were 374 patients (mean ± SD age, 32.9 ± 10.5) and 56.4% were female. The MCID for mHHS, HOS activities of daily living (HOS-ADL), HOS Sports, and iHOT-33 was 8.2, 8.4, 14.5, and 12.0, respectively. ROC analysis (threshold, % achieving MCID, strength of association) for these tools in our population was: mHHS (61.6, 78%, 0.68), HOS-ADL (83.8, 68%, 0.84), HOS-Sports (63.9, 64%, 0.74), and iHOT-33 (54.3, 82%, 0.65). Likelihood of achieving MCID declined above and increased below these thresholds. In univariate analysis, female sex, femoral version, lower acetabular Outerbridge score and increasing CT sagittal center edge angle (CEA) were predictive of achieving MCID. In multivariable analysis, sagittal CEA was the only variable maintaining significance (p = 0.032). Conclusion: We used a large prospective hip arthroscopy database to identify pre-operative patient outcome score thresholds predictive of meaningful post-operative outcome improvement after arthroscopic FAI treatment. This is the largest reported hip arthroscopy cohort to define MCID and the first to do so for the iHOT-33. The HOS-ADL may have the best predictive ability for achieving MCID after hip arthroscopy. Patients with relatively high pre-operative ADL, quality of life and functional status appear to have a high chance of achieving MCID up to our defined thresholds. Hip dysplasia is an important outcome modifier. The findings of this study may be useful for managing preoperative expectations for patients undergoing arthroscopic FAI surgery.
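    The two analytic steps above (a distribution-based MCID and a cohort-based pre-operative cutoff from a ROC curve) can be sketched as follows. The abstract does not state which distribution-based formula was used, so the half-SD definition here is only one common choice, and the Youden-based cutoff is an assumed stand-in for the paper's procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def distribution_based_mcid(baseline_scores: np.ndarray) -> float:
    """One common distribution-based MCID definition: half the standard
    deviation of the baseline scores (illustrative choice only)."""
    return 0.5 * np.std(baseline_scores, ddof=1)

def preop_threshold_for_mcid(baseline_scores: np.ndarray,
                             achieved_mcid: np.ndarray):
    """Cohort-based pre-operative score cutoff for achieving MCID, chosen by
    maximising Youden's J; lower baseline scores are taken to favour achieving
    MCID, as reported in the study above. Returns (cutoff, AUC)."""
    fpr, tpr, thr = roc_curve(achieved_mcid, -baseline_scores)
    cutoff = -thr[np.argmax(tpr - fpr)]
    return cutoff, roc_auc_score(achieved_mcid, -baseline_scores)
```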

  4. Computed Tomography Aortic Valve Calcium Scoring in Patients With Aortic Stenosis.

    PubMed

    Pawade, Tania; Clavel, Marie-Annick; Tribouilloy, Christophe; Dreyfus, Julien; Mathieu, Tiffany; Tastet, Lionel; Renard, Cedric; Gun, Mesut; Jenkins, William Steven Arthur; Macron, Laurent; Sechrist, Jacob W; Lacomis, Joan M; Nguyen, Virginia; Galian Gay, Laura; Cuéllar Calabria, Hug; Ntalas, Ioannis; Cartlidge, Timothy Robert Graham; Prendergast, Bernard; Rajani, Ronak; Evangelista, Arturo; Cavalcante, João L; Newby, David E; Pibarot, Philippe; Messika Zeitoun, David; Dweck, Marc R

    2018-03-01

    Computed tomography aortic valve calcium scoring (CT-AVC) holds promise for the assessment of patients with aortic stenosis (AS). We sought to establish the clinical utility of CT-AVC in an international multicenter cohort of patients. Patients with AS who underwent ECG-gated CT-AVC within 3 months of echocardiography were entered into an international, multicenter, observational registry. Optimal CT-AVC thresholds for diagnosing severe AS were determined in patients with concordant echocardiographic assessments, before being used to arbitrate disease severity in those with discordant measurements. In patients with long-term follow-up, we assessed whether CT-AVC thresholds predicted aortic valve replacement and death. In 918 patients from 8 centers (age, 77±10 years; 60% men; peak velocity, 3.88±0.90 m/s), 708 (77%) patients had concordant echocardiographic assessments, in whom CT-AVC provided excellent discrimination for severe AS (C statistic: women 0.92, men 0.89). Our optimal sex-specific CT-AVC thresholds (women 1377 Agatston unit and men 2062 Agatston unit) were nearly identical to those previously reported (women 1274 Agatston unit and men 2065 Agatston unit). Clinical outcomes were available in 215 patients (follow-up 1029 [126-2251] days). Sex-specific CT-AVC thresholds independently predicted aortic valve replacement and death (hazard ratio, 3.90 [95% confidence interval, 2.19-6.78]; P <0.001) after adjustment for age, sex, peak velocity, and aortic valve area. Among 210 (23%) patients with discordant echocardiographic assessments, there was considerable heterogeneity in CT-AVC scores, which again were an independent predictor of clinical outcomes (hazard ratio, 3.67 [95% confidence interval, 1.39-9.73]; P =0.010). Sex-specific CT-AVC thresholds accurately identify severe AS and provide powerful prognostic information. These findings support their integration into routine clinical practice. URL: http://www.clinicaltrials.gov. Unique identifiers: NCT01358513, NCT02132026, NCT00338676, NCT00647088, NCT01679431. © 2018 American Heart Association, Inc.

  5. Evaluation of NO2 predictions by the plume volume molar ratio method (PVMRM) and ozone limiting method (OLM) in AERMOD using new field observations.

    PubMed

    Hendrick, Elizabeth M; Tino, Vincent R; Hanna, Steven R; Egan, Bruce A

    2013-07-01

    The U.S. Environmental Protection Agency (EPA) plume volume molar ratio method (PVMRM) and the ozone limiting method (OLM) are implemented in the AERMOD model to predict the 1-hr average NO2/NOx concentration ratio. These ratios are multiplied by the AERMOD-predicted NOx concentration to predict the 1-hr average NO2 concentration. This paper first briefly reviews PVMRM and OLM and points out some scientific parameterizations that could be improved (such as specification of relative dispersion coefficients), and then discusses an evaluation of the PVMRM and OLM methods as implemented in AERMOD using a new data set. While AERMOD has undergone many model evaluation studies in its default mode, PVMRM and OLM are nondefault options, and to date only three NO2 field data sets have been used in their evaluations. Here AERMOD/PVMRM and AERMOD/OLM codes are evaluated with a new data set from a northern Alaskan village with a small power plant. Hourly pollutant concentrations (NO, NO2, ozone) as well as meteorological variables were measured at a single monitor 500 m from the plant. Power plant operating parameters and emissions were calculated based on hourly operator logs. Hourly observations covering 1 yr were considered, but the evaluations only used hours when the wind was in a 60-degree sector including the monitor and when concentrations were above a threshold. PVMRM is found to have little bias in predictions of the C(NO2)/C(NOx) ratio, which mostly ranged from 0.2 to 0.4 at this site. OLM overpredicted the ratio. AERMOD overpredicts the maximum NOx concentration but has an underprediction bias for lower concentrations. AERMOD/PVMRM overpredicts the maximum C(NO2) by about 50%, while AERMOD/OLM overpredicts by a factor of 2. For the 381 hours evaluated, there is a relative mean bias in C(NO2) predictions of near zero for AERMOD/PVMRM, while the relative mean bias reflects a factor of 2 overprediction for AERMOD/OLM. This study was initiated because the new stringent 1-hr NO2 NAAQS has prompted modelers to more widely use the PVMRM and OLM methods for conversion of NOx to NO2 in the AERMOD regulatory model. To date these methods have been evaluated with a limited number of data sets. This study identified a new data set of ambient pollutant and meteorological monitoring near an isolated power plant in Wainwright, Alaska. To supplement the existing evaluations, these new data were used to evaluate PVMRM and OLM. This new data set has been and will be made available to other scientists for future investigations.

  6. Using instrumental (CIE and reflectance) measures to predict consumers' acceptance of beef colour.

    PubMed

    Holman, Benjamin W B; van de Ven, Remy J; Mao, Yanwei; Coombs, Cassius E O; Hopkins, David L

    2017-05-01

    We aimed to establish colorimetric thresholds based upon the capacity of instrumental measures to predict consumer satisfaction with beef colour. A web-based survey was used to distribute standardised photographs of beef M. longissimus lumborum with known colorimetrics (L*, a*, b*, hue, chroma, ratio of reflectance at 630 nm and 580 nm, and estimated deoxymyoglobin, oxymyoglobin and metmyoglobin concentrations) for scrutiny. Consumer demographics and perceived importance of colour to beef value were also evaluated. It was found that a* provided the simplest and most robust prediction of beef colour acceptability. Beef colour was considered acceptable (with 95% acceptance) when a* values were equal to or above 14.5. Demographic effects on this threshold were negligible, but consumer nationality and gender did contribute to variation in the relative importance of colour to beef value. These results provide future beef colour studies with context to interpret objective colour measures in terms of consumer acceptance and market appeal. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  7. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
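    The equal-variance result quoted above can be written out explicitly. This is our notation and a standard binormal derivation consistent with the abstract, not a quotation of the paper's formula:

```latex
% Equal-variance binormal model for the explanatory variable X:
%   X | D=0 ~ N(mu_0, sigma^2),   X | D=1 ~ N(mu_1, sigma^2)
\[
  c \;=\; \Pr(X_1 > X_0)
    \;=\; \Phi\!\left(\frac{\mu_1-\mu_0}{\sigma\sqrt{2}}\right),
  \qquad
  \beta \;=\; \frac{\mu_1-\mu_0}{\sigma^{2}}
  \;\Longrightarrow\;
  c \;=\; \Phi\!\left(\frac{\beta\,\sigma}{\sqrt{2}}\right),
\]
% so discrimination depends on the product of the log-odds ratio (beta) and the
% standard deviation of X, not on the odds ratio alone.
```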

  8. Predictive Variables of Half-Marathon Performance for Male Runners

    PubMed Central

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A.; García-López, Juan

    2017-01-01

    The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight male half-marathon runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long-distance male runners, considered from different approaches. Furthermore, they improved on the predictive performance of previous studies, which makes them highly practical for application in the field of training and performance. Key points The present study obtained four equations involving anthropometric, training, physiological and biomechanical variables to estimate half-marathon performance. These equations were validated in a different population, demonstrating narrower prediction ranges than previous studies and also their consistency. As a novelty, some biomechanical variables (i.e. step length and step rate at RCT, and maximal step length) have been related to half-marathon performance. PMID:28630571

  9. Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning.

    PubMed

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2017-01-01

    The process of interpretation of high-speed optical coherence tomography (OCT) images is restricted due to the large speckle noise. To address this problem, this paper proposes a new method using two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method, then, each thresholded sub-band is denoised based on K-SVD dictionary learning with a variable size initial dictionary dependent on the size of curvelet coefficients' matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction of 100 publically available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD), and improvement of contrast-to-noise ratio from 1.27 to 5.12 and mean-to-standard deviation ratio from 3.20 to 14.41 are obtained.
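    To show the general shape of sub-band thresholding in a multiscale transform domain, the sketch below uses a separable wavelet transform and a universal threshold per sub-band as a deliberately simplified stand-in; it is not the curvelet transform, the adaptive data-driven threshold, or the K-SVD dictionary step used in the paper.

```python
import numpy as np
import pywt

def subband_soft_threshold(image: np.ndarray, wavelet: str = "db2",
                           level: int = 3) -> np.ndarray:
    """Simplified stand-in for the paper's pipeline: soft-threshold the
    detail sub-bands of a 2D wavelet decomposition and reconstruct."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    out = [coeffs[0]]                                # keep the approximation band
    for detail_bands in coeffs[1:]:
        cleaned = []
        for band in detail_bands:
            sigma = np.median(np.abs(band)) / 0.6745      # robust noise estimate
            t = sigma * np.sqrt(2 * np.log(band.size))    # universal threshold
            cleaned.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(cleaned))
    return pywt.waverec2(out, wavelet)
```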

  10. Between-airport heterogeneity in air toxics emissions associated with individual cancer risk thresholds and population risks

    PubMed Central

    2009-01-01

    Background Airports represent a complex source type of increasing importance contributing to air toxics risks. Comprehensive atmospheric dispersion models are beyond the scope of many applications, so it would be valuable to rapidly but accurately characterize the risk-relevant exposure implications of emissions at an airport. Methods In this study, we apply a high resolution atmospheric dispersion model (AERMOD) to 32 airports across the United States, focusing on benzene, 1,3-butadiene, and benzo[a]pyrene. We estimate the emission rates required at these airports to exceed a 10⁻⁶ lifetime cancer risk for the maximally exposed individual (emission thresholds) and estimate the total population risk at these emission rates. Results The emission thresholds vary by two orders of magnitude across airports, with variability predicted by proximity of populations to the airport and mixing height (R² = 0.74–0.75 across pollutants). At these emission thresholds, the population risk within 50 km of the airport varies by two orders of magnitude across airports, driven by substantial heterogeneity in total population exposure per unit emissions that is related to population density and uncorrelated with emission thresholds. Conclusion Our findings indicate that site characteristics can be used to accurately predict maximum individual risk and total population risk at a given level of emissions, but that optimizing on one endpoint will be non-optimal for the other. PMID:19426510

  11. Proton MR spectroscopy in predicting the increase of perfusion MR imaging for WHO grade II gliomas.

    PubMed

    Guillevin, Remy; Menuel, Carole; Abud, Lucas; Costalat, Robert; Capelle, Laurent; Hoang-Xuan, Khê; Habas, Christophe; Chiras, Jacques; Vallée, Jean-Noel

    2012-03-01

    To investigate the correlation between the metabolite ratios obtained from proton magnetic resonance (MR) spectroscopy and MR perfusion parameters (relative cerebral blood volume [rCBV]) in a cohort of low-grade glioma (LGG). Patients prospectively underwent conventional MR imaging, proton MR spectroscopy (¹H-MRS), and perfusion-weighted imaging (PWI). Statistical analyses were performed to determine the correlative and independent predictive factors of rCBVmax and the metabolite ratio thresholds with optimum sensitivity and specificity. Thirty-one patients were included in this study. Linear correlations were observed between the metabolic ratios (lactate [Lac]/creatine [Cr], choline [Cho]/N-acetyl-aspartate [NAA], free-lipids/Cr) and rCBVmax (P < 0.05). These metabolic ratios were determined to be independent predictive factors of rCBVmax (P = 0.027, 0.011 and 0.032, respectively). According to the receiver operating characteristic curves, the cutoff values of the metabolic ratios to discriminate between the two populations of rCBVmax (<1.7 versus ≥1.7) were 1.72, 1.54, and 1.40, respectively, with a sensitivity = 75% and a specificity >95% for Lac/Cr. This study demonstrated consistent correlations between the data from ¹H-MRS and PWI. The Lac/Cr ratio predicts regional hemodynamic changes, which are themselves a useful biomarker of clinical prognosis in patients with LGG. As such, this ratio may provide a new parameter for making improved clinical decisions. Copyright © 2011 Wiley-Liss, Inc.

  12. Physical function interfering with pain and symptoms in fibromyalgia patients.

    PubMed

    Assumpção, A; Sauer, J F; Mango, P C; Pascual Marques, A

    2010-01-01

    The aim of this study was to assess the relationship between variables of physical assessment - muscular strength, flexibility and dynamic balance - and pain, pain threshold, and fibromyalgia (FM) symptoms. Our sample consisted of 55 women aged 30 to 55 years (mean 46.5, SD 6.6), with a mean body mass index (BMI) of 28.7 (3.8), diagnosed with FM according to the American College of Rheumatology criteria. Pain intensity was measured using a visual analogue scale (VAS) and pain threshold (PT) using Fisher's dolorimeter. FM symptoms were assessed by the Fibromyalgia Impact Questionnaire (FIQ); flexibility by the third-finger-to-floor test (3FF); the muscular strength index (MSI) by the maximum voluntary isometric contraction at flexion and extension of the right knee and elbow using a force transducer; and dynamic balance by the timed up and go (TUG) test and the functional reach test (FRT). Data were analysed using Pearson's correlation, as well as simple and multivariate regression tests, with a significance level of 5%. PT and FIQ were weakly but significantly correlated with the TUG, MSI and 3FF, as was VAS with the TUG and MSI (p<0.05). VAS, PT and FIQ were not correlated with FRT. Simple regression suggests that, alone, TUG, FRT, MSI and 3FF are weak predictors of VAS, PT and FIQ. For the VAS, the best predictive model includes TUG and MSI, explaining 12.6% of pain variability. For PT and total symptoms, as obtained by the FIQ, the best predictive models include 3FF and MSI, which account for 30% and 21% of the variability, respectively. Muscular strength, flexibility and balance are associated with pain, pain threshold, and symptoms in FM patients.

  13. Sex allocation and interactions between relatives in the bean beetle, Callosobruchus maculatus.

    PubMed

    Reece, Sarah E; Wherry, Ruth N; Bloor, Juliette M G

    2005-11-01

    When a small number of females contribute offspring to a discrete mating group, sex allocation (Local Mate Competition: LMC) theory predicts that females should bias their offspring sex ratio towards daughters, which avoids the fitness costs of their sons competing with each other. Conversely, when a large number of females contribute offspring to a patch, they are expected to invest equally in sons and daughters. Furthermore, sex ratios of species that regularly experience variable foundress numbers are closer to those predicted by LMC theory than those of species that encounter less variable foundress number scenarios. Due to their patterns of resource use, female Callosobruchus maculatus are likely to experience a broad range of foundress number scenarios. We carried out three experiments to test whether female C. maculatus adjust their sex ratios in response to foundress number and two other indicators of LMC: ovipositing on pre-parasitised patches and ovipositing with sisters. We did not find any evidence of the predicted sex ratio adjustment, but we did find evidence of kin-biased behaviour.

  14. Cannabinoid-induced effects on the nociceptive system: a neurophysiological study in patients with secondary progressive multiple sclerosis.

    PubMed

    Conte, Antonella; Bettolo, Chiara Marini; Onesti, Emanuela; Frasca, Vittorio; Iacovelli, Elisa; Gilio, Francesca; Giacomelli, Elena; Gabriele, Maria; Aragona, Massimiliano; Tomassini, Valentina; Pantano, Patrizia; Pozzilli, Carlo; Inghilleri, Maurizio

    2009-05-01

    Although clinical studies show that cannabinoids improve central pain in patients with multiple sclerosis (MS), neurophysiological studies are lacking to investigate whether they also suppress these patients' electrophysiological responses to noxious stimulation. The flexion reflex (FR) in humans is a widely used technique for assessing the pain threshold and for studying spinal and supraspinal pain pathways and the neurotransmitter system involved in pain control. In a randomized, double-blind, placebo-controlled, cross-over study, we investigated cannabinoid-induced changes in RIII reflex variables (threshold, latency and area) in a group of 18 patients with secondary progressive MS. To investigate whether cannabinoids act indirectly on the nociceptive reflex by modulating lower motoneuron excitability, we also evaluated the H-reflex size after tibial nerve stimulation and calculated the H wave/M wave (H/M) ratio. Of the 18 patients recruited and randomized, 17 completed the study. After patients used a commercial delta-9-tetrahydrocannabinol (THC) and cannabidiol mixture as an oromucosal spray, the RIII reflex threshold increased and RIII reflex area decreased. The visual analogue scale score for pain also decreased, though not significantly. Conversely, the H/M ratio measured before patients received cannabinoids remained unchanged after therapy. In conclusion, the cannabinoid-induced changes in the RIII reflex threshold and area in patients with MS provide objective neurophysiological evidence that cannabinoids modulate the nociceptive system in patients with MS.

  15. The fecal hemoglobin concentration, age and sex test score: Development and external validation of a simple prediction tool for colorectal cancer detection in symptomatic patients.

    PubMed

    Cubiella, Joaquín; Digby, Jayne; Rodríguez-Alonso, Lorena; Vega, Pablo; Salve, María; Díaz-Ondina, Marta; Strachan, Judith A; Mowat, Craig; McDonald, Paula J; Carey, Francis A; Godber, Ian M; Younes, Hakim Ben; Rodriguez-Moranta, Francisco; Quintero, Enrique; Álvarez-Sánchez, Victoria; Fernández-Bañares, Fernando; Boadas, Jaume; Campo, Rafel; Bujanda, Luis; Garayoa, Ana; Ferrandez, Ángel; Piñol, Virginia; Rodríguez-Alcalde, Daniel; Guardiola, Jordi; Steele, Robert J C; Fraser, Callum G

    2017-05-15

    Prediction models for colorectal cancer (CRC) detection in symptomatic patients, based on easily obtainable variables such as fecal hemoglobin concentration (f-Hb), age and sex, may simplify CRC diagnosis. We developed, and then externally validated, a multivariable prediction model, the FAST Score, with data from five diagnostic test accuracy studies that evaluated quantitative fecal immunochemical tests in symptomatic patients referred for colonoscopy. The diagnostic accuracy of the Score in derivation and validation cohorts was compared statistically with the area under the curve (AUC) and the Chi-square test. 1,572 and 3,976 patients were examined in these cohorts, respectively. For CRC, the odds ratios (OR) of the variables included in the Score were: age (years): 1.03 (95% confidence intervals (CI): 1.02-1.05), male sex: 1.6 (95% CI: 1.1-2.3) and f-Hb (0-<20 µg Hb/g feces): 2.0 (95% CI: 0.7-5.5), (20-<200 µg Hb/g): 16.8 (95% CI: 6.6-42.0), ≥200 µg Hb/g: 65.7 (95% CI: 26.3-164.1). The AUC for CRC detection was 0.88 (95% CI: 0.85-0.90) in the derivation and 0.91 (95% CI: 0.90-0.93; p = 0.005) in the validation cohort. At the two Score thresholds with 90% (4.50) and 99% (2.12) sensitivity for CRC, the Score had equivalent sensitivity, although the specificity was higher in the validation cohort (p < 0.001). Accordingly, the validation cohort was divided into three groups: high (21.4% of the cohort, positive predictive value [PPV]: 21.7%), intermediate (59.8%, PPV: 0.9%) and low (18.8%, PPV: 0.0%) risk for CRC. The FAST Score is an easy-to-calculate prediction tool, highly accurate for CRC detection in symptomatic patients. © 2017 UICC.

  16. An assessment of thromboelastometry to monitor blood coagulation and guide transfusion support in liver transplantation.

    PubMed

    Blasi, Annabel; Beltran, Joan; Pereira, Arturo; Martinez-Palli, Graciela; Torrents, Abiguei; Balust, Jaume; Zavala, Elizabeth; Taura, Pilar; Garcia-Valdecasas, Juan-Carlos

    2012-09-01

    Rotation thromboelastometry (TEM) has been proposed as a convenient alternative to standard coagulation tests in guiding the treatment of coagulopathy during orthotopic liver transplantation (OLT). This study was aimed at assessing the value of TEM in monitoring blood coagulation and guiding transfusion support in OLT. Standard coagulation and TEM (EXTEM and FIBTEM) tests were performed at four preestablished intraoperative time points in 236 OLTs and prospectively recorded in a dedicated database together with the main operative and transfusion data. Transfusion thresholds were based on standard coagulation tests. Spearman's rank correlation (ρ), linear regression, and receiver operating characteristic curves were used when appropriate. EXTEM maximum clot firmness (MCF(EXTEM)) was the TEM variable that best correlated with the platelet (PLT) and fibrinogen levels (ρ = 0.62 and ρ = 0.69, respectively). MCF(FIBTEM) correlated with fibrinogen level (ρ = 0.70). EXTEM clot amplitude at 10 minutes (A10(EXTEM)) was a good linear predictor of MCF(EXTEM) (R(2) = 0.93). The cutoff values that best predicted the transfusion threshold for PLTs and fibrinogen were A10(EXTEM) = 35 mm and A10(FIBTEM) = 8 mm. At these values, the negative and positive predictive accuracies of TEM to predict the transfusion thresholds were 95% and 27%, respectively. A10(EXTEM) is an adequate TEM variable to guide therapeutic decisions during OLT. Patients with A10(EXTEM) of greater than 35 mm are unlikely to bleed because of coagulation deficiencies, but using A10(EXTEM) of not more than 35 mm as the sole transfusion criterion can lead to unnecessary utilization of PLTs and fibrinogen-rich products. © 2012 American Association of Blood Banks.

  17. Seabirds as indicators of marine food supplies: Cairns revisited

    USGS Publications Warehouse

    Piatt, John F.; Harding, Ann M.A.; Shultz, Michael T.; Speckman, Suzann G.; van Pelt, Thomas I.; Drew, Gary S.; Kettle, Arthur B.

    2007-01-01

    In his seminal paper about using seabirds as indicators of marine food supplies, Cairns (1987, Biol Oceanogr 5:261–271) predicted that (1) parameters of seabird biology and behavior would vary in curvilinear fashion with changes in food supply, (2) the threshold of prey density over which birds responded would be different for each parameter, and (3) different seabird species would respond differently to variation in food availability depending on foraging behavior and ability to adjust time budgets. We tested these predictions using data collected at colonies of common murre Uria aalge and black-legged kittiwake Rissa tridactyla in Cook Inlet, Alaska. (1) Of 22 seabird responses fitted with linear and non-linear functions, 16 responses exhibited significant curvilinear shapes, and Akaike’s information criterion (AIC) analysis indicated that curvilinear functions provided the best-fitting model for 12 of those. (2) However, there were few differences among parameters in their threshold to prey density, presumably because most responses ultimately depend upon a single threshold for prey acquisition at sea. (3) There were similarities and some differences in how species responded to variability in prey density. Both murres and kittiwakes minimized variability (CV < 15%) in their own body condition and growth of chicks in the face of high annual variability (CV = 69%) in local prey density. Whereas kittiwake breeding success (CV = 63%, r2 = 0.89) reflected prey variability, murre breeding success did not (CV = 29%, r2< 0.00). It appears that murres were able to buffer breeding success by reallocating discretionary ‘loafing’ time to foraging effort in response (r2 = 0.64) to declining prey density. Kittiwakes had little or no discretionary time, so fledging success was a more direct function of local prey density. Implications of these results for using ‘seabirds as indicators’ are discussed.

  18. Drug Concentration Thresholds Predictive of Therapy Failure and Death in Children With Tuberculosis: Bread Crumb Trails in Random Forests

    PubMed Central

    Swaminathan, Soumya; Pasipanodya, Jotam G.; Ramachandran, Geetha; Hemanth Kumar, A. K.; Srivastava, Shashikant; Deshpande, Devyani; Nuermberger, Eric; Gumbo, Tawanda

    2016-01-01

    Background. The role of drug concentrations in clinical outcomes in children with tuberculosis is unclear. Target concentrations for dose optimization are unknown. Methods. Plasma drug concentrations measured in Indian children with tuberculosis were modeled using compartmental pharmacokinetic analyses. The children were followed until end of therapy to ascertain therapy failure or death. An ensemble of artificial intelligence algorithms, including random forests, was used to identify predictors of clinical outcome from among 30 clinical, laboratory, and pharmacokinetic variables. Results. Among the 143 children with known outcomes, there was high between-child variability of isoniazid, rifampin, and pyrazinamide concentrations: 110 (77%) completed therapy, 24 (17%) failed therapy, and 9 (6%) died. The main predictors of therapy failure or death were a pyrazinamide peak concentration <38.10 mg/L and rifampin peak concentration <3.01 mg/L. The relative risk of these poor outcomes below these peak concentration thresholds was 3.64 (95% confidence interval [CI], 2.28–5.83). Isoniazid had concentration-dependent antagonism with rifampin and pyrazinamide, with an adjusted odds ratio for therapy failure of 3.00 (95% CI, 2.08–4.33) in the antagonism concentration range. In regard to death alone as an outcome, the same drug concentrations, plus z scores (indicators of malnutrition), and age <3 years, were highly ranked predictors. In children <3 years old, isoniazid 0- to 24-hour area under the concentration-time curve <11.95 mg/L × hour and/or rifampin peak <3.10 mg/L were the best predictors of therapy failure, with relative risk of 3.43 (95% CI, 0.99–11.82). Conclusions. We have identified new antibiotic target concentrations, which are potential biomarkers associated with treatment failure and death in children with tuberculosis. PMID:27742636
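    This record describes ranking roughly 30 covariates with an ensemble that includes random forests. The following is a minimal sketch of that idea using scikit-learn; the variable names, the synthetic outcome model, and the data are hypothetical stand-ins, not the study's data or pipeline.

```python
# Minimal sketch, not the study's pipeline: ranking pharmacokinetic and clinical
# variables as predictors of therapy failure with a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 143
X = pd.DataFrame({
    "pza_peak": rng.normal(38, 10, n),    # pyrazinamide peak, mg/L (hypothetical)
    "rif_peak": rng.normal(4, 1.5, n),    # rifampin peak, mg/L (hypothetical)
    "inh_auc": rng.normal(15, 5, n),      # isoniazid AUC0-24, mg/L*h (hypothetical)
    "weight_z": rng.normal(-1, 1, n),     # malnutrition z score (hypothetical)
    "age_years": rng.uniform(0.5, 15, n),
})
# Hypothetical outcome: failure more likely when PZA and RIF peaks are low
p_fail = 1 / (1 + np.exp(0.15 * (X["pza_peak"] - 38) + 0.8 * (X["rif_peak"] - 3)))
y = rng.binomial(1, p_fail)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking)   # importance-ranked predictors of the (synthetic) outcome
```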

  19. Exposure–response relationships for the ACGIH threshold limit value for hand-activity level: results from a pooled data study of carpal tunnel syndrome

    PubMed Central

    Kapellusch, Jay M; Gerr, Frederic E; Malloy, Elizabeth J; Garg, Arun; Harris-Adamson, Carisa; Bao, Stephen S; Burt, Susan E; Dale, Ann Marie; Eisen, Ellen A; Evanoff, Bradley A; Hegmann, Kurt T; Silverstein, Barbara A; Theise, Matthew S; Rempel, David M

    2014-01-01

    Objective This paper aimed to quantify exposure–response relationships between the American Conference of Governmental Industrial Hygienists’ (ACGIH) threshold limit value (TLV) for hand-activity level (HAL) and incidence of carpal tunnel syndrome (CTS). Methods Manufacturing and service workers previously studied by six research institutions had their data combined and re-analyzed. CTS cases were defined by symptoms and abnormal nerve conduction. Hazard ratios (HR) were calculated using proportional hazards regression after adjusting for age, gender, body mass index, and CTS predisposing conditions. Results The longitudinal study comprised 2751 incident-eligible workers, followed prospectively for up to 6.4 years and contributing 6243 person-years of data. Associations were found between CTS and TLV for HAL both as a continuous variable [HR 1.32 per unit, 95% confidence interval (95% CI) 1.11–1.57] and when categorized using the ACGIH action limit (AL) and TLV. Those between the AL and TLV and above the TLV had HR of 1.7 (95% CI 1.2–2.5) and 1.5 (95% CI 1.0–2.1), respectively. As independent variables (in the same adjusted model) the HR for peak force (PF) and HAL were 1.14 per unit (95% CI 1.05–1.25), and 1.04 per unit (95% CI 0.93–1.15), respectively. Conclusion Those with exposures above the AL were at increased risk of CTS, but there was no further increase in risk for workers above the TLV. This suggests that the current AL may not be sufficiently protective of workers. Combinations of PF and HAL are useful for predicting risk of CTS. PMID:25266844

  20. Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung

    2018-04-01

    Neural networks (NN) have been widely used for fatigue life prediction. For polymeric-based composites, an NN model must be developed that copes with limited fatigue data and that can predict fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the network weights for predicting the fatigue life of polymeric-based composite materials under variable amplitude loading. Simulations with two different composite systems, E-glass fabrics/epoxy (lay-up [(±45)/(0)2]S) and E-glass/polyester (lay-up [90/0/±45/0]S), show that an NN model trained with fatigue data from only two stress ratios, representing limited fatigue data, can predict another four and seven stress ratios, respectively, with high accuracy of fatigue life prediction. The accuracy of the NN predictions was quantified by the small value of the mean square error (MSE). When 33% of the total fatigue data were used for training, the NN model produced high accuracy for all stress ratios. Even when less fatigue data were used during training (22% of the total), the NN model still produced a high coefficient of determination between the predicted and experimental results.
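    A minimal sketch, under assumed settings, of the idea summarized in this record: a small multilayer perceptron whose weights are tuned by a genetic algorithm to map (normalized stress amplitude, stress ratio R) to log10 fatigue life. The architecture, GA parameters, and synthetic S-N data are illustrative only, not the authors' model.

```python
# MLP trained by a simple genetic algorithm (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data from two stress ratios (the "limited data" case)
amp = rng.uniform(0.2, 1.0, 60)                  # normalized stress amplitude
R = rng.choice([-1.0, 0.1], 60)                  # two stress ratios
X = np.column_stack([amp, R])
y = 6.0 - 4.0 * amp + 0.5 * R + rng.normal(0, 0.1, 60)   # log10(N), made up

n_in, n_hid = 2, 6
n_w = n_in * n_hid + n_hid + n_hid + 1           # all weights and biases, flattened

def predict(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2, b2 = w[-(n_hid + 1):-1], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Simple generational GA: truncation selection, uniform crossover, Gaussian mutation
pop = rng.normal(0.0, 1.0, (200, n_w))
for _ in range(300):
    order = np.argsort([mse(w) for w in pop])
    parents = pop[order[:50]]
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(50)], parents[rng.integers(50)]
        mask = rng.random(n_w) < 0.5
        children.append(np.where(mask, a, b) + rng.normal(0.0, 0.05, n_w))
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(w) for w in pop])]
print("training MSE:", round(float(mse(best)), 4))
pred = predict(best, np.array([[0.5, 0.1]]))[0]
print("predicted log10(N) at amplitude 0.5, R = 0.1:", round(float(pred), 2))
```

    A gradient-free GA is used here purely to mirror the record's weight-optimization strategy; in practice backpropagation or a hybrid GA/gradient scheme could be substituted.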

  1. Prediction of clinical infection in women with preterm labour with intact membranes: a score based on ultrasonographic, clinical and biological markers.

    PubMed

    Kayem, Gilles; Maillard, Françoise; Schmitz, Thomas; Jarreau, Pierre H; Cabrol, Dominique; Breart, Gérard; Goffinet, François

    2009-07-01

    To predict maternal and neonatal clinical infection at admission in women hospitalized for preterm labour (PTL) with intact membranes. Prospective study of 371 women hospitalized for preterm labour with intact membranes. The primary outcome was clinical infection, defined by clinical chorioamnionitis at delivery or early-onset neonatal infection. Clinical infection was identified in 21 cases (5.7%) and was associated with earlier gestational age at admission for PTL, elevated maternal C-reactive protein (CRP) and white blood cell count (WBC), shorter cervical length, and cervical funnelling on ultrasound. We used ROC curves to determine the cut-off values that minimized the number of false positives and false negatives. The cut-off points chosen were 30 weeks for gestational age at admission, 25 mm for cervical length, 8 mg/l for CRP and 12,000 cells/mm(3) for WBC. Each of these variables was assigned a weight on the basis of the adjusted odds ratios in a clinical infection risk score (CIRS). We set a threshold corresponding to a specificity close to 90%, and calculated the positive and negative predictive values and likelihood ratios of each marker and of the CIRS. The CIRS had a sensitivity of 61.9%, while the sensitivity of the other markers ranged from 19.0% to 42.9%. Internal cross-validation was used to estimate the performance of the CIRS in new subjects. The diagnostic values found remained close to the initial values. A clinical infection risk score built from data known at admission for preterm labour helps to identify women and newborns at high risk of clinical infection.
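    A hedged sketch of an odds-ratio-weighted risk score in the spirit of the CIRS described in this record. The four cut-off points are those quoted in the abstract; the integer weights and the example patient are hypothetical, since the published weights are not reproduced here.

```python
# Illustrative CIRS-style score: sum marker weights for markers above/below cut-offs.
def cirs(gest_age_weeks, cervical_length_mm, crp_mg_l, wbc_per_mm3,
         weights=(2, 2, 1, 1)):          # hypothetical weights per marker
    markers = [
        gest_age_weeks < 30,             # early gestational age at admission
        cervical_length_mm < 25,         # short cervix
        crp_mg_l > 8,                    # elevated CRP
        wbc_per_mm3 > 12_000,            # elevated WBC
    ]
    return sum(w for w, positive in zip(weights, markers) if positive)

# Example patient: 28 weeks, cervix 20 mm, CRP 12 mg/l, WBC 14,500/mm3 (made up)
score = cirs(28, 20, 12, 14_500)
print("CIRS =", score)   # a score threshold would be set for ~90% specificity
```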

  2. Neural activity in cortical area V4 underlies fine disparity discrimination.

    PubMed

    Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro

    2012-03-14

    Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkey generalized its responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation to V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.

  3. Diagnostic capability of scanning laser polarimetry with variable cornea compensator in Indian patients with early primary open-angle glaucoma.

    PubMed

    Parikh, Rajul S; Parikh, Shefali R; Kumar, Rajesh S; Prabakaran, S; Babu, J Gansesh; Thomas, Ravi

    2008-07-01

    To evaluate the diagnostic ability of scanning laser polarimetry (GDx variable corneal compensator [VCC]) for early glaucoma in Asian Indian eyes. Cross-sectional observational study. Two groups of patients (early glaucoma and normal) who satisfied the inclusion and exclusion criteria were included. Early glaucoma was diagnosed in the presence of open angles, characteristic glaucomatous optic disc changes correlating with the visual field (VF) on automated perimetry (VF defect fulfilling at least 2 of 3 Anderson and Patella's criteria with mean deviation ≥ -6 decibels). Normal subjects had visual acuity ≥ 20/30 and intraocular pressure < 22 mmHg, with a normal optic disc and fields and no ocular abnormality. All patients underwent complete ophthalmic evaluation, including VF examination (24-2/30-2 Swedish interactive threshold algorithm standard program) and imaging with GDx VCC. Sensitivity, specificity, positive predictive value and negative predictive value, area under the receiver operating characteristic curve, and likelihood ratios (LRs) were calculated for various GDx VCC parameters. Seventy-four eyes (74 patients) with early glaucoma and 104 eyes (104 normal subjects) were enrolled. TSNIT Std Dev (temporal-superior-nasal-inferior-temporal standard deviation) had the best combination of sensitivity and specificity (61.3% and 95.2%, respectively), followed by nerve fiber index score > 50 (sensitivity, 52.7%; specificity, 99%). Nerve fiber index score > 50 had positive and negative predictive values of 74.3% and 97.6%, respectively, for an assumed glaucoma prevalence of 5%. Nerve fiber index score > 50 had a positive LR (+LR) of 54.8 for early glaucoma. GDx VCC has moderate sensitivity, with high specificity, in the diagnosis of early glaucoma. The high +LR for the nerve fiber index score can provide valuable diagnostic information for individual patients.
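    The likelihood-ratio arithmetic behind the reported +LR and predictive values can be made explicit. The snippet below converts the record's nerve fiber index > 50 sensitivity and specificity into a positive likelihood ratio and a post-test probability at the assumed 5% prevalence; it is a worked example, not code from the study.

```python
# Worked example: +LR = sensitivity / (1 - specificity); post-test probability via odds.
# (Small differences from the published +LR of 54.8 reflect rounding of the
# sensitivity/specificity quoted in the abstract.)
def positive_lr(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

def post_test_probability(prevalence, lr):
    prior_odds = prevalence / (1.0 - prevalence)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

lr_plus = positive_lr(0.527, 0.99)          # ~52.7
ppv = post_test_probability(0.05, lr_plus)  # ~0.74, cf. the reported PPV of 74.3%
print(f"+LR = {lr_plus:.1f}, post-test probability = {ppv:.1%}")
```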

  4. A masking level difference due to harmonicity.

    PubMed

    Treurniet, W C; Boucher, D R

    2001-01-01

    The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.

  5. Comparison of KRAS genotype: therascreen assay vs. LNA-mediated qPCR clamping assay.

    PubMed

    Chang, Shao-Chun; Denne, Jonathan; Zhao, Luping; Horak, Christine; Green, George; Khambata-Ford, Shirin; Bray, Christopher; Celik, Ilhan; Van Cutsem, Eric; Harbison, Christopher

    2013-09-01

    Kirsten rat sarcoma virus (KRAS) wild-type status determined using a locked nucleic acid (LNA)-mediated quantitative polymerase chain reaction (qPCR) clamping assay (LNA assay) predicted response to therapy in the CRYSTAL (Cetuximab Combined With Irinotecan in First-Line Therapy for Metastatic Colorectal Cancer) study. A companion KRAS diagnostic tool has been developed for routine clinical use (QIAGEN therascreen kit) (QIAGEN Manchester Ltd, Manchester, UK). We wanted to assess the concordance between the validated US Food and Drug Administration (FDA)-approved therascreen assay and the LNA assay in determining the KRAS status of a subset of patients enrolled in the CRYSTAL study. DNA extracted from paraffin-embedded tumor sections was tested for KRAS status using the therascreen assay. Efficacy data from the CRYSTAL study were assessed to determine if the overall survival (OS) hazard ratio for cetuximab in patients identified as having KRAS wild-type status using the therascreen assay was equivalent to that in patients identified as KRAS wild-type using the LNA assay. This was determined by assessing if the concordance between the therascreen assay and the LNA assay met the minimum threshold (prespecified as 0.8) to achieve a significant difference in the OS hazard ratio in favor of the cetuximab + FOLFIRI (5-fluorouracil, leucovorin [folinic acid], irinotecan) arm in the KRAS wild-type population as identified using the therascreen assay. Of the 148 samples determined to be KRAS wild-type (therascreen assay), 141 (95.3%) samples were also KRAS wild-type (LNA assay) and 7 samples (4.7%) were KRAS mutant (LNA assay). The prespecified primary concordance measure p was 141/148 = 0.953 (95% confidence interval [CI], 0.905-0.981). The concordance was statistically significantly higher than the prespecified threshold of 0.8 for concordance between the therascreen assay and the LNA assay. Consistent with the concordance exceeding the prespecified threshold, the OS hazard ratio (cetuximab + FOLFIRI arm vs. FOLFIRI arm) in the KRAS wild-type population, determined by the therascreen assay, supported a significant benefit for cetuximab (ie, the 95% CI excluded 1) and was comparable to the OS hazard ratio observed in the CRYSTAL study KRAS wild-type population (LNA assay) even after adjustment for potentially confounding baseline variables. These results support the utility of the therascreen assay for identifying patients who may benefit from cetuximab therapy for metastatic colorectal cancer. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Preoperative Magnetic Resonance Volumetry in Predicting Myometrial Invasion, Lymphovascular Space Invasion, and Tumor Grade: Is It Valuable in International Federation of Gynecology and Obstetrics Stage I Endometrial Cancer?

    PubMed

    Sahin, Hilal; Sarioglu, Fatma Ceren; Bagci, Mustafa; Karadeniz, Tugba; Uluer, Hatice; Sanci, Muzaffer

    2018-05-01

    The aim of this retrospective single-center study was to evaluate the relationship between maximum tumor size, tumor volume, tumor volume ratio (TVR) based on preoperative magnetic resonance (MR) volumetry, and negative histological prognostic parameters (deep myometrial invasion [MI], lymphovascular space invasion, tumor histological grade, and subtype) in International Federation of Gynecology and Obstetrics stage I endometrial cancer. Preoperative pelvic MR imaging studies of 68 women with surgical-pathologic diagnosis of International Federation of Gynecology and Obstetrics stage I endometrial cancer were reviewed for assessment of MR volumetry and qualitative assessment of MI. Volume of the tumor and uterus was measured with manual tracing of each section on sagittal T2-weighted images. Tumor volume ratio was calculated according to the following formula: TVR = (total tumor volume/total uterine volume) × 100. Receiver operating characteristic (ROC) curve analysis was performed to investigate a threshold for TVR associated with MI. The Mann-Whitney U test, Kruskal-Wallis test, and linear regression analysis were applied to evaluate possible differences between tumor size, tumor volume, TVR, and negative prognostic parameters. ROC curve analysis of TVR for prediction of deep MI was statistically significant (P = 0.013). An optimal TVR threshold of 7.3% predicted deep myometrial invasion with 85.7% sensitivity, 46.8% specificity, 41.9% positive predictive value, and 88.0% negative predictive value. ROC curve analyses of TVR, tumor size, and tumor volume for prediction of tumor histological grade or lymphovascular space invasion were not significant. The concordance between radiologic and pathologic assessment for MI was almost excellent (κ value, 0.799; P < 0.001). Addition of TVR to standard radiologic assessment of deep MI increased the sensitivity from 90.5% to 95.2%. Tumor volume ratio, based on preoperative MR volumetry, seems to predict deep MI independently in stage I endometrial cancer with insufficient sensitivity and specificity. Its value in clinical practice for risk stratification models in endometrial cancer has to be studied in a larger cohort of patients.
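    The TVR formula quoted in this record is straightforward to apply; the snippet below computes it for hypothetical volumes and applies the reported 7.3% threshold for suspecting deep myometrial invasion.

```python
# TVR = (total tumor volume / total uterine volume) * 100, per the abstract.
def tumor_volume_ratio(tumor_volume_ml, uterine_volume_ml):
    return 100.0 * tumor_volume_ml / uterine_volume_ml

tvr = tumor_volume_ratio(9.5, 110.0)   # example volumes in ml (not from the study)
print(f"TVR = {tvr:.1f}% -> deep MI {'suspected' if tvr >= 7.3 else 'not suspected'}")
```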

  7. Singers' phonation threshold pressure and ratings of self-perceived effort on vocal tasks.

    PubMed

    McHenry, Monica; Evans, Joseph; Powitzky, Eric

    2013-05-01

    This study was designed to determine if singers' self-ratings of vocal effort could predict phonation threshold pressure (PTP). It was hypothesized that effort ratings on the more complex task of singing "Happy Birthday" would best predict PTP. A multiple regression analysis was performed with PTP as the predicted variable and self-ratings on four phonatory tasks as the predictor variables. Participants were 48 undergraduate and graduate students majoring in vocal performance. They produced /pi/ syllable trains as softly as possible for the measurement of PTP. They then rated their self-perceived vocal effort while softly producing the following: (1) sustained "ah" (comfortable, midrange pitch); (2) "ah" glide (chest to head voice); (3) Staccato "ah" in head voice (not falsetto); and (4) Happy Birthday in head voice (not falsetto). No ratings of vocal effort predicted PTP. The lack of correlation between PTP and ratings of Happy Birthday remained when separately evaluating graduate versus undergraduate students or males versus females. Informal evaluation of repeated ratings over time suggested the potential for effective self-monitoring. Students' ratings of self-perceived vocal effort were poor predictors of PTP. This may be because of the use of "effortless" imagery during singing instruction or consistent positive feedback regarding vocal performance. It is possible that self-rating could become an effective tool to predict vocal health if task elicitation instructions were more precise, and the student and voice teacher worked collaboratively to improve self-evaluation. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  8. Perceived pitch of vibrotactile stimuli: effects of vibration amplitude, and implications for vibration frequency coding.

    PubMed

    Morley, J W; Rowe, M J

    1990-12-01

    1. The effect of changes in amplitude on the perceived pitch of cutaneous vibratory stimuli was studied in psychophysical experiments designed to test whether the coding of information about the frequency of the vibration might be based on the ratio of recruitment of the PC (Pacinian corpuscle-associated) and RA (rapidly adapting) classes of tactile sensory fibres. The study was based on previous data which show that at certain vibration frequencies (e.g. 150 Hz) the ratio of recruitment of the PC and RA classes should vary as a function of vibration amplitude. 2. Sinusoidal vibration at either 30 Hz or 150 Hz, and at an amplitude 10 dB above subjective detection thresholds was delivered in a 1 s train to the distal phalangeal pad of the index finger in eight human subjects. This standard vibration was followed after 0.5 s by a 1 s comparison train of vibration which (unknown to the subject) was at the same frequency as the standard but at a range of amplitudes from 2 to 50 dB above the detection threshold. A two-alternative forced-choice procedure was used in which the subject had to indicate whether the comparison stimulus was higher or lower in pitch (frequency) than the standard. 3. Marked differences were seen from subject to subject in the effect of amplitude on perceived pitch at both 30 Hz and 150 Hz. At 150 Hz, five out of the eight subjects reported an increase in pitch as the amplitude of the comparison vibration increased, one experienced no change, and only two experienced the fall in perceived pitch that is predicted if the proposed ratio code contributes to vibrotactile pitch judgements. At 30 Hz similar intersubject variability was seen in the pitch-amplitude functions. 4. The results do not support the hypothesis that a ratio code contributes to vibrotactile pitch perception. We conclude that temporal patterning of impulse activity remains the major candidate code for pitch perception, at least over a substantial part of the vibrotactile frequency bandwidth.

  9. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.

  10. Electrically evoked compound action potential amplitude growth functions and HiResolution programming levels in pediatric CII implant subjects.

    PubMed

    Eisen, Marc D; Franck, Kevin H

    2004-12-01

    To characterize the amplitude growth functions of the electrically evoked compound action potential (ECAP) in pediatric subjects implanted with the Clarion HiFocus electrode array with respect to electrode position and the presence or absence of a Silastic positioner. Electrophysiologic growth function data are compared with HiResolution (HiRes) psychophysical programming levels. ECAP growth functions were measured for all electrodes along the implant's array in 16 pediatric subjects. Nine of the patients were implanted with a Silastic positioner, whereas seven had no positioner. ECAP thresholds and growth function slopes were calculated. Fifteen of the 16 patients had psychophysical threshold and maximum comfort levels available. Programming levels and ECAP thresholds were compared within and among the subjects. ECAP thresholds showed variability among patients, ranging from 178 to 920 nA at 32 µsec pulse width. ECAP thresholds did not depend on electrode position along the cochlea but were lower in the presence of the Silastic positioner (p < 0.001). Thresholds determined with the masker-probe versus the alternating polarity paradigms revealed moderate (r = 0.76) correlation. Growth function slopes also showed considerable variation among patients. Unlike thresholds, slopes decreased from apical to basal cochlear locations (p < 0.001) but showed no difference between the absence and presence of the positioner. Programming levels in HiRes were correlated with ECAP threshold levels. When ECAP thresholds were adjusted for each patient by the difference between M level and ECAP threshold at electrode 9, however, overall correlation between the two measurements was excellent (r = 0.98, N = 224). In pediatric subjects with the Clarion HiFocus electrode, ECAP growth function thresholds appear to decrease with the presence of the Silastic positioner but are unaffected by electrode position along the array. Growth function slope, however, depends on electrode position along the array but not on the presence of the positioner. ECAP programming levels can reliably predict stimulus intensities within the patients' dynamic ranges, but considerable variability is seen between ECAP thresholds and HiRes programming levels.

  11. Implications of Nine Risk Prediction Models for Selecting Ever-Smokers for Computed Tomography Lung Cancer Screening.

    PubMed

    Katki, Hormuzd A; Kovalchik, Stephanie A; Petito, Lucia C; Cheung, Li C; Jacobs, Eric; Jemal, Ahmedin; Berg, Christine D; Chaturvedi, Anil K

    2018-05-15

    Lung cancer screening guidelines recommend using individualized risk models to refer ever-smokers for screening. However, different models select different screening populations. The performance of each model in selecting ever-smokers for screening is unknown. To compare the U.S. screening populations selected by 9 lung cancer risk models (the Bach model; the Spitz model; the Liverpool Lung Project [LLP] model; the LLP Incidence Risk Model [LLPi]; the Hoggart model; the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Model 2012 [PLCOM2012]; the Pittsburgh Predictor; the Lung Cancer Risk Assessment Tool [LCRAT]; and the Lung Cancer Death Risk Assessment Tool [LCDRAT]) and to examine their predictive performance in 2 cohorts. Population-based prospective studies. United States. Models selected U.S. screening populations by using data from the National Health Interview Survey from 2010 to 2012. Model performance was evaluated using data from 337 388 ever-smokers in the National Institutes of Health-AARP Diet and Health Study and 72 338 ever-smokers in the CPS-II (Cancer Prevention Study II) Nutrition Survey cohort. Model calibration (ratio of model-predicted to observed cases [expected-observed ratio]) and discrimination (area under the curve [AUC]). At a 5-year risk threshold of 2.0%, the models chose U.S. screening populations ranging from 7.6 million to 26 million ever-smokers. These disagreements occurred because, in both validation cohorts, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) were well-calibrated (expected-observed ratio range, 0.92 to 1.12) and had higher AUCs (range, 0.75 to 0.79) than 5 models that generally overestimated risk (expected-observed ratio range, 0.83 to 3.69) and had lower AUCs (range, 0.62 to 0.75). The 4 best-performing models also had the highest sensitivity at a fixed specificity (and vice versa) and similar discrimination at a fixed risk threshold. These models showed better agreement on size of the screening population (7.6 million to 10.9 million) and achieved consensus on 73% of persons chosen. No consensus on risk thresholds for screening. The 9 lung cancer risk models chose widely differing U.S. screening populations. However, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) most accurately predicted risk and performed best in selecting ever-smokers for screening. Intramural Research Program of the National Institutes of Health/National Cancer Institute.

  12. Biodiversity response to natural gradients of multiple stressors on continental margins

    PubMed Central

    Sperling, Erik A.; Frieder, Christina A.; Levin, Lisa A.

    2016-01-01

    Sharp increases in atmospheric CO2 are resulting in ocean warming, acidification and deoxygenation that threaten marine organisms on continental margins and their ecological functions and resulting ecosystem services. The relative influence of these stressors on biodiversity remains unclear, as well as the threshold levels for change and when secondary stressors become important. One strategy to interpret adaptation potential and predict future faunal change is to examine ecological shifts along natural gradients in the modern ocean. Here, we assess the explanatory power of temperature, oxygen and the carbonate system for macrofaunal diversity and evenness along continental upwelling margins using variance partitioning techniques. Oxygen levels have the strongest explanatory capacity for variation in species diversity. Sharp drops in diversity are seen as O2 levels decline through the 0.5–0.15 ml l−1 (approx. 22–6 µM; approx. 21–5 matm) range, and as temperature increases through the 7–10°C range. pCO2 is the best explanatory variable in the Arabian Sea, but explains little of the variance in diversity in the eastern Pacific Ocean. By contrast, very little variation in evenness is explained by these three global change variables. The identification of sharp thresholds in ecological response are used here to predict areas of the seafloor where diversity is most at risk to future marine global change, noting that the existence of clear regional differences cautions against applying global thresholds. PMID:27122565

  13. Intercenter Differences in Bronchopulmonary Dysplasia or Death Among Very Low Birth Weight Infants

    PubMed Central

    Walsh, Michele; Bobashev, Georgiy; Das, Abhik; Levine, Burton; Carlo, Waldemar A.; Higgins, Rosemary D.

    2011-01-01

    OBJECTIVES: To determine (1) the magnitude of clustering of bronchopulmonary dysplasia (at 36 weeks) or death (the outcome) across centers of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network, (2) the infant-level variables associated with the outcome and estimate their clustering, and (3) the center-specific practices associated with the differences and build predictive models. METHODS: Data on neonates with a birth weight of <1250 g from the cluster-randomized benchmarking trial were used to determine the magnitude of clustering of the outcome according to alternating logistic regression by using pairwise odds ratio and predictive modeling. Clinical variables associated with the outcome were identified by using multivariate analysis. The magnitude of clustering was then evaluated after correction for infant-level variables. Predictive models were developed by using center-specific and infant-level variables for data from 2001 to 2004 and projected to 2006. RESULTS: In 2001–2004, clustering of bronchopulmonary dysplasia/death was significant (pairwise odds ratio: 1.3; P < .001) and increased in 2006 (pairwise odds ratio: 1.6; overall incidence: 52%; range across centers: 32%–74%); center rates were relatively stable over time. Variables that varied according to center and were associated with increased risk of outcome included lower body temperature at NICU admission, use of prophylactic indomethacin, specific drug therapy on day 1, and lack of endotracheal intubation. Center differences remained significant even after correction for clustered variables. CONCLUSION: Bronchopulmonary dysplasia/death rates demonstrated moderate clustering according to center. Clinical variables associated with the outcome were also clustered. Center differences after correction of clustered variables indicate the presence of as-yet unmeasured center variables. PMID:21149431

  14. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
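    A minimal sketch of the two-class Fisher-ratio screen with a permutation-based ("null distribution") threshold described in this record. Synthetic data stand in for the GC×GC-TOFMS tiles, and the tile and alignment machinery is omitted; only the F-ratio statistic and the idea of an empirical threshold follow the abstract.

```python
# Per-variable F-ratio between two classes, thresholded by a permutation null.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_vars = 6, 500
repressed = rng.normal(0, 1, (n_per_class, n_vars))
derepressed = rng.normal(0, 1, (n_per_class, n_vars))
derepressed[:, :20] += 3.0          # 20 truly class-distinguishing variables

def fisher_ratio(a, b):
    grand = np.vstack([a, b]).mean(axis=0)
    between = len(a) * (a.mean(0) - grand) ** 2 + len(b) * (b.mean(0) - grand) ** 2
    within = ((a - a.mean(0)) ** 2).sum(0) + ((b - b.mean(0)) ** 2).sum(0)
    # between-class df = 1 for two classes; within-class df = n_a + n_b - 2
    return between / (within / (len(a) + len(b) - 2))

f_obs = fisher_ratio(repressed, derepressed)

# Null distribution: shuffle class labels and record the F-ratios obtained
pooled = np.vstack([repressed, derepressed])
null = []
for _ in range(200):
    idx = rng.permutation(len(pooled))
    null.append(fisher_ratio(pooled[idx[:n_per_class]], pooled[idx[n_per_class:]]))
threshold = np.percentile(np.concatenate(null), 99)   # ignore F-ratios below this

hits = np.flatnonzero(f_obs > threshold)
print(f"threshold = {threshold:.1f}, {len(hits)} candidate class-distinguishing variables")
```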

  15. Threshold for the destabilisation of the ion-temperature-gradient mode in magnetically confined toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Zocco, A.; Xanthopoulos, P.; Doerk, H.; Connor, J. W.; Helander, P.

    2018-02-01

    The threshold for the resonant destabilisation of ion-temperature-gradient (ITG) driven instabilities that renders the modes ubiquitous in both tokamaks and stellarators is investigated. We discover remarkably similar results for both confinement concepts if care is taken in the analysis of the effect of the global shear. We revisit, analytically and by means of gyrokinetic simulations, accepted tokamak results and discover inadequacies in some aspects of their theoretical interpretation. In particular, for standard tokamak configurations, we find that global shear effects on the critical gradient cannot be attributed to the wave-particle resonance destabilising mechanism of Hahm & Tang (Phys. Fluids B, vol. 1, 1989, pp. 1185-1192), but are consistent with a stabilising contribution predicted by Biglari et al. (Phys. Fluids B, vol. 1, 1989, pp. 109-118). Extensive analytical and numerical investigations show that virtually no previous tokamak theoretical predictions capture the temperature dependence of the mode frequency at marginality, thus leading to incorrect instability thresholds. In an asymptotic limit involving the rotational transform, where such a threshold should be solely determined by the resonant toroidal branch of the ITG mode, we discover a family of unstable solutions below the previously known threshold of instability. This is true for a tokamak case described by a local equilibrium, and for the stellarator Wendelstein 7-X, where these unstable solutions are present even for configurations with a small trapped-particle population. We conjecture they are of the Floquet type and derive their properties from the Fourier analysis of toroidal drift modes of Connor & Taylor (Phys. Fluids, vol. 30, 1987, pp. 3180-3185) and from Hill's theory of the motion of the lunar perigee (Acta Math., vol. 8, 1886, pp. 1-36). The temperature dependence of the newly determined threshold is given for both confinement concepts. In the first case, the new temperature-gradient threshold is found to be rather insensitive to the temperature ratio Ti/Te, at least for Ti/Te ≲ 1, and to be a growing function of the density gradient scale for Ti/Te ≳ 1. For Wendelstein 7-X, the new critical temperature gradient is a growing function of the temperature ratio. The importance of these findings for the assessment of turbulence in stellarators and low-shear tokamak configurations is discussed.

  16. The role of NT-proBNP in explaining the variance in anaerobic threshold and VE/VCO(2) slope.

    PubMed

    Athanasopoulos, Leonidas V; Dritsas, Athanasios; Doll, Helen A; Cokkinos, Dennis V

    2011-01-01

    We investigated whether anaerobic threshold (AT) and ventilatory efficiency (minute ventilation/carbon dioxide production slope, VE/VCO2 slope), both significantly associated with mortality, can be predicted by questionnaire scores and/or other laboratory measurements. Anaerobic threshold and VE/VCO2 slope, plasma N-terminal pro-brain natriuretic peptide (NT-proBNP), and the echocardiographic markers left ventricular ejection fraction (LVEF) and left atrial (LA) diameter were measured in 62 patients with heart failure (HF), who also completed the Minnesota Living with Heart Failure Questionnaire (MLHF), and the Specific Activity Questionnaire (SAQ). Linear regression models, adjusting for age and gender, were fitted. While the etiology of HF, SAQ score, MLHF score, LVEF, LA diameter, and logNT-proBNP were each significantly predictive of both AT and VE/VCO2 slope on stepwise multiple linear regression, only SAQ score (P < .001) and logNT-proBNP (P = .001) were significantly predictive of AT, explaining 56% of the variability (adjusted R(2) = 0.525), while logNT-proBNP (P < .001) and etiology of HF (P = .003) were significantly predictive of VE/VCO2 slope, explaining 49% of the variability (adjusted R(2) = 0.45). The area under the ROC curve for NT-proBNP to identify patients with a VE/VCO2 slope greater than 34 and AT less than 11 mL · kg(-1) · min(-1) was 0.797 (P < .001) and 0.712 (P = .044), respectively. A plasma concentration greater than 429.5 pg/mL (sensitivity: 78%; specificity: 70%) and greater than 674.5 pg/mL (sensitivity: 77.8%; specificity: 65%) identified a VE/VCO2 slope greater than 34 and AT lower than 11 mL · kg(-1) · min(-1), respectively. NT-proBNP is independently related to both AT and VE/VCO2 slope. Specific Activity Questionnaire score is independently related only to AT and the etiology of HF only to VE/VCO2 slope.

  17. When is Chemical Similarity Significant? The Statistical Distribution of Chemical Similarity Scores and Its Extreme Values

    PubMed Central

    Baldi, Pierre

    2010-01-01

    As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework allows one to predict also the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments carried in part with large sets of molecules from the ChemDB show remarkable agreement between theory and empirical results. PMID:20540577
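    The Tanimoto score discussed in this record is |A ∩ B| / |A ∪ B| on binary fingerprints. The sketch below computes it for a synthetic query against a random fingerprint database and reports a simple Z-score for the best hit; fitting the Weibull extreme-value model from the paper is not attempted, and the fingerprint parameters are illustrative.

```python
# Tanimoto similarity on binary fingerprints, with an empirical Z-score for the top hit.
import numpy as np

rng = np.random.default_rng(0)

def tanimoto(a, b):
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 0.0

query = rng.random(1024) < 0.1                   # ~10% bits set (synthetic fingerprint)
database = rng.random((10_000, 1024)) < 0.1      # synthetic fingerprint database

scores = np.array([tanimoto(query, fp) for fp in database])
print(f"mean = {scores.mean():.3f}, std = {scores.std():.3f}, max = {scores.max():.3f}")
print(f"Z of best hit = {(scores.max() - scores.mean()) / scores.std():.1f}")
```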

  18. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  19. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali

    PubMed Central

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-01-01

    Background The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. Methods A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. Results The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odds Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission patterns were modelled and observed values were adequately predicted. The error parameters showed the smallest values for a monthly model of environmental changes. Conclusion Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitation, particularly in Sudanese savannah environments. A non-linear model combining environmental variables, predisposition factors and transmission patterns can be used for community level risk evaluation. PMID:19361335
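    The Diagnostic Odds Ratio reported in this record is (TP/FN)/(FP/TN) for the NDVI threshold treated as a test of an incidence increase. The worked example below uses hypothetical 2x2 counts chosen only to show the arithmetic; the study's actual counts are not given in the record.

```python
# DOR = (TP/FN) / (FP/TN) for a binary "NDVI above threshold" test.
def diagnostic_odds_ratio(tp, fp, fn, tn):
    return (tp / fn) / (fp / tn)

# Hypothetical fortnights cross-classified by "NDVI > 0.361" and "incidence increased"
tp, fp, fn, tn = 40, 35, 20, 46
dor = diagnostic_odds_ratio(tp, fp, fn, tn)
print(f"DOR = {dor:.2f}")   # values near the reported 2.64 indicate modest discrimination
```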

  20. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali.

    PubMed

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-04-10

    The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on the environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), derived from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odds Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission patterns were modelled and observed values were adequately predicted. The error parameters showed the smallest values for a monthly model of environmental changes. Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitation, particularly in Sudanese savannah environments. A non-linear model combining environmental variables, predisposition factors and transmission patterns can be used for community level risk evaluation.
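
    The transmission component described above is an SIRS-type compartment model whose rates are driven by an environmental factor. The sketch below integrates a generic SIRS system with a seasonally modulated transmission rate; the parameter values and the sinusoidal forcing are illustrative assumptions standing in for NDVI, not the fitted Bancoumana model.

      # Generic SIRS model with a seasonally modulated transmission rate (illustrative only).
      import numpy as np
      from scipy.integrate import solve_ivp

      def sirs(t, y, beta0, gamma, delta, amplitude):
          s, i, r = y
          # Sinusoidal stand-in for an environmental (NDVI-like) modulation of transmission.
          beta = beta0 * (1.0 + amplitude * np.sin(2 * np.pi * t / 365.0))
          return [-beta * s * i + delta * r,
                  beta * s * i - gamma * i,
                  gamma * i - delta * r]

      sol = solve_ivp(sirs, (0, 5 * 365), [0.99, 0.01, 0.0],
                      args=(0.3, 1 / 20, 1 / 180, 0.5))
      print(round(float(sol.y[1].max()), 3))  # peak infected fraction over the simulated period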

  1. Gravity wave control on ESF day-to-day variability: An empirical approach

    NASA Astrophysics Data System (ADS)

    Aswathy, R. P.; Manju, G.

    2017-06-01

    The gravity wave control on the daily variation in nighttime ionization irregularity occurrence is studied using ionosonde data for the period 2002-2007 at magnetic equatorial location Trivandrum. Recent studies during low solar activity period have revealed that the seed perturbations should have the threshold amplitude required to trigger equatorial spread F (ESF), at a particular altitude and that this threshold amplitude undergoes seasonal and solar cycle changes. In the present study, the altitude variation of the threshold seed perturbations is examined for autumnal equinox of different years. Thereafter, a unique empirical model, incorporating the electrodynamical effects and the gravity wave modulation, is developed. Using the model, the threshold curve for autumnal equinox season of any year may be delineated if the solar flux index (F10.7) is known. The empirical model is validated using the data for high, moderate, and low solar epochs in 2001, 2004, and 1995, respectively. This model has the potential to be developed further, to forecast ESF incidence, if the base height of ionosphere is in the altitude region where electrodynamics controls the occurrence of ESF. ESF irregularities are harmful for communication and navigation systems, and therefore, research is ongoing globally to predict them. In this context, this study is crucial for evolving a methodology to predict communication as well as navigation outages. Plain Language Summary: The manifestation of nocturnal ionospheric irregularities at magnetic equatorial regions poses a major hazard for communication and navigation systems. It is therefore essential to arrive at prediction methodologies for these irregularities. The present study puts forth a novel empirical model which, using only solar flux index, successfully differentiates between days with and without nocturnal ionization irregularity occurrence. The model-derived curve is obtained such that the days with and without occurrence of irregularities lie below and above the curve. The model is validated with data from the years 2001 (high solar activity), 2004 (moderate solar activity), and 1995 (low solar activity) which have not been used in the model development. Presently, the model is developed for autumnal equinox season, but the model development will be undertaken for other seasons also in a future work so that the seasonal variability is also incorporated. This model thus holds the potential to be developed into a full-fledged model which can predict occurrence of nocturnal ionospheric irregularities. Globally, concerted efforts are underway to predict these ionospheric irregularities. Hence, this study is extremely important from the point of view of predicting communication and navigation outages.

  2. Fatigue life prediction of rotor blade composites: Validation of constant amplitude formulations with variable amplitude experiments

    NASA Astrophysics Data System (ADS)

    Westphal, T.; Nijssen, R. P. L.

    2014-12-01

    The effect of Constant Life Diagram (CLD) formulation on the fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative for wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results the prediction accuracy of 4 CLD formulations is investigated. In the study a piecewise linear CLD based on the S-N curves for 9 load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study Boerstra's Multislope model provides a good alternative at reduced test effort.

  3. Prediction of Preeclampsia Using the Soluble fms-Like Tyrosine Kinase 1 to Placental Growth Factor Ratio

    PubMed Central

    Gaccioli, Francesca; Cook, Emma; Hund, Martin; Charnock-Jones, D. Stephen; Smith, Gordon C.S.

    2017-01-01

    We sought to assess the ratio of sFlt-1 (soluble fms-like tyrosine kinase 1) to PlGF (placental growth factor) in maternal serum as a screening test for preeclampsia in unselected nulliparous women with a singleton pregnancy. We studied 4099 women recruited to the POP study (Pregnancy Outcome Prediction) (Cambridge, United Kingdom). The sFlt-1:PlGF ratio was measured using the Roche Cobas e411 platform at ≈20, ≈28, and ≈36 weeks of gestational age (wkGA). Screen positive was defined as an sFlt-1:PlGF ratio >38, but higher thresholds were also studied. At 28 wkGA, an sFlt-1:PlGF ratio >38 had a positive predictive value (PPV) of 32% for preeclampsia and preterm birth, and the PPV was similar comparing women with low and high prior risk of disease. At 36 wkGA, an sFlt-1:PlGF ratio >38 had a PPV for severe preeclampsia of 20% in high-risk women and 6.4% in low-risk women. At 36 wkGA, an sFlt-1:PlGF ratio >110 had a PPV of 30% for severe preeclampsia, and the PPV was similar comparing low- and high-risk women. Overall, at 36 wkGA, 195 (5.2%) women either had an sFlt-1:PlGF ratio of >110 or an sFlt-1:PlGF ratio >38 plus maternal risk factors: 43% of these women developed preeclampsia, about half with severe features. Among low-risk women at 36 wkGA, an sFlt-1:PlGF ratio ≤38 had a negative predictive value for severe preeclampsia of 99.2%. The sFlt-1:PlGF ratio provided clinically useful prediction of the risk of the most important manifestations of preeclampsia in a cohort of unselected nulliparous women. PMID:28167687
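
    The screening quantities reported above (positive and negative predictive values) follow from sensitivity, specificity and the prevalence of the outcome in the screened group. A minimal Bayes-rule sketch of that relation; the input values are placeholders, not estimates from the POP study.

      # Screening-metric sketch: PPV and NPV from sensitivity, specificity, prevalence (placeholder inputs).
      def ppv_npv(sensitivity, specificity, prevalence):
          tp = sensitivity * prevalence
          fp = (1 - specificity) * (1 - prevalence)
          fn = (1 - sensitivity) * prevalence
          tn = specificity * (1 - prevalence)
          return tp / (tp + fp), tn / (tn + fn)

      ppv, npv = ppv_npv(sensitivity=0.7, specificity=0.95, prevalence=0.03)
      print(round(ppv, 3), round(npv, 3))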

  4. Ocean eddies and climate predictability

    NASA Astrophysics Data System (ADS)

    Kirtman, Ben P.; Perlin, Natalie; Siqueira, Leo

    2017-12-01

    A suite of coupled climate model simulations and experiments are used to examine how resolved mesoscale ocean features affect aspects of climate variability, air-sea interactions, and predictability. In combination with control simulations, experiments with the interactive ensemble coupling strategy are used to further amplify the role of the oceanic mesoscale field and the associated air-sea feedbacks and predictability. The basic intent of the interactive ensemble coupling strategy is to reduce the atmospheric noise at the air-sea interface, allowing an assessment of how noise affects the variability, and in this case, it is also used to diagnose predictability from the perspective of signal-to-noise ratios. The climate variability is assessed from the perspective of sea surface temperature (SST) variance ratios, and it is shown that, unsurprisingly, mesoscale variability significantly increases SST variance. Perhaps surprising is the fact that the presence of mesoscale ocean features even further enhances the SST variance in the interactive ensemble simulation beyond what would be expected from simple linear arguments. Changes in the air-sea coupling between simulations are assessed using pointwise convective rainfall-SST and convective rainfall-SST tendency correlations and again emphasize how the oceanic mesoscale alters the local association between convective rainfall and SST. Understanding the possible relationships between the SST-forced signal and the weather noise is critically important in climate predictability. We use the interactive ensemble simulations to diagnose this relationship, and we find that the presence of mesoscale ocean features significantly enhances this link particularly in ocean eddy rich regions. Finally, we use signal-to-noise ratios to show that the ocean mesoscale activity increases model estimated predictability in terms of convective precipitation and atmospheric upper tropospheric circulation.

  5. Ocean eddies and climate predictability.

    PubMed

    Kirtman, Ben P; Perlin, Natalie; Siqueira, Leo

    2017-12-01

    A suite of coupled climate model simulations and experiments are used to examine how resolved mesoscale ocean features affect aspects of climate variability, air-sea interactions, and predictability. In combination with control simulations, experiments with the interactive ensemble coupling strategy are used to further amplify the role of the oceanic mesoscale field and the associated air-sea feedbacks and predictability. The basic intent of the interactive ensemble coupling strategy is to reduce the atmospheric noise at the air-sea interface, allowing an assessment of how noise affects the variability, and in this case, it is also used to diagnose predictability from the perspective of signal-to-noise ratios. The climate variability is assessed from the perspective of sea surface temperature (SST) variance ratios, and it is shown that, unsurprisingly, mesoscale variability significantly increases SST variance. Perhaps surprising is the fact that the presence of mesoscale ocean features even further enhances the SST variance in the interactive ensemble simulation beyond what would be expected from simple linear arguments. Changes in the air-sea coupling between simulations are assessed using pointwise convective rainfall-SST and convective rainfall-SST tendency correlations and again emphasize how the oceanic mesoscale alters the local association between convective rainfall and SST. Understanding the possible relationships between the SST-forced signal and the weather noise is critically important in climate predictability. We use the interactive ensemble simulations to diagnose this relationship, and we find that the presence of mesoscale ocean features significantly enhances this link particularly in ocean eddy rich regions. Finally, we use signal-to-noise ratios to show that the ocean mesoscale activity increases model estimated predictability in terms of convective precipitation and atmospheric upper tropospheric circulation.

  6. Short-term Drought Prediction in India.

    NASA Astrophysics Data System (ADS)

    Shah, R.; Mishra, V.

    2014-12-01

    Medium range soil moisture drought forecast helps in decision making in the field of agriculture and water resources management. Part of the skill in medium range drought forecasts comes from precipitation. Proper evaluation and correction of precipitation forecasts may improve drought predictions. Here, we evaluate skills of ensemble mean precipitation forecast from the Global Ensemble Forecast System (GEFS) for medium range drought predictions over India. Climatological mean (CLIM) of historic data (OBS) was used as the reference forecast to evaluate the GEFS precipitation forecast. Analysis was conducted based on forecasts initiated on the 1st and 15th dates of each month for leads up to 7 days. Correlation and RMSE were used to estimate skill scores of accumulated GEFS precipitation forecasts from lead 1 to 7 days. Volumetric indices based on the 2x2 contingency table were used to check missed and falsely predicted historic volume of daily precipitation from GEFS in different regions and at different thresholds. GEFS showed an improvement in correlation over CLIM of 0.44 during the monsoon season and 0.55 during the winter season. GEFS also showed lower RMSE than CLIM: the ratio of RMSE in GEFS to that in CLIM was 0.82 and 0.4 (perfect skill is at zero) during the monsoon and winter season, respectively. We finally used the corrected GEFS forecast to drive the Variable Infiltration Capacity (VIC) model, which was used to develop short-term forecasts of hydrologic and agricultural (soil moisture) droughts in India.
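
    The skill measures used in the drought entry above are a correlation against observations and the ratio of forecast RMSE to the RMSE of a climatological reference. A small sketch of those two scores on synthetic series; the data are random placeholders, not GEFS output.

      # Illustrative forecast skill scores against a climatological baseline (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 5.0, size=200)              # synthetic "observed" precipitation
      forecast = obs + rng.normal(0.0, 2.0, size=200)  # synthetic forecast with added noise
      climatology = np.full_like(obs, obs.mean())      # reference forecast (climatological mean)

      def rmse(a, b):
          return float(np.sqrt(np.mean((a - b) ** 2)))

      corr = float(np.corrcoef(forecast, obs)[0, 1])
      rmse_ratio = rmse(forecast, obs) / rmse(climatology, obs)   # < 1 means better than climatology
      print(round(corr, 2), round(rmse_ratio, 2))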

  7. A Stability Analysis for a Hydrodynamic Three-Wave Journal Bearing

    NASA Technical Reports Server (NTRS)

    Ene, Nicoleta M.; Dimofte, Florin; Keith, Theo G., Jr.

    2007-01-01

    The influence of the wave amplitude and oil supply pressure on the dynamic behavior of a hydrodynamic three-wave journal bearing is presented. Both a transient and a small perturbation technique were used to predict the threshold to fractional frequency whirl (FFW). In addition, the behavior of the rotor after FFW appeared was determined from the transient analysis. The turbulent effects were also included in the computations. Bearings having a diameter of 30 mm, a length of 27.5 mm, and a clearance of 35 microns were analyzed. Numerical results were compared to experimental results obtained at the NASA GRC. Numerical and experimental results showed that the above-mentioned wave bearing with a wave amplitude ratio of 0.305 operates stably at rotational speeds up to 60,000 rpm, regardless of the oil supply pressure. For smaller wave amplitude ratios, a threshold of stability was found. It was observed that the threshold of stability for lower wave amplitude strongly depends on the oil supply pressure and on the wave amplitude. When the FFW occurs, the journal center maintains its trajectory inside the bearing clearance and therefore the rotor can be run safely without damaging the bearing surfaces.

  8. From innervation density to tactile acuity: 1. Spatial representation.

    PubMed

    Brown, Paul B; Koerber, H Richard; Millecchia, Ronald

    2004-06-11

    We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.

  9. Predicting the effect of cytochrome P450 inhibitors on substrate drugs: analysis of physiologically based pharmacokinetic modeling submissions to the US Food and Drug Administration.

    PubMed

    Wagner, Christian; Pan, Yuzhuo; Hsu, Vicky; Grillo, Joseph A; Zhang, Lei; Reynolds, Kellie S; Sinha, Vikram; Zhao, Ping

    2015-01-01

    The US Food and Drug Administration (FDA) has seen a recent increase in the application of physiologically based pharmacokinetic (PBPK) modeling towards assessing the potential of drug-drug interactions (DDI) in clinically relevant scenarios. To continue our assessment of such approaches, we evaluated the predictive performance of PBPK modeling in predicting cytochrome P450 (CYP)-mediated DDI. This evaluation was based on 15 substrate PBPK models submitted by nine sponsors between 2009 and 2013. For these 15 models, a total of 26 DDI studies (cases) with various CYP inhibitors were available. Sponsors developed the PBPK models, reportedly without considering clinical DDI data. Inhibitor models were either developed by sponsors or provided by PBPK software developers and applied with minimal or no modification. The metric for assessing predictive performance of the sponsors' PBPK approach was the R_predicted/observed value (R_predicted/observed = [predicted mean exposure ratio]/[observed mean exposure ratio], with the exposure ratio defined as [C_max (maximum plasma concentration) or AUC (area under the plasma concentration-time curve) in the presence of CYP inhibition]/[C_max or AUC in the absence of CYP inhibition]). In 81% (21/26) and 77% (20/26) of cases, respectively, the R_predicted/observed values for AUC and C_max ratios were within a pre-defined threshold of 1.25-fold of the observed data. For all cases, the R_predicted/observed values for AUC and C_max were within a 2-fold range. These results suggest that, based on the submissions to the FDA to date, there is a high degree of concordance between PBPK-predicted and observed effects of CYP inhibition, especially CYP3A-based, on the exposure of drug substrates.
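
    The acceptance metric described in the PBPK entry above compares a predicted exposure ratio with the observed one and asks whether the quotient falls within a fold-criterion. A small sketch of that check; the AUC ratios are invented examples, not submission data.

      # R_predicted/observed check for a DDI prediction (illustrative numbers only).
      def r_pred_obs(auc_ratio_predicted, auc_ratio_observed, fold=1.25):
          r = auc_ratio_predicted / auc_ratio_observed
          within = (1.0 / fold) <= r <= fold
          return r, within

      # e.g. a predicted 2.1-fold AUC increase with the inhibitor vs. a 2.4-fold observed increase
      print(r_pred_obs(2.1, 2.4))   # (0.875, True) -> within the 1.25-fold criterion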

  10. Non-traditional Serum Lipid Variables and Recurrent Stroke Risk

    PubMed Central

    Park, Jong-Ho; Lee, Juneyoung; Ovbiagele, Bruce

    2014-01-01

    Background and Purpose Expert consensus guidelines recommend low-density lipoprotein cholesterol (LDL-C) as the primary serum lipid target for recurrent stroke risk reduction. However, mounting evidence suggests that other lipid parameters might be additional therapeutic targets or at least also predict cardiovascular risk. Little is known about the effects of non-traditional lipid variables on recurrent stroke risk. Methods We analyzed the Vitamin Intervention for Stroke Prevention study database comprising 3680 recent (<120 days) ischemic stroke patients followed up for 2 years. Independent associations of baseline serum lipid variables with recurrent ischemic stroke (primary outcome) and the composite endpoint of ischemic stroke/coronary heart disease (CHD)/vascular death (secondary outcomes) were assessed. Results Of all variables evaluated, only triglycerides (TG)/high-density lipoprotein cholesterol (HDL-C) ratio was consistently and independently related to both outcomes: compared with the lowest quintile, the highest TG/HDL-C ratio quintile was associated with stroke (adjusted hazard ratio, 1.56; 95% CI, 1.05−2.32) and stroke/CHD/vascular death (1.39; 1.05−1.83), including adjustment for lipid modifier use. Compared with the lowest quintile, the highest total cholesterol/HDL-C ratio quintile was associated with stroke/CHD/vascular death (1.45; 1.03−2.03). LDL-C/HDL-C ratio, non-HDL-C, elevated TG alone, and low HDL-C alone were not independently linked to either outcome. Conclusions Of various non-traditional lipid variables, elevated baseline TG/HDL-C and TC/HDL-C ratios predict future vascular risk after a stroke, but only elevated TG/HDL-C ratio is related to risk of recurrent stroke. Future studies should assess the role of TG/HDL as a potential therapeutic target for global vascular risk reduction after stroke. PMID:25236873

  11. Recognition of speech in noise after application of time-frequency masks: Dependence on frequency and threshold parameters

    PubMed Central

    Sinex, Donal G.

    2013-01-01

    Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly-perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
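
    An ideal binary time-frequency mask of the kind studied above keeps only the cells in which the local speech-to-noise ratio exceeds a criterion. A minimal numpy sketch under simplifying assumptions: it uses an STFT grid rather than the octave-band analysis in the study, a 0 dB criterion, and toy signals in place of recorded sentences.

      # Ideal binary mask sketch: keep mixture cells whose local SNR exceeds a criterion (toy signals).
      import numpy as np
      from scipy.signal import stft, istft

      def ideal_binary_mask(speech, noise, fs, criterion_db=0.0):
          _, _, S = stft(speech, fs=fs, nperseg=512)            # clean speech spectrogram
          _, _, N = stft(noise, fs=fs, nperseg=512)             # noise spectrogram
          _, _, M = stft(speech + noise, fs=fs, nperseg=512)    # mixture spectrogram
          local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
          mask = local_snr_db > criterion_db
          _, masked = istft(M * mask, fs=fs, nperseg=512)
          return masked

      fs = 16000
      rng = np.random.default_rng(1)
      speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)     # toy stand-in for speech
      noise = rng.normal(0.0, 0.5, fs)
      out = ideal_binary_mask(speech, noise, fs)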

  12. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    PubMed Central

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007

  13. Numerical study of the effects of contact angle and viscosity ratio on the dynamics of snap-off through porous media

    NASA Astrophysics Data System (ADS)

    Starnoni, Michele; Pokrajac, Dubravka

    2018-01-01

    Snap-off is a pore-scale mechanism occurring in porous media in which a bubble of non-wetting phase displacing a wetting phase, and vice-versa, can break-up into ganglia when passing through a constriction. This mechanism is very important in foam generation processes, enhanced oil recovery techniques and capillary trapping of CO2 during its geological storage. In the present study, the effects of contact angle and viscosity ratio on the dynamics of snap-off are examined by simulating drainage in a single pore-throat constriction of variable cross-section, and for different pore-throat geometries. To model the flow, we developed a CFD code based on the Finite Volume method. The Volume-of-fluid method is used to track the interfaces. Results show that the threshold contact angle for snap-off, i.e. snap-off occurs only for contact angles smaller than the threshold, increases from a value of 28° for a circular cross-section to 30-34° for a square cross-section and up to 40° for a triangular one. For a throat of square cross-section, increasing the viscosity of the injected phase results in a drop in the threshold contact angle from a value of 30° when the viscosity ratio μ̄ is equal to 1 to 26° when μ̄ = 20 and down to 24° when μ̄ = 20.

  14. Clinical Characteristics and Outcomes Are Similar in ARDS Diagnosed by Oxygen Saturation/Fio2 Ratio Compared With Pao2/Fio2 Ratio

    PubMed Central

    Janz, David R.; Shaver, Ciara M.; Bernard, Gordon R.; Bastarache, Julie A.; Ware, Lorraine B.

    2015-01-01

    BACKGROUND: Oxygen saturation as measured by pulse oximetry/Fio2 (SF) ratio is highly correlated with the Pao2/Fio2 (PF) ratio in patients with ARDS. However, it remains uncertain whether SF ratio can be substituted for PF ratio for diagnosis of ARDS and whether SF ratio might identify patients who are systemically different from patients diagnosed by PF ratio. METHODS: We conducted a secondary analysis of a large observational prospective cohort study. Patients were eligible if they were admitted to the medical ICU and fulfilled the Berlin definition of ARDS with hypoxemia criteria using either the standard PF threshold (PF ratio ≤ 300) or a previously published SF threshold (SF ratio ≤ 315). RESULTS: Of 362 patients with ARDS, 238 (66%) received a diagnosis by PF ratio and 124 (34%) by SF ratio. In a small group of patients who received diagnoses of ARDS by SF ratio who had arterial blood gas measurements on the same day (n = 10), the PF ratio did not meet ARDS criteria. There were no major differences in clinical characteristics or comorbidities between groups with the exception of APACHE (Acute Physiology and Chronic Health Evaluation) II scores, which were higher in the group diagnosed by PF ratio. However, this difference was no longer apparent when arterial blood gas-dependent variables (pH, Pao2) were removed from the APACHE II score. There were also no differences in clinical outcomes including duration of mechanical ventilation (mean, 7 days in both groups; P = .25), duration of ICU stay (mean, 10 days vs 9 days in PF ratio vs SF ratio; P = .26), or hospital mortality (36% in both groups, P = .9). CONCLUSIONS: Patients with ARDS diagnosed by SF ratio have very similar clinical characteristics and outcomes compared with patients diagnosed by PF ratio. These findings suggest that SF ratio could be considered as a diagnostic tool for early enrollment into clinical trials. PMID:26271028
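
    The two hypoxemia criteria compared above are simple thresholds on the PF and SF ratios. A trivial sketch of that rule using the thresholds quoted in the abstract; the patient values are invented, and the real Berlin definition involves further criteria (timing, imaging, PEEP) that this sketch ignores.

      # Hypoxemia-criterion sketch only: PF ratio <= 300 or the published SF-ratio surrogate <= 315.
      def meets_hypoxemia_criterion(pao2=None, spo2=None, fio2=0.21):
          if pao2 is not None and pao2 / fio2 <= 300:       # PF ratio criterion (PaO2 in mmHg)
              return True
          if spo2 is not None and spo2 / fio2 <= 315:       # SF ratio criterion (SpO2 in percent)
              return True
          return False

      print(meets_hypoxemia_criterion(pao2=90, fio2=0.40))   # PF = 225 -> True
      print(meets_hypoxemia_criterion(spo2=97, fio2=0.30))   # SF ~ 323 -> False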

  15. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  16. Modeling of cw OIL energy performance based on similarity criteria

    NASA Astrophysics Data System (ADS)

    Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.

    2012-01-01

    A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of active medium in the resonator to the O2(1Δ) reduction time at the infinitely large intraresonator intensity. Criterion Π is the small-signal gain to threshold ratio. Criterion Λ is the relaxation to excitation rate ratio for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria are located in the intervals τd = 5-8, Π = 3-8 and Λ ≤ 0.01.

  17. Finite element model of thermal processes in retinal photocoagulation

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel

    2009-02-01

    Short duration (< 20 ms) pulses are desirable in patterned scanning laser photocoagulation to confine thermal damage to the photoreceptor layer, decrease overall treatment time and reduce pain. However, short exposures have a smaller therapeutic window (defined as the ratio of rupture threshold power to that of light coagulation). We have constructed a finite-element computational model of retinal photocoagulation to predict spatial damage and improve the therapeutic window. Model parameters were inferred from experimentally measured absorption characteristics of ocular tissues, as well as the thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Calculated lesion diameters showed good agreement with histological measurements over a wide range of pulse durations and powers.

  18. Audiogram and auditory critical ratios of two Florida manatees (Trichechus manatus latirostris).

    PubMed

    Gaspard, Joseph C; Bauer, Gordon B; Reep, Roger L; Dziuk, Kimberly; Cardwell, Adrienne; Read, Latoshia; Mann, David A

    2012-05-01

    Manatees inhabit turbid, shallow-water environments and have been shown to have poor visual acuity. Previous studies on hearing have demonstrated that manatees possess good hearing and sound localization abilities. The goals of this research were to determine the hearing abilities of two captive subjects and measure critical ratios to understand the capacity of manatees to detect tonal signals, such as manatee vocalizations, in the presence of noise. This study was also undertaken to better understand individual variability, which has been encountered during behavioral research with manatees. Two Florida manatees (Trichechus manatus latirostris) were tested in a go/no-go paradigm using a modified staircase method, with incorporated 'catch' trials at a 1:1 ratio, to assess their ability to detect single-frequency tonal stimuli. The behavioral audiograms indicated that the manatees' auditory frequency detection for tonal stimuli ranged from 0.25 to 90.5 kHz, with peak sensitivity extending from 8 to 32 kHz. Critical ratios, thresholds for tone detection in the presence of background masking noise, were determined with one-octave wide noise bands, 7-12 dB (spectrum level) above the thresholds determined for the audiogram under quiet conditions. Manatees appear to have quite low critical ratios, especially at 8 kHz, where the ratio was 18.3 dB for one manatee. This suggests that manatee hearing is sensitive in the presence of background noise and that they may have relatively narrow filters in the tested frequency range.
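
    A critical ratio, as used above, is the difference between the masked tone threshold and the spectrum level of the masking noise (band level minus 10·log10 of the bandwidth). A small sketch of that arithmetic with invented levels, not the manatee measurements.

      # Critical ratio = masked tone threshold (dB) - noise spectrum level (dB per Hz); invented levels.
      import math

      def critical_ratio(masked_threshold_db, noise_band_level_db, bandwidth_hz):
          spectrum_level = noise_band_level_db - 10 * math.log10(bandwidth_hz)
          return masked_threshold_db - spectrum_level

      # Hypothetical one-octave band centred at 8 kHz (~5657-11314 Hz), overall band level 90 dB
      print(round(critical_ratio(masked_threshold_db=71, noise_band_level_db=90,
                                 bandwidth_hz=11314 - 5657), 1))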

  19. Putting Temperature and Oxygen Thresholds of Marine Animals in Context of Environmental Change: A Regional Perspective for the Scotian Shelf and Gulf of St. Lawrence

    PubMed Central

    2016-01-01

    We conducted a literature review of reported temperature, salinity, pH, depth and oxygen preferences and thresholds of important marine species found in the Gulf of St. Lawrence and Scotian Shelf region. We classified 54 identified fishes and macroinvertebrates as important either because they support a commercial fishery, have threatened or at risk status, or meet one of the following criteria: bycatch, baitfish, invasive, vagrant, important for ecosystem energy transfer, or predators or prey of the above species. The compiled data allow an assessment of species-level impacts including physiological stress and mortality given predictions of future ocean physical and biogeochemical conditions. If an observed, multi-decadal oxygen trend on the central Scotian Shelf continues, a number of species will lose favorable oxygen conditions, experience oxygen-stress, or disappear due to insufficient oxygen in the coming half-century. Projected regional trends and natural variability are both large, and natural variability will act to alternately amplify and dampen anthropogenic changes. When estimates of variability are included with the trend, species encounter unfavourable oxygen conditions decades sooner. Finally, temperature and oxygen thresholds of adult Atlantic wolffish (Anarhichas lupus) and adult Atlantic cod (Gadus morhua) are assessed in the context of a potential future scenario derived from high-resolution ocean models for the central Scotian Shelf. PMID:27997536

  20. Putting Temperature and Oxygen Thresholds of Marine Animals in Context of Environmental Change: A Regional Perspective for the Scotian Shelf and Gulf of St. Lawrence.

    PubMed

    Brennan, Catherine E; Blanchard, Hannah; Fennel, Katja

    2016-01-01

    We conducted a literature review of reported temperature, salinity, pH, depth and oxygen preferences and thresholds of important marine species found in the Gulf of St. Lawrence and Scotian Shelf region. We classified 54 identified fishes and macroinvertebrates as important either because they support a commercial fishery, have threatened or at risk status, or meet one of the following criteria: bycatch, baitfish, invasive, vagrant, important for ecosystem energy transfer, or predators or prey of the above species. The compiled data allow an assessment of species-level impacts including physiological stress and mortality given predictions of future ocean physical and biogeochemical conditions. If an observed, multi-decadal oxygen trend on the central Scotian Shelf continues, a number of species will lose favorable oxygen conditions, experience oxygen-stress, or disappear due to insufficient oxygen in the coming half-century. Projected regional trends and natural variability are both large, and natural variability will act to alternately amplify and dampen anthropogenic changes. When estimates of variability are included with the trend, species encounter unfavourable oxygen conditions decades sooner. Finally, temperature and oxygen thresholds of adult Atlantic wolffish (Anarhichas lupus) and adult Atlantic cod (Gadus morhua) are assessed in the context of a potential future scenario derived from high-resolution ocean models for the central Scotian Shelf.

  1. Probabilistic forecasting for extreme NO2 pollution episodes.

    PubMed

    Aznarte, José L

    2017-10-01

    In this study, we investigate the convenience of quantile regression to predict extreme concentrations of NO2. In contrast to the usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows for the prediction of the full probability distribution, which in turn allows building models specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we prove that the predictions are accurate, reliable and sharp. In addition, we study the relative importance of the independent variables involved, and show how the important variables for the median quantile are different from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible manner to present probabilistic forecasts maximizing their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
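
    Once a set of conditional quantiles has been predicted, the probability of exceeding a regulatory threshold can be read off (or interpolated) from the predicted quantile function. The sketch below shows only that last step; the quantile levels and values are made-up forecasts, not output of the Madrid models.

      # Exceedance probability from predicted quantiles (illustrative values, not model output).
      import numpy as np

      quantile_levels = np.array([0.05, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99])
      predicted_no2 = np.array([18.0, 35.0, 52.0, 74.0, 98.0, 120.0, 165.0])   # ug/m3

      def prob_exceedance(threshold, levels, values):
          # Estimate F(threshold) by linear interpolation of the predicted quantile function.
          cdf_at_threshold = np.interp(threshold, values, levels, left=0.0, right=1.0)
          return 1.0 - cdf_at_threshold

      print(round(prob_exceedance(100.0, quantile_levels, predicted_no2), 3))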

  2. The limits of applicability of the sound exposure level (SEL) metric to temporal threshold shifts (TTS) in beluga whales, Delphinapterus leucas.

    PubMed

    Popov, Vladimir V; Supin, Alexander Ya; Rozhnov, Viatcheslav V; Nechaev, Dmitry I; Sysueva, Evgenia V

    2014-05-15

    The influence of fatiguing sound level and duration on post-exposure temporary threshold shift (TTS) was investigated in two beluga whales (Delphinapterus leucas). The fatiguing sound was half-octave noise with a center frequency of 22.5 kHz. TTS was measured at a test frequency of 32 kHz. Thresholds were measured by recording rhythmic evoked potentials (the envelope following response) to a test series of short (eight cycles) tone pips with a pip rate of 1000 s(-1). TTS increased approximately proportionally to the dB measure of both sound pressure (sound pressure level, SPL) and duration of the fatiguing noise, as a product of these two variables. In particular, when the noise parameters varied in a manner that maintained the product of squared sound pressure and time (sound exposure level, SEL, which is equivalent to the overall noise energy) at a constant level, TTS was not constant. Keeping SEL constant, the highest TTS appeared at an intermediate ratio of SPL to sound duration and decreased at both higher and lower ratios. Multiplication (SPL multiplied by log duration) better described the experimental data than an equal-energy (equal SEL) model. The use of SEL as a sole universal metric may result in an implausible assessment of the impact of a fatiguing sound on hearing thresholds in odontocetes, including under-evaluation of potential risks. © 2014. Published by The Company of Biologists Ltd.
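
    The equal-energy assumption questioned above treats exposures with the same SEL as equivalent, where SEL combines level and duration as SEL = SPL + 10·log10(T), with T in seconds. A one-line worked example of that bookkeeping (generic acoustics, not the beluga data):

      # SEL bookkeeping: two exposures with different SPL and duration but identical SEL.
      import math

      def sel(spl_db, duration_s):
          return spl_db + 10 * math.log10(duration_s)

      print(sel(160, 10))    # 170 dB: 160 dB SPL for 10 s
      print(sel(150, 100))   # 170 dB: 10 dB quieter but 10 times longer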

  3. Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Nottorf, Eric W.

    1989-01-01

    A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E_x and Poisson's ratio ν_xy of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement with experimental results obtained for graphite/epoxy laminates.

  4. Noise interference with echo delay discrimination in bat biosonar.

    PubMed

    Simmons, J A

    2017-11-01

    Echolocating big brown bats (Eptesicus fuscus) were trained in a two-choice task to discriminate differences in the delay of electronic echoes at 1.7 ms delay (30 cm simulated range). Difference thresholds (∼45 μs) were comparable to previously published results. At selected above-threshold differences (116 and 232 μs delay), performance was measured in the presence of wideband random noise at increasing amplitudes in 10-dB steps to determine the noise level that prevented discrimination. Performance eventually failed, but the bats increased the amplitude and duration of their broadcasts to compensate for increasing noise, which allowed performance to persist at noise levels about 25 dB higher than without compensation. In the 232-μs delay discrimination condition, echo signal-to-noise ratio (2E/N0) was 8-10 dB at the noise level that depressed performance to chance. Predicted echo-delay accuracy using big brown bat signals follows the Cramér-Rao bound for signal-to-noise ratios above 15 dB, but worsens below 15 dB due to side-peak ambiguity. At 2E/N0 = 7-10 dB, predicted Cramér-Rao delay accuracy would be about 1 μs; considering side-peak ambiguity it would be about 200-300 μs. The bats' 232 μs performance reflects the intrusion of side-peak ambiguity into delay accuracy at low signal-to-noise ratios.
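
    The Cramér-Rao bound referred to above ties achievable delay accuracy to echo energy and signal bandwidth, roughly σ_τ ≈ 1/(2π·β·√(2E/N0)) for an rms bandwidth β. A back-of-envelope sketch of that scaling; the 25 kHz bandwidth is an assumed illustrative value, not a figure taken from the paper.

      # Cramer-Rao delay-accuracy sketch; beta is an assumed rms bandwidth, not a measured value.
      import math

      def crb_delay_std(snr_db, rms_bandwidth_hz):
          snr_linear = 10 ** (snr_db / 10)                 # interpreted here as 2E/N0
          return 1.0 / (2 * math.pi * rms_bandwidth_hz * math.sqrt(snr_linear))

      for snr in (10, 15, 36):
          print(snr, round(crb_delay_std(snr, 25e3) * 1e6, 2), "us")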

  5. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.

  6. Limits and possibilities in the geolocation of humans using multiple isotope ratios (H, O, N, C) of hair from east coast cities of the USA.

    PubMed

    Reynard, Linda M; Burt, Nicole; Koon, Hannah E C; Tuross, Noreen

    2016-01-01

    We examined multiple natural abundance isotope ratios of human hair to assess biological variability within and between geographic locations and, further, to determine how well these isotope values predict location of origin. Sampling locations feature differing seasonality and mobile populations as a robust test of the method. Serially-sampled hair from Cambridge, MA, USA, shows lower δ(2)H and δ(18)O variability over a one-year time course than model-predicted precipitation isotope ratios, but exhibits considerable differences between individuals. Along a ∼13° north-south transect in the eastern USA (Brookline, MA, 42.3° N, College Park, MD, 39.0° N, and Gainesville, FL, 29.7° N) δ(18)O in human hair shows relatively greater differences and tracks changes in drinking water isotope ratios more sensitively than δ(2)H. Determining the domicile of humans using isotope ratios of hair can be confounded by differing variability in hair δ(18)O and δ(2)H between locations, differential incorporation of H and O into this protein and, in some cases, by tap water δ(18)O and δ(2)H that differ significantly from predicted precipitation values. With these caveats, randomly chosen people in Florida are separated from those in the two more northerly sites on the basis of the natural abundance isotopes of carbon, nitrogen, hydrogen, and oxygen.
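
    For context on what a comparative qPCR expression ratio is, the widely used efficiency-corrected form computes Ratio = E_target^ΔCt(target) / E_ref^ΔCt(ref), where ΔCt is the cycle-threshold difference between control and treated samples. The sketch below shows that standard calculation, not the Q-Anal algorithm itself; the efficiencies and Ct differences are invented.

      # Standard efficiency-corrected relative quantification (not the Q-Anal algorithm; invented inputs).
      def expression_ratio(e_target, dct_target, e_ref, dct_ref):
          # dct values are Ct(control) - Ct(treated); an efficiency of 2.0 means perfect doubling per cycle
          return (e_target ** dct_target) / (e_ref ** dct_ref)

      print(round(expression_ratio(e_target=1.95, dct_target=3.2,
                                   e_ref=2.0, dct_ref=0.4), 2))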

  7. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
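
    Under the binormal, equal-variance assumption described above, the c-statistic has the closed form c = Φ(σβ/√2), where β is the log-odds ratio per unit of the covariate and σ its within-group standard deviation (equivalently Φ(δ/√2) for the standardized mean difference δ = σβ). A minimal check of that expression against a simulated AUC; the chosen σ and β are arbitrary illustration values.

      # Compare the analytic binormal c-statistic with a simulated AUC (illustrative parameters).
      import numpy as np
      from scipy.stats import norm

      sigma, beta = 1.5, 0.8                    # within-group SD of the covariate, log-odds ratio per unit
      delta = sigma * beta                      # implied standardized mean difference between groups
      analytic_c = norm.cdf(delta / np.sqrt(2))

      rng = np.random.default_rng(42)
      x0 = rng.normal(0.0, 1.0, 2000)           # standardized covariate, without the condition
      x1 = rng.normal(delta, 1.0, 2000)         # standardized covariate, with the condition
      empirical_c = (x1[:, None] > x0[None, :]).mean()   # all-pairs AUC estimate

      print(round(float(analytic_c), 3), round(float(empirical_c), 3))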
PMID:22716998</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23232736','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23232736"><span>Hemoglobin levels above anemia thresholds are maximally predictive for long-term survival in COPD with chronic respiratory failure.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kollert, Florian; Tippelt, Andrea; Müller, Carolin; Jörres, Rudolf A; Porzelius, Christine; Pfeifer, Michael; Budweiser, Stephan</p> <p>2013-07-01</p> <p>In patients with COPD, chronic anemia is known as an unfavorable prognostic factor. Whether the association between hemoglobin (Hb) levels and long-term survival is restricted to anemia or extends to higher Hb levels has not yet been systematically assessed. We determined Hb levels in 309 subjects with COPD and chronic respiratory failure prior to initiation of noninvasive ventilation, accounting for confounders that might affect Hb. Subjects were categorized as anemic (Hb < 12 g/dL in females, Hb < 13 g/dL in males), polycythemic (Hb ≥ 15 g/dL in females, Hb ≥ 17 g/dL in males), or normocythemic. In addition, percentiles of Hb values were analyzed with regard to mortality from any cause. Two-hundred seven subjects (67.0%) showed normal Hb levels, 46 (14.9%) had anemia, and 56 (18.1%) had polycythemia. Polycythemic subjects showed a higher survival rate than anemic (P = .01) and normocythemic subjects (P = .043). In a univariate Cox hazards model, Hb was associated with long-term survival (hazard ratio 0.855; 95% CI 0.783-0.934, P < .001). The 58th percentiles of Hb (14.3 g/dL in females, 15.1 g/dL in males) yielded the highest discriminative value for predicting survival (hazard ratio 0.463, 95% CI 0.324-0.660, P < .001). In the multivariate analysis this cutoff was an independent predictor for survival (hazard ratio 0.627, 95% CI 0.414-0.949, P = .03), in addition to age and body mass index. In subjects with COPD and chronic respiratory failure undergoing treatment with noninvasive ventilation and LTOT, high Hb levels are associated with better long-term survival. The optimal cutoff level for prediction was above the established threshold defining anemia. Thus, predicting survival only on the basis of anemia does not fully utilize the prognostic potential of Hb values in COPD.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170002791&hterms=sea&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dsea','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170002791&hterms=sea&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dsea"><span>How Much Global Burned Area Can Be Forecast on Seasonal Time Scales Using Sea Surface Temperatures?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chen, Yang; Morton, Douglas C.; Andela, Niels; Giglio, Louis; Randerson, James T.</p> <p>2016-01-01</p> <p>Large-scale sea surface temperature (SST) patterns influence the interannual variability of burned area in many regions by means of climate controls on fuel continuity, amount, and moisture content. 
Some of the variability in burned area is predictable on seasonal timescales because fuel characteristics respond to the cumulative effects of climate prior to the onset of the fire season. Here we systematically evaluated the degree to which annual burned area from the Global Fire Emissions Database version 4 with small fires (GFED4s) can be predicted using SSTs from 14 different ocean regions. We found that about 48% of global burned area can be forecast with a correlation coefficient that is significant at a p < 0.01 level using a single ocean climate index (OCI) 3 or more months prior to the month of peak burning. Continental regions where burned area had a higher degree of predictability included equatorial Asia, where 92% of the burned area exceeded the correlation threshold, and Central America, where 86% of the burned area exceeded this threshold. Pacific Ocean indices describing the El Niño-Southern Oscillation were more important than indices from other ocean basins, accounting for about 1/3 of the total predictable global burned area. A model that combined two indices from different oceans considerably improved model performance, suggesting that fires in many regions respond to forcing from more than one ocean basin. Using OCI-burned area relationships and a clustering algorithm, we identified 12 hotspot regions in which fires had a consistent response to SST patterns. Annual burned area in these regions can be predicted with moderate confidence levels, suggesting operational forecasts may be possible with the aim of improving ecosystem management.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MNRAS.476.1224A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MNRAS.476.1224A"><span>Starspot variability as an X-ray radiation proxy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arkhypov, Oleksiy V.; Khodachenko, Maxim L.; Lammer, Helmut; Güdel, Manuel; Lüftinger, Teresa; Johnstone, Colin P.</p> <p>2018-05-01</p> <p>Stellar X-ray emission plays an important role in the study of exoplanets as a proxy for stellar winds and as a basis for the prediction of extreme ultraviolet (EUV) flux, which is unavailable for direct measurement; both are important factors for the mass loss of planetary atmospheres. Unfortunately, detection thresholds limit the number of stars with directly measured X-ray fluxes. At the same time, the known connection between sunspots and X-ray sources allows starspot variability to be used as an accessible proxy for stellar X-ray emission. To realize this approach, we analysed the light curves of 1729 main-sequence stars with rotation periods 0.5 < P < 30 d and effective temperatures 3236 < Teff < 7166 K observed by the Kepler mission. It was found that the squared amplitude of the first rotational harmonic of a stellar light curve may be used as a kind of activity index. This averaged index revealed practically the same relation with the Rossby number as that in the case of the X-ray to bolometric luminosity ratio Rx. As a result, the regressions for stellar X-ray luminosity Lx(P, Teff) and its related EUV analogue LEUV were obtained for the main-sequence stars.
It was shown that these regressions allow prediction of average (over the considered stars) values of log (Lx) and log (LEUV) with typical errors of 0.26 and 0.22 dex, respectively. This, however, does not include the activity variations in particular stars related to their individual magnetic activity cycles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25024746','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25024746"><span>Brachial-to-ankle pulse wave velocity as an independent prognostic factor for ovulatory response to clomiphene citrate in women with polycystic ovary syndrome.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Takahashi, Toshifumi; Igarashi, Hideki; Hara, Shuichiro; Amita, Mitsuyoshi; Matsuo, Koki; Hasegawa, Ayumi; Kurachi, Hirohisa</p> <p>2014-01-01</p> <p>Polycystic ovary syndrome (PCOS) has a risk for cardiovascular disease. Increased arterial stiffness has been observed in women with PCOS. The purpose of the present study was to investigate whether the brachial-to-ankle pulse wave velocity (baPWV) is a prognostic factor for ovulatory response to clomiphene citrate (CC) in women with PCOS. This study was a retrospective cohort study of 62 women with PCOS conducted from January 2009 to December 2012 at the university hospital, Yamagata, Japan. We analyzed 62 infertile PCOS patients who received CC. Ovulation was induced by 100 mg CC for 5 days. CC non-responder was defined as failure to ovulate for at least 2 consecutive CC-treatment cycles. The endocrine, metabolic, and cardiovascular parameters between CC responder (38 patients) and non-responder (24 patients) groups were analyzed. In univariate analysis, waist-to-hip ratio, level of free testosterone, percentages of patients with dyslipidemia, impaired glucose tolerance, and diabetes mellitus, blood glucose and insulin levels at 60 min and 120 min, the area under the curve of glucose and insulin after 75-g oral glucose intolerance test, and baPWV were significantly higher in CC non-responders compared with responders. In multivariate logistic regression analysis, both waist-to-hip ratio (odds ratio, 1.77; 95% confidence interval, 2.2-14.1; P=0.04) and baPWV (odds ratio, 1.71; 95% confidence interval, 1.1-2.8; P=0.03) were independent predictors of ovulation induction by CC in PCOS patients. The predictive values of waist-to-hip ratio and baPWV for the CC resistance in PCOS patients were determined by the receiver operating characteristic curves. The area under the curves for waist-to-hip ratio and baPWV were 0.76 and 0.77, respectively. Setting the threshold at 0.83 for waist-to-hip ratio offered the best compromise between specificity (0.65) and sensitivity (0.84), while the setting the threshold at 1,182 cm/s for baPWV offered the best compromise between specificity (0.80) and sensitivity (0.71). Both metabolic and cardiovascular parameters were predictive for CC resistance in PCOS patients. 
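A cutoff offering the "best compromise between specificity and sensitivity", as described above, is commonly located by maximizing Youden's J (sensitivity + specificity − 1) along the ROC curve. The sketch below is a generic illustration of that procedure; the data and variable names are hypothetical, and the paper does not state which optimality criterion it used.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(marker, outcome):
    """Return the cutoff maximizing Youden's J, with its sensitivity and specificity.
    `marker` is a continuous predictor (e.g., baPWV); `outcome` is 1 for CC non-response."""
    fpr, tpr, thresholds = roc_curve(outcome, marker)
    j = tpr - fpr                      # Youden's J at each candidate threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best]

# Hypothetical baPWV values (cm/s) for 10 responders followed by 10 non-responders.
bapwv = np.array([950, 1010, 1060, 1100, 1120, 1150, 1170, 1200, 1240, 1300,
                  1080, 1160, 1190, 1210, 1250, 1280, 1310, 1350, 1400, 1450], float)
nonresponder = np.array([0] * 10 + [1] * 10)
print(optimal_cutoff(bapwv, nonresponder))
```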
The measurement of baPWV may be a useful tool to predict ovulation in PCOS patients who receive CC.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..546..526W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..546..526W"><span>Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Zong; Shi, Wenjiao</p> <p>2017-03-01</p> <p>Soil particle-size fractions (psf) as basic physical variables need to be accurately predicted for regional hydrological, ecological, geological, agricultural and environmental studies frequently. Some methods had been proposed to interpolate the spatial distributions of soil psf, but the performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK) were selected to be compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy for different interpolators. The results showed that CK had a better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45), STRESS (0.60) of CK were the lowest and the RR (58.65%) was the highest in the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented reasonable and smooth transition on mapping soil psf according to the environmental factors. The study gives insights for mapping soil psf accurately by comparing different methods for compositional data interpolation. Further researches of methods combined with ancillary variables are needed to be implemented to improve the interpolation performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26648047','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26648047"><span>Towards a clinically informed, data-driven definition of elderly onset epilepsy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Josephson, Colin B; Engbers, Jordan D T; Sajobi, Tolulope T; Jette, Nathalie; Agha-Khani, Yahya; Federico, Paolo; Murphy, William; Pillay, Neelan; Wiebe, Samuel</p> <p>2016-02-01</p> <p>Elderly onset epilepsy represents a distinct subpopulation that has received considerable attention due to the unique features of the disease in this age group. Research into this particular patient group has been limited by a lack of a standardized definition and understanding of the attributes associated with elderly onset epilepsy. We used a prospective cohort database to examine differences in patients stratified according to age of onset. 
Linear support vector machine learning incorporating all significant variables was used to predict age of onset according to prespecified thresholds. Sensitivity and specificity were calculated and plotted in receiver-operating characteristic (ROC) space. Feature coefficients achieving an absolute value of 0.25 or greater were graphed by age of onset to define how they vary with time. We identified 2,449 patients, of whom 149 (6%) had an age of seizure onset of 65 or older. Fourteen clinical variables had an absolute predictive value of at least 0.25 at some point over the age of epilepsy-onset spectrum. Area under the curve in ROC space was maximized between ages of onset of 65 and 70. Features identified through machine learning were frequently threshold specific and were similar, but not identical, to those revealed through simple univariable and multivariable comparisons. This study provides an empirical, clinically informed definition of "elderly onset epilepsy." If validated, an age threshold of 65-70 years can be used for future studies of elderly onset epilepsy and permits targeted interventions according to the patient's age of onset. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26110991','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26110991"><span>The prehospital intravenous access assessment: a prospective study on intravenous access failure and access delay in prehospital emergency medicine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim</p> <p>2016-12-01</p> <p>Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and resulting delay in treatment and transport may have negative effects on the patient. Therefore, our study aims to perform a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables on peripheral venous cannulations in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under the curve (AUC>0.6) of their respective receiver operating characteristic curve. A total of 762 intravenous cannulations were enroled. In all, 22% of punctures failed on the first attempt and 13% of punctures exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent model of four factors (failure model factors plus vein visibility with tourniquet) for the exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. 
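As a rough illustration of the modelling step described above (screen candidate predictors, fit a multivariable logistic model, and judge it by the area under its ROC curve), the sketch below uses simulated binary predictors named after the three retained factors. It is not the authors' code or data; the coefficients used to simulate outcomes are arbitrary.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 762
# Hypothetical binary predictors mirroring the retained factors.
X = pd.DataFrame({
    "vein_visible_no_tourniquet": rng.integers(0, 2, n),
    "vein_palpable_with_tourniquet": rng.integers(0, 2, n),
    "insufficient_lighting": rng.integers(0, 2, n),
})
# Simulate cannulation failure with an arbitrary dependence on the predictors.
logit = -1.0 - 0.8 * X.iloc[:, 0] - 0.9 * X.iloc[:, 1] + 0.7 * X.iloc[:, 2]
failure = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, failure)
auc = roc_auc_score(failure, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.2f}")   # compare against the AUC > 0.6 acceptability criterion
```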
Of the numerous factors subjectively perceived as possibly exerting influences on cannulation, only the universal - not exclusive to emergency care - factors of lighting, vein visibility and palpability proved to be valid predictors of cannulation failure and exceedance of a 2 min threshold.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26743264','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26743264"><span>New non-invasive method for early detection of metabolic syndrome in the working population.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Romero-Saldaña, Manuel; Fuentes-Jiménez, Francisco J; Vaquero-Abellán, Manuel; Álvarez-Fernández, Carlos; Molina-Recio, Guillermo; López-Miranda, José</p> <p>2016-12-01</p> <p>We propose a new method for the early detection of metabolic syndrome in the working population, which was free of biomarkers (non-invasive) and based on anthropometric variables, and to validate it in a new working population. Prevalence studies and diagnostic test accuracy to determine the anthropometric variables associated with metabolic syndrome, as well as the screening validity of the new method proposed, were carried out between 2013 and 2015 on 636 and 550 workers, respectively. The anthropometric variables analysed were: blood pressure, body mass index, waist circumference, waist-height ratio, body fat percentage and waist-hip ratio. We performed a multivariate logistic regression analysis and obtained receiver operating curves to determine the predictive ability of the variables. The new method for the early detection of metabolic syndrome we present is based on a decision tree using chi-squared automatic interaction detection methodology. The overall prevalence of metabolic syndrome was 14.9%. The area under the curve for waist-height ratio and waist circumference was 0.91 and 0.90, respectively. The anthropometric variables associated with metabolic syndrome in the adjusted model were waist-height ratio, body mass index, blood pressure and body fat percentage. The decision tree was configured from the waist-height ratio (⩾0.55) and hypertension (blood pressure ⩾128/85 mmHg), with a sensitivity of 91.6% and a specificity of 95.7% obtained. The early detection of metabolic syndrome in a healthy population is possible through non-invasive methods, based on anthropometric indicators such as waist-height ratio and blood pressure. This method has a high degree of predictive validity and its use can be recommended in any healthcare context. 
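The final decision rule reported above uses only two non-invasive measurements. A minimal sketch of such a screen follows; the thresholds (waist-to-height ratio ≥ 0.55, blood pressure ≥ 128/85 mmHg) are taken from the abstract, but the simple AND-combination is an assumption, since the exact branching of the published CHAID tree is not reproduced here.

```python
def metabolic_syndrome_screen(waist_cm: float, height_cm: float,
                              systolic_mmhg: float, diastolic_mmhg: float) -> bool:
    """Flag a worker as screen-positive for metabolic syndrome using the two
    non-invasive indicators reported above (illustrative rule, not the published tree)."""
    whtr = waist_cm / height_cm
    hypertensive = systolic_mmhg >= 128 or diastolic_mmhg >= 85
    return whtr >= 0.55 and hypertensive

print(metabolic_syndrome_screen(100, 170, 130, 80))  # WHtR 0.59 and SBP 130 -> True
```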
© The European Society of Cardiology 2016.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24771618','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24771618"><span>Evaluation of early weight loss thresholds for identifying nonresponders to an intensive lifestyle intervention.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Unick, Jessica L; Hogan, Patricia E; Neiberg, Rebecca H; Cheskin, Lawrence J; Dutton, Gareth R; Evans-Hudnall, Gina; Jeffery, Robert; Kitabchi, Abbas E; Nelson, Julie A; Pi-Sunyer, F Xavier; West, Delia Smith; Wing, Rena R</p> <p>2014-07-01</p> <p>Weight losses in lifestyle interventions are variable, yet prediction of long-term success is difficult. The utility of using various weight loss thresholds in the first 2 months of treatment for predicting 1-year outcomes was examined. Participants included 2327 adults with type 2 diabetes (BMI:35.8 ± 6.0) randomized to the intensive lifestyle intervention (ILI) of the Look AHEAD trial. ILI included weekly behavioral sessions designed to increase physical activity and reduce caloric intake. 1-month, 2-month, and 1-year weight changes were calculated. Participants failing to achieve a ≥2% weight loss at Month 1 were 5.6 (95% CI:4.5, 7.0) times more likely to also not achieve a ≥10% weight loss at Year 1, compared to those losing ≥2% initially. These odds were increased to 11.6 (95% CI:8.6, 15.6) when using a 3% weight loss threshold at Month 2. Only 15.2% and 8.2% of individuals failing to achieve the ≥2% and ≥3% thresholds at Months 1 and 2, respectively, go on to achieve a ≥10% weight loss at Year 1. Given the association between initial and 1-year weight loss, the first few months of treatment may be an opportune time to identify those who are unsuccessful and utilize rescue efforts. clinicaltrials.gov Identifier: NCT00017953. © 2014 The Obesity Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017WRR....53.2264L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017WRR....53.2264L"><span>A probabilistic approach to quantifying hydrologic thresholds regulating migration of adult Atlantic salmon into spawning streams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lazzaro, G.; Soulsby, C.; Tetzlaff, D.; Botter, G.</p> <p>2017-03-01</p> <p>Atlantic salmon is an economically and ecologically important fish species, whose survival is dependent on successful spawning in headwater rivers. Streamflow dynamics often have a strong control on spawning because fish require sufficiently high discharges to move upriver and enter spawning streams. However, these streamflow effects are modulated by biological factors such as the number and the timing of returning fish in relation to the annual spawning window in the fall/winter. In this paper, we develop and apply a novel probabilistic approach to quantify these interactions using a parsimonious outflux-influx model linking the number of female salmon emigrating (i.e., outflux) and returning (i.e., influx) to a spawning stream in Scotland. 
The model explicitly accounts for the interannual variability of the hydrologic regime and the hydrological connectivity of spawning streams to main rivers. Model results are evaluated against a detailed long-term (40 years) hydroecological data set that includes annual fluxes of salmon, allowing us to explicitly assess the role of discharge variability. The satisfactory model results show quantitatively that hydrologic variability contributes to the observed dynamics of salmon returns, with a good correlation between the positive (negative) peaks in the immigration data set and the exceedance (nonexceedance) probability of a threshold flow (0.3 m3/s). Importantly, model performance deteriorates when the interannual variability of flow regime is disregarded. The analysis suggests that flow thresholds and hydrological connectivity for spawning return represent a quantifiable and predictable feature of salmon rivers, which may be helpful in decision making where flow regimes are altered by water abstractions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21531085','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21531085"><span>What is the best way to contour lung tumors on PET scans? Multiobserver validation of a gradient-based method using a NSCLC digital PET phantom.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Werner-Wasik, Maria; Nelson, Arden D; Choi, Walter; Arai, Yoshio; Faulhaber, Peter F; Kang, Patrick; Almeida, Fabio D; Xiao, Ying; Ohri, Nitin; Brockway, Kristin D; Piper, Jonathan W; Nelson, Aaron S</p> <p>2012-03-01</p> <p>To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10-37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7-264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. For spheres <20 mm in diameter, GRADIENT was the most accurate with a mean absolute % error in diameter of 8.15% (10.2% SD) compared with 49.2% (51.1% SD) for 45% THRESHOLD (p < 0.005). For larger spheres, the methods were statistically equivalent. For varying source-to-background ratios, GRADIENT was the most accurate for spheres >20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of -0.05% (16.2% SD) compared with 25% THRESHOLD at -2.1% (34.2% SD) and MANUAL at -16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene's test). GRADIENT was the most accurate and consistent technique for target volume contouring. 
GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in radiation therapy planning and response assessment. Copyright © 2012. Published by Elsevier Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26892131','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26892131"><span>Comparison between European and Iranian cutoff points of triglyceride/high-density lipoprotein cholesterol concentrations in predicting cardiovascular disease outcomes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gharipour, Mojgan; Sadeghi, Masoumeh; Dianatkhah, Minoo; Nezafati, Pouya; Talaie, Mohammad; Oveisgharan, Shahram; Golshahi, Jafar</p> <p>2016-01-01</p> <p>High triglyceride (TG) and low high-density lipoprotein cholesterol (HDL-C) are important cardiovascular risk factors. The exact prognostic value of the TG/HDL-C ratio, a marker for cardiovascular events, is currently unknown among Iranians so this study sought to determine the optimal cutoff point for the TG/HDL-C ratio in predicting cardiovascular disease events in the Iranian population. The Isfahan Cohort Study (ICS) is an ongoing, longitudinal, population-based study that was originally conducted on adults aged ≥ 35 years, living in urban and rural areas of three districts in central Iran. After 10 years of follow-up, 5431 participants were re-evaluated using a standard protocol similar to the one used for baseline. At both measurements, participants underwent medical interviews, physical examinations, and fasting blood measurements. "High-risk" subjects were defined by the discrimination power of indices, which were assessed using receiver operating characteristic (ROC) analysis; the optimal cutoff point value for each index was then derived. The mean age of the participants was 50.7 ± 11.6 years. The TG/HDL-C ratio, at a threshold of 3.68, was used to screen for cardiovascular events among the study population. Subjects were divided into two groups ("low" and "high" risk) according to the TG/HDL-C concentration ratio at baseline. A slightly higher number of high-risk individuals were identified using the European cutoff points of 63.7% in comparison with the ICS cutoff points of 49.5%. The unadjusted hazard ratio (HR) was greatest in high-risk individuals identified by the ICS cutoff points (HR = 1.54, 95% CI [1.33-1.79]) vs European cutoff points (HR = 1.38, 95% [1.17-1.63]). There were no remarkable changes after adjusting for differences in sex and age (HR = 1.58, 95% CI [1.36-1.84] vs HR = 1.44, 95% CI [1.22-1.71]) for the ICS and European cutoff points, respectively. The threshold of TG/HDL ≥ 3.68 is the optimal cutoff point for predicting cardiovascular events in Iranian individuals. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. 
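Applying the cutoff reported above is a one-line computation; the sketch below flags "high risk" at the ICS-derived TG/HDL-C threshold of 3.68. The only requirement assumed here is that both lipids are expressed in the same units (e.g., mg/dL), since the ratio is then dimensionless.

```python
def tg_hdl_risk(tg: float, hdl_c: float, cutoff: float = 3.68) -> str:
    """Classify cardiovascular risk from the TG/HDL-C ratio using the ICS cutoff above.
    TG and HDL-C must be in the same units (the ratio itself is unit-free)."""
    return "high risk" if tg / hdl_c >= cutoff else "low risk"

print(tg_hdl_risk(180, 45))  # ratio 4.0 -> 'high risk'
```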
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012IJBm...56..811J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012IJBm...56..811J"><span>Reliability of the method of levels for determining cutaneous temperature sensitivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jakovljević, Miroljub; Mekjavić, Igor B.</p> <p>2012-09-01</p> <p>Determination of the thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate reliability of the method of levels performed with a new, low cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. 
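The reliability statistics quoted above are linked by standard formulas: SEM = SD × √(1 − ICC) and MDC95 = 1.96 × √2 × SEM, with the coefficient of repeatability usually taken as 1.96 × the SD of the between-session differences. The sketch below applies these conventional formulas to hypothetical paired threshold measurements; the numbers are invented and the paper's exact computation may differ.

```python
import numpy as np

def reliability_stats(session1, session2, icc):
    """Conventional test-retest summaries: SEM, MDC95 and coefficient of repeatability (CR)."""
    pooled_sd = np.std(np.concatenate([session1, session2]), ddof=1)
    sem = pooled_sd * np.sqrt(1 - icc)        # SEM = SD * sqrt(1 - ICC)
    mdc95 = 1.96 * np.sqrt(2) * sem           # minimally detectable change at 95% confidence
    cr = 1.96 * np.std(session1 - session2, ddof=1)
    return {"SEM": round(sem, 2), "MDC95": round(mdc95, 2), "CR": round(cr, 2)}

# Hypothetical warm-threshold shifts (deg C) measured in two sessions on eight subjects.
s1 = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.0, 1.4])
s2 = np.array([1.1, 1.0, 1.4, 1.2, 1.2, 0.9, 1.1, 1.3])
print(reliability_stats(s1, s2, icc=0.7))
```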
The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28328674','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28328674"><span>The value of the injury severity score in pediatric trauma: Time for a new definition of severe injury?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brown, Joshua B; Gestring, Mark L; Leeper, Christine M; Sperry, Jason L; Peitzman, Andrew B; Billiar, Timothy R; Gaines, Barbara A</p> <p>2017-06-01</p> <p>The Injury Severity Score (ISS) is the most commonly used injury scoring system in trauma research and benchmarking. An ISS greater than 15 conventionally defines severe injury; however, no studies evaluate whether ISS performs similarly between adults and children. Our objective was to evaluate ISS and Abbreviated Injury Scale (AIS) to predict mortality and define optimal thresholds of severe injury in pediatric trauma. Patients from the Pennsylvania trauma registry 2000-2013 were included. Children were defined as younger than 16 years. Logistic regression predicted mortality from ISS for children and adults. The optimal ISS cutoff for mortality that maximized diagnostic characteristics was determined in children. Regression also evaluated the association between mortality and maximum AIS in each body region, controlling for age, mechanism, and nonaccidental trauma. Analysis was performed in single and multisystem injuries. Sensitivity analyses with alternative outcomes were performed. Included were 352,127 adults and 50,579 children. Children had similar predicted mortality at ISS of 25 as adults at ISS of 15 (5%). The optimal ISS cutoff in children was ISS greater than 25 and had a positive predictive value of 19% and negative predictive value of 99% compared to a positive predictive value of 7% and negative predictive value of 99% for ISS greater than 15 to predict mortality.
In single-system-injured children, mortality was associated with head (odds ratio, 4.80; 95% confidence interval, 2.61-8.84; p < 0.01) and chest AIS (odds ratio, 3.55; 95% confidence interval, 1.81-6.97; p < 0.01), but not abdomen, face, neck, spine, or extremity AIS (p > 0.05). For multisystem injury, all body region AIS scores were associated with mortality except extremities. Sensitivity analysis demonstrated ISS greater than 23 to predict need for full trauma activation, and ISS greater than 26 to predict impaired functional independence were optimal thresholds. An ISS greater than 25 may be a more appropriate definition of severe injury in children. Pattern of injury is important, as only head and chest injury drive mortality in single-system-injured children. These findings should be considered in benchmarking and performance improvement efforts. Epidemiologic study, level III.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22308659-robust-regression-noisy-data-fusion-scaling-laws','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22308659-robust-regression-noisy-data-fusion-scaling-laws"><span>Robust regression on noisy data for fusion scaling laws</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS</p> <p>2014-11-15</p> <p>We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25731192','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25731192"><span>Do ictal EEG characteristics predict treatment outcomes in schizophrenic patients undergoing electroconvulsive therapy?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Simsek, Gulnihal Gokce; Zincir, Selma; Gulec, Huseyin; Eksioglu, Sevgin; Semiz, Umit Basar; Kurtulmus, Yasemin Sipka</p> <p>2015-08-01</p> <p>The aim of this study is to investigate the relationship between features of electroencephalography (EEG), including seizure time, energy threshold level and post-ictal suppression time, and clinical variables, including treatment outcomes and side-effects, among schizophrenia inpatients undergoing electroconvulsive therapy (ECT). This is a naturalistic follow-up study on schizophrenia patients, diagnosed using DSM-IV-TR criteria, treated by a psychosis inpatient service. All participants completed the Brief Psychiatric Rating Scale (BPRS), the Global Assessment of Functioning (GAF) scale, the Frontal Assessment Battery (FAB) and a Data Collection Form. Assessments were made before treatment, during ECT and after treatment. 
Statistically significant improvements in both clinical and cognitive outcome were noted after ECT in all patients. Predictors of improvement were sought by evaluating electrophysiological variables measured at three time points (after the third, fifth and seventh ECT sessions). Logistic regression analysis showed that clinical outcome/improvement did not differ by seizure duration, threshold energy level or post-ictal suppression time. We found that ictal EEG parameters measured at several ECT sessions did not predict clinical recovery/outcomes. This may be because our centre defensively engages in "very specific patient selection" when ECT is contemplated. ECT does not cause short-term cognitive functional impairment and indeed improves cognition, because symptoms of the schizophrenic episode are alleviated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27043918','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27043918"><span>Superior orientation discrimination and increased peak gamma frequency in autism spectrum conditions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dickinson, Abigail; Bruyns-Haylett, Michael; Smith, Richard; Jones, Myles; Milne, Elizabeth</p> <p>2016-04-01</p> <p>While perception is recognized as being atypical in individuals with autism spectrum conditions (ASC), the underlying mechanisms for such atypicality are unclear. Here we test the hypothesis that individuals with ASC will show enhanced orientation discrimination compared with neurotypical observers. This prediction is based both on anecdotal report of superior discriminatory skills in ASC and also on evidence in the auditory domain that some individuals with ASC have superior pitch discrimination. In order to establish whether atypical perception might be mediated by an imbalance in the ratio of neural excitation and inhibition (E:I ratio), we also measured peak gamma frequency, which provides an indication of neural inhibition levels. Using a rigorous thresholding method, we found that orientation discrimination thresholds for obliquely oriented stimuli were significantly lower in participants with ASC. Using EEG to measure the visually induced gamma band response, we also found that peak gamma frequency was higher in participants with ASC, relative to a well-matched control group. These novel results suggest that neural inhibition may be increased in the occipital cortex of individuals with ASC. Implications for existing theories of an imbalance in the E:I ratio of ASC are discussed. (c) 2016 APA, all rights reserved).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/1001040','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/1001040"><span>Elements of a predictive model for determining beach closures on a real time basis: the case of 63rd Street Beach Chicago</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Olyphant, Greg A.; Whitman, Richard L.</p> <p>2004-01-01</p> <p>Data on hydrometeorological conditions and E. coli concentration were simultaneously collected on 57 occasions during the summer of 2000 at 63rd Street Beach, Chicago, Illinois. 
The data were used to identify and calibrate a statistical regression model aimed at predicting when the bacterial concentration of the beach water was above or below the level considered safe for full body contact. A wide range of hydrological, meteorological, and water quality variables were evaluated as possible predictive variables. These included wind speed and direction, incoming solar radiation (insolation), various time frames of rainfall, air temperature, lake stage and wave height, and water temperature, specific conductance, dissolved oxygen, pH, and turbidity. The best-fit model combined real-time measurements of wind direction and speed (onshore component of resultant wind vector), rainfall, insolation, lake stage, water temperature and turbidity to predict the geometric mean E.coliconcentration in the swimming zone of the beach. The model, which contained both additive and multiplicative (interaction) terms, accounted for 71% of the observed variability in the log E. coliconcentrations. A comparison between model predictions of when the beach should be closed and when the actualbacterial concentrations were above or below the 235 cfu 100 ml-1 threshold value, indicated that the model accurately predicted openingsversus closures 88% of the time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4534412','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4534412"><span>Length-Based Assessment of Coral Reef Fish Populations in the Main and Northwestern Hawaiian Islands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nadon, Marc O.; Ault, Jerald S.; Williams, Ivor D.; Smith, Steven G.; DiNardo, Gerard T.</p> <p>2015-01-01</p> <p>The coral reef fish community of Hawaii is composed of hundreds of species, supports a multimillion dollar fishing and tourism industry, and is of great cultural importance to the local population. However, a major stock assessment of Hawaiian coral reef fish populations has not yet been conducted. Here we used the robust indicator variable “average length in the exploited phase of the population (L¯)”, estimated from size composition data from commercial fisheries trip reports and fishery-independent diver surveys, to evaluate exploitation rates for 19 Hawaiian reef fishes. By and large, the average lengths obtained from diver surveys agreed well with those from commercial data. We used the estimated exploitation rates coupled with life history parameters synthesized from the literature to parameterize a numerical population model and generate stock sustainability metrics such as spawning potential ratios (SPR). We found good agreement between predicted average lengths in an unfished population (from our population model) and those observed from diver surveys in the largely unexploited Northwestern Hawaiian Islands. Of 19 exploited reef fish species assessed in the main Hawaiian Islands, 9 had SPRs close to or below the 30% overfishing threshold. In general, longer-lived species such as surgeonfishes, the redlip parrotfish (Scarus rubroviolaceus), and the gray snapper (Aprion virescens) had the lowest SPRs, while short-lived species such as goatfishes and jacks, as well as two invasive species (Lutjanus kasmira and Cephalopholis argus), had SPRs above the 30% threshold. 
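The indicator used above, the average length in the exploited phase of the population, feeds classical length-based mortality estimators. The best known is the Beverton-Holt equation, sketched below; it is shown only to illustrate the general approach, since the study's numerical population model is more detailed, and the parameter values in the example are invented.

```python
def beverton_holt_total_mortality(mean_length: float, l_inf: float, k: float, l_c: float) -> float:
    """Beverton-Holt estimator of total mortality Z from the average length of fish
    above the length at first capture l_c (von Bertalanffy parameters l_inf, k)."""
    return k * (l_inf - mean_length) / (mean_length - l_c)

# Invented parrotfish-like parameters for illustration.
print(round(beverton_holt_total_mortality(mean_length=38.0, l_inf=55.0, k=0.2, l_c=30.0), 3))  # 0.425
```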
PMID:26267473</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20105526','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20105526"><span>Evaluation of nonesterified fatty acids and beta-hydroxybutyrate in transition dairy cattle in the northeastern United States: Critical thresholds for prediction of clinical diseases.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ospina, P A; Nydam, D V; Stokol, T; Overton, T R</p> <p>2010-02-01</p> <p>The objectives of this study were to 1) establish cow-level critical thresholds for serum concentrations of nonesterified fatty acids (NEFA) and beta-hydroxybutyrate (BHBA) to predict periparturient diseases [displaced abomasa (DA), clinical ketosis (CK), metritis and retained placenta, or any of these three], and 2) investigate the magnitude of the metabolites' association with these diseases within 30 d in milk. In a prospective cohort study of 100 freestall, total mixed ration-fed herds in the northeastern United States, blood samples were collected from approximately 15 prepartum and 15 different postpartum transition animals in each herd, for a total of 2,758 samples. Serum NEFA concentrations were measured in the prepartum group, and both NEFA and BHBA were measured in the postpartum group. The critical thresholds for NEFA or BHBA were evaluated with receiver operator characteristic analysis for all diseases in both cohorts. The risk ratios (RR) of a disease outcome given NEFA or BHBA concentrations and other covariates were modeled with multivariable regression techniques, accounting for clustering of cows within herds. The NEFA critical threshold that predicted any of the 3 diseases in the prepartum cohort was 0.29mEq/L and in the postpartum cohort was 0.57mEq/L. The critical threshold for serum BHBA in the postpartum cohort was 10mg/dL, which predicted any of the 3 diseases. All RR with NEFA as a predictor of disease were >1.8; however, RR were greatest in animals sampled postpartum (e.g., RR for DA=9.7; 95% CI=4.2 to 22.4. All RR with BHBA as the predictor of disease were >2.3 (e.g., RR for DA=6.9; 95% CI=3.7 to 12.9). Although prepartum NEFA and postpartum BHBA were both significantly associated with development of clinical disease, postpartum serum NEFA concentration was most associated with the risk of developing DA, CK, metritis, or retained placenta during the first 30 d in milk. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29466561','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29466561"><span>Comment on Hall et al. (2017), "How to Choose Between Measures of Tinnitus Loudness for Clinical Research? 
A Report on the Reliability and Validity of an Investigator-Administered Test and a Patient-Reported Measure Using Baseline Data Collected in a Phase IIa Drug Trial".</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sabour, Siamak</p> <p>2018-03-08</p> <p>The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodology and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical test and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. The qualitative variables of sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, likelihood ratio positive and likelihood ratio negative, as well as odds ratio (i.e., ratio of true to false results), are the most appropriate estimates to evaluate validity of a test compared to a gold standard. In the case of quantitative variables, depending on distribution of the variable, Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess validity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009SPIE.7163E..0TS','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009SPIE.7163E..0TS"><span>Computational model of retinal photocoagulation and rupture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel</p> <p>2009-02-01</p> <p>In patterned scanning laser photocoagulation, shorter duration (< 20 ms) pulses help reduce thermal damage beyond the photoreceptor layer, decrease treatment time and minimize pain. However, safe therapeutic window (defined as the ratio of rupture threshold power to that of light coagulation) decreases for shorter exposures. To quantify the extent of thermal damage in the retina, and maximize the therapeutic window, we developed a computational model of retinal photocoagulation and rupture. Model parameters were adjusted to match measured thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Computed lesion width agreed with histological measurements in a wide range of pulse durations and power. 
Application of ring-shaped beam profile was predicted to double the therapeutic window width for exposures in the range of 1-10 ms.

  250. Auditory-motor integration of subliminal phase shifts in tapping: better than auditory discrimination would predict.

    PubMed

    Kagerer, Florian A; Viswanathan, Priya; Contreras-Vidal, Jose L; Whitall, Jill

    2014-04-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high-threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set. Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier sub-cortical circuitry in those with higher thresholds.

  251. Auditory-motor integration of subliminal phase shifts in tapping: Better than auditory discrimination would predict

    PubMed Central

    Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill

    2014-01-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (9 per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set (p=0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013

  252. On the Relationship between Marital Opportunity and Teen Pregnancy: The Sex Ratio Question.

    ERIC Educational Resources Information Center

    Barber, Nigel

    2001-01-01

    Used United Nations cross-national data to examine the relationship between low sex ratio, marital opportunity, and teen pregnancy. Geographical region, per capita gross national product, marital rate, and urban and rural status were used as control variables in analyses that utilized sex ratios to predict teen births. Overall, early childbearing…

  253. Spatio-Temporal Distribution of Vector-Host Contact (VHC) Ratios and Ecological Niche Modeling of the West Nile Virus Mosquito Vector, Culex quinquefasciatus, in the City of New Orleans, LA, USA

    PubMed Central

    Michaels, Sarah R.; Riegel, Claudia; Pereira, Roberto M.; Zipperer, Wayne; Lockaby, B. Graeme; Koehler, Philip G.

    2017-01-01

    The consistent sporadic transmission of West Nile Virus (WNV) in the city of New Orleans justifies the need for distribution risk maps highlighting human risk of mosquito bites. We modeled the influence of biophysical and socioeconomic metrics on the spatio-temporal distributions of presence/vector-host contact (VHC) ratios of WNV vector, Culex quinquefasciatus, within their flight range. Biophysical and socioeconomic data were extracted within 5-km buffer radii around sampling localities of gravid female Culex quinquefasciatus. The spatio-temporal correlations between VHC data and 33 variables, including climate, land use-land cover (LULC), socioeconomic, and land surface terrain were analyzed using stepwise linear regression models (RM). Using MaxEnt, we developed a distribution model using the correlated predicting variables. Only 12 factors showed significant correlations with spatial distribution of VHC ratios (R² = 81.62, p < 0.01). Non-forested wetland (NFWL), tree density (TD) and residential-urban (RU) settings demonstrated the strongest relationship. The VHC ratios showed monthly environmental resilience in terms of number and type of influential factors. The highest prediction power of RU and other urban and built up land (OUBL), was demonstrated during May-August. This association was positively correlated with the onset of the mosquito WNV infection rate during June. These findings were confirmed by the Jackknife analysis in MaxEnt and independently collected field validation points. The spatial and temporal correlations of VHC ratios and their response to the predicting variables are discussed. PMID:28786934

  254. Spatio-Temporal Distribution of Vector-Host Contact (VHC) Ratios and Ecological Niche Modeling of the West Nile Virus Mosquito Vector, Culex quinquefasciatus, in the City of New Orleans, LA, USA.

    PubMed

    Sallam, Mohamed F; Michaels, Sarah R; Riegel, Claudia; Pereira, Roberto M; Zipperer, Wayne; Lockaby, B Graeme; Koehler, Philip G

    2017-08-08

    The consistent sporadic transmission of West Nile Virus (WNV) in the city of New Orleans justifies the need for distribution risk maps highlighting human risk of mosquito bites. We modeled the influence of biophysical and socioeconomic metrics on the spatio-temporal distributions of presence/vector-host contact (VHC) ratios of WNV vector, Culex quinquefasciatus, within their flight range. Biophysical and socioeconomic data were extracted within 5-km buffer radii around sampling localities of gravid female Culex quinquefasciatus. The spatio-temporal correlations between VHC data and 33 variables, including climate, land use-land cover (LULC), socioeconomic, and land surface terrain were analyzed using stepwise linear regression models (RM). Using MaxEnt, we developed a distribution model using the correlated predicting variables. Only 12 factors showed significant correlations with spatial distribution of VHC ratios (R² = 81.62, p < 0.01). Non-forested wetland (NFWL), tree density (TD) and residential-urban (RU) settings demonstrated the strongest relationship. The VHC ratios showed monthly environmental resilience in terms of number and type of influential factors. The highest prediction power of RU and other urban and built up land (OUBL), was demonstrated during May-August. This association was positively correlated with the onset of the mosquito WNV infection rate during June. These findings were confirmed by the Jackknife analysis in MaxEnt and independently collected field validation points. The spatial and temporal correlations of VHC ratios and their response to the predicting variables are discussed.

  255. Predicting successful long-term weight loss from short-term weight-loss outcomes: new insights from a dynamic energy balance model (the POUNDS Lost study)

    PubMed Central

    Ivanescu, Andrada E; Martin, Corby K; Heymsfield, Steven B; Marshall, Kaitlyn; Bodrato, Victoria E; Williamson, Donald A; Anton, Stephen D; Sacks, Frank M; Ryan, Donna; Bray, George A

    2015-01-01

    Background: Currently, early weight-loss predictions of long-term weight-loss success rely on fixed percent-weight-loss thresholds. Objective: The objective was to develop thresholds during the first 3 mo of intervention that include the influence of age, sex, baseline weight, percent weight loss, and deviations from expected weight to predict whether a participant is likely to lose 5% or more body weight by year 1. Design: Data consisting of month 1, 2, 3, and 12 treatment weights were obtained from the 2-y Preventing Obesity Using Novel Dietary Strategies (POUNDS Lost) intervention. Logistic regression models that included covariates of age, height, sex, baseline weight, target energy intake, percent weight loss, and deviation of actual weight from expected were developed for months 1, 2, and 3 that predicted the probability of losing <5% of body weight in 1 y. Receiver operating characteristic (ROC) curves, area under the curve (AUC), and thresholds were calculated for each model. The AUC statistic quantified the ROC curve's capacity to classify participants likely to lose <5% of their body weight at the end of 1 y. The models yielding the highest AUC were retained as optimal. For comparison with current practice, ROC curves relying solely on percent weight loss were also calculated. Results: Optimal models for months 1, 2, and 3 yielded ROC curves with AUCs of 0.68 (95% CI: 0.63, 0.74), 0.75 (95% CI: 0.71, 0.81), and 0.79 (95% CI: 0.74, 0.84), respectively. Percent weight loss alone was not better at identifying true positives than random chance (AUC ≤0.50). Conclusions: The newly derived models provide a personalized prediction of long-term success from early weight-loss variables. The predictions improve on existing fixed percent-weight-loss thresholds. Future research is needed to explore model application for informing treatment approaches during early intervention. The POUNDS Lost study was registered at clinicaltrials.gov as NCT00072995. PMID:25733628
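
    As a rough illustration of the modelling step described in this record (a logistic regression on early-intervention covariates, summarized by an ROC curve and AUC), the following sketch uses scikit-learn on simulated data; every variable name and number is an assumption made for illustration, not the study's model or dataset.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)
        n = 500
        # Simulated early-intervention covariates (illustrative only).
        age = rng.normal(50, 10, n)
        baseline_wt = rng.normal(95, 15, n)              # kg
        pct_loss_m3 = rng.normal(3.0, 2.0, n)            # percent weight lost by month 3
        dev_from_expected = rng.normal(0.0, 1.5, n)      # kg above/below expected weight
        # Simulated outcome: 1 = lost <5% of body weight at 1 year.
        logit = 0.8 - 0.6 * pct_loss_m3 + 0.4 * dev_from_expected
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X = np.column_stack([age, baseline_wt, pct_loss_m3, dev_from_expected])
        model = LogisticRegression(max_iter=1000).fit(X, y)
        prob = model.predict_proba(X)[:, 1]

        auc = roc_auc_score(y, prob)
        fpr, tpr, thresholds = roc_curve(y, prob)
        best = np.argmax(tpr - fpr)                      # Youden-style operating point
        print(f"AUC = {auc:.2f}, candidate probability threshold = {thresholds[best]:.2f}")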

  256. Using the STOPBANG questionnaire and other pre-test probability tools to predict OSA in younger, thinner patients referred to a sleep medicine clinic.

    PubMed

    McMahon, Michael J; Sheikh, Karen L; Andrada, Teotimo F; Holley, Aaron B

    2017-12-01

    The STOPBANG questionnaire is used to predict the presence of obstructive sleep apnea (OSA). We sought to assess the performance of the STOPBANG questionnaire in younger, thinner patients referred to a sleep medicine clinic. We applied the STOPBANG questionnaire to patients referred for level I polysomnography (PSG) at our sleep center. We calculated likelihood ratios and area under the receiver operator characteristic (AUROC) curve and performed sensitivity analyses. We performed our analysis on 338 patients referred for PSG. Only 17.2% (n = 58) were above age 50 years, and 30.5% and 6.8% had a BMI above 30 and 35, respectively. The mean apnea-hypopnea index (AHI) was 12.9 ± 16.4, and 63.9% had an AHI ≥5. The STOPBANG (threshold ≥3) identified 83.1% of patients as high risk for an AHI ≥5, and sensitivity, specificity, positive (PPV), and negative predictive values (NPV) were 83.8, 18.0, 64.4, and 38.0%, respectively. Positive and negative likelihood ratios were poor at 1.02-1.11 and 0.55-0.90, respectively, across AHI thresholds (AHI ≥5, AHI ≥15 and AHI ≥30), and AUROCs were 0.52 (AHI ≥5) and 0.56 (AHI ≥15). Sensitivity analyses adjusting for insomnia, combat deployment, traumatic brain injury, post-traumatic stress disorder, clinically significant OSA (ESS >10 and/or co-morbid disease), and obesity did not significantly alter STOPBANG performance. In a younger, thinner population with predominantly mild-to-moderate OSA, the STOPBANG score does not accurately predict the presence of obstructive sleep apnea.
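
    The screening statistics quoted in this record all follow from a 2x2 table of questionnaire result against PSG diagnosis. A minimal sketch of that arithmetic, with made-up counts rather than the study's data:

        # Hypothetical 2x2 table: STOPBANG >= 3 (screen positive) vs. AHI >= 5 (OSA present).
        tp, fp = 181, 100   # screen-positive patients with / without OSA (illustrative counts)
        fn, tn = 35, 22     # screen-negative patients with / without OSA

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio

        # Pre-test probability (prevalence) updated by a positive screen via Bayes' rule on odds.
        pretest = (tp + fn) / (tp + fp + fn + tn)
        post_odds = pretest / (1 - pretest) * lr_pos
        posttest = post_odds / (1 + post_odds)

        print(f"Se={sensitivity:.1%} Sp={specificity:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
        print(f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} post-test prob={posttest:.1%}")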

  257. Permeability Surface of Deep Middle Cerebral Artery Territory on Computed Tomographic Perfusion Predicts Hemorrhagic Transformation After Stroke.

    PubMed

    Li, Qiao; Gao, Xinyi; Yao, Zhenwei; Feng, Xiaoyuan; He, Huijin; Xue, Jing; Gao, Peiyi; Yang, Lumeng; Cheng, Xin; Chen, Weijian; Yang, Yunjun

    2017-09-01

    Permeability surface (PS) on computed tomographic perfusion reflects blood-brain barrier permeability and is related to hemorrhagic transformation (HT). HT of deep middle cerebral artery (MCA) territory can occur after recanalization of proximal large-vessel occlusion. We aimed to determine the relationship between HT and PS of deep MCA territory. We retrospectively reviewed 70 consecutive acute ischemic stroke patients presenting with occlusion of the distal internal carotid artery or M1 segment of the MCA. All patients underwent computed tomographic perfusion within 6 hours after symptom onset. Computed tomographic perfusion data were postprocessed to generate maps of different perfusion parameters. Risk factors were identified for increased deep MCA territory PS. Receiver operating characteristic curve analysis was performed to calculate the optimal PS threshold to predict HT of deep MCA territory. Increased PS was associated with HT of deep MCA territory. After adjustments for age, sex, onset time to computed tomographic perfusion, and baseline National Institutes of Health Stroke Scale, poor collateral status (odds ratio, 7.8; 95% confidence interval, 1.67-37.14; P=0.009) and proximal MCA-M1 occlusion (odds ratio, 4.12; 95% confidence interval, 1.03-16.52; P=0.045) were independently associated with increased deep MCA territory PS. Relative PS most accurately predicted HT of deep MCA territory (area under curve, 0.94; optimal threshold, 2.89). Increased PS can predict HT of deep MCA territory after recanalization therapy for cerebral proximal large-vessel occlusion. Proximal MCA-M1 complete occlusion and distal internal carotid artery occlusion in conjunction with poor collaterals elevate deep MCA territory PS. © 2017 American Heart Association, Inc.

  258. Performance and robustness of probabilistic river forecasts computed with quantile regression based on multiple independent variables in the North Central USA

    NASA Astrophysics Data System (ADS)

    Hoss, F.; Fischbeck, P. S.

    2014-10-01

    This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues forecasts daily, this is the first QR application to US river gages. Archived forecasts for lead times up to six days from 2001-2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage in the last 24 and 48 h and the forecast error 24 and 48 h ago to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Combining the forecast with the other four variables results in much less favorable BSSs. Lastly, the forecast performance does not depend on the size of the training dataset, but on the year, the river gage, lead time and event threshold that are being forecast. We find that each event threshold requires a separate model configuration or at least calibration.
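
    A toy sketch of quantile-regression post-processing with several predictors, in the spirit of the record above, using statsmodels' QuantReg on synthetic data; the column names, coefficients, and data are illustrative assumptions, not the study's configuration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 1000
        df = pd.DataFrame({
            "forecast": rng.gamma(4.0, 1.0, n),       # forecast stage (arbitrary units)
            "rise_24h": rng.normal(0.0, 0.3, n),      # stage rise over the last 24 h
            "rise_48h": rng.normal(0.0, 0.5, n),
            "err_24h":  rng.normal(0.0, 0.2, n),      # forecast error 24 h ago
        })
        # Synthetic observed stage: forecast plus an error that depends on the extra predictors.
        df["observed"] = (df["forecast"] + 0.5 * df["rise_24h"] + 0.3 * df["err_24h"]
                          + rng.normal(0.0, 0.3, n))

        # Fit several quantiles of the observed stage given the forecast and auxiliary predictors;
        # the exceedance probability of a flood stage is then read off the fitted quantiles.
        model = smf.quantreg("observed ~ forecast + rise_24h + rise_48h + err_24h", df)
        for q in (0.05, 0.5, 0.95):
            res = model.fit(q=q)
            print(q, round(float(res.params["forecast"]), 3))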

  259. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.

  260. High Confinement Mode and Edge Localized Mode Characteristics in a Near-Unity Aspect Ratio Tokamak.

    PubMed

    Thome, K E; Bongard, M W; Barr, J L; Bodner, G M; Burke, M G; Fonck, R J; Kriete, D M; Perry, J M; Schlossberg, D J

    2016-04-29

    Tokamak experiments at near-unity aspect ratio A ≲ 1.2 offer new insights into the self-organized H-mode plasma confinement regime. In contrast to conventional A ∼ 3 plasmas, the L-H power threshold P_{LH} is ∼15× higher than scaling predictions, and it is insensitive to magnetic topology, consistent with modeling. Edge localized mode (ELM) instabilities shift to lower toroidal mode numbers as A decreases. These ultralow-A operations enable heretofore inaccessible J_{edge}(R,t) measurements through an ELM that show a complex multimodal collapse and the ejection of a current-carrying filament.

  261. High confinement mode and edge localized mode characteristics in a near-unity aspect ratio tokamak

    DOE PAGES

    Thome, Kathreen E.; Bongard, Michael W.; Barr, Jayson L.; ...

    2016-04-27

    Tokamak experiments at near-unity aspect ratio A ≲ 1.2 offer new insights into the self-organized H-mode plasma confinement regime. In contrast to conventional A ~ 3 plasmas, the L-H power threshold P_{LH} is ~15× higher than scaling predictions, and it is insensitive to magnetic topology, consistent with modeling. Edge localized mode (ELM) instabilities shift to lower toroidal mode numbers as A decreases. Furthermore, these ultralow-A operations enable heretofore inaccessible J_{edge}(R,t) measurements through an ELM that show a complex multimodal collapse and the ejection of a current-carrying filament.

  262. Vibratory Adaptation of Cutaneous Mechanoreceptive Afferents

    PubMed Central

    Bensmaïa, S. J.; Leung, Y. Y.; Hsiao, S. S.; Johnson, K. O.

    2007-01-01

    The objective of this study was to investigate the effects of extended suprathreshold vibratory stimulation on the sensitivity of slowly adapting type 1 (SA1), rapidly adapting (RA), and Pacinian (PC) afferents. To that end, an algorithm was developed to track afferent absolute (I0) and entrainment (I1) thresholds as they change over time. We recorded afferent responses to periliminal vibratory test stimuli, which were interleaved with intense vibratory conditioning stimuli during the adaptation period of each experimental run. From these measurements, the algorithm allowed us to infer changes in the afferents' sensitivity. We investigated the stimulus parameters that affect adaptation by assessing the degree to which adaptation depends on the amplitude and frequency of the adapting stimulus. For all three afferent types, I0 and I1 increased with increasing adaptation frequency and amplitude. The degree of adaptation seems to be independent of the firing rate evoked in the afferent by the conditioning stimulus. In the analysis, we distinguished between additive adaptation (in which I0 and I1 shift equally) and multiplicative effects (in which the ratio I1/I0 remains constant). RA threshold shifts are almost perfectly additive. SA1 threshold shifts are close to additive and far from multiplicative (I1 threshold shifts are twice the I0 shifts). PC shifts are more difficult to classify. We used an integrate-and-fire model to study the possible neural mechanisms. A change in transducer gain predicts a multiplicative change in I0 and I1 and is thus ruled out as a mechanism underlying SA1 and RA adaptation. A change in the resting action potential threshold predicts equal, additive change in I0 and I1 and thus accounts well for RA adaptation. A change in the degree of refractoriness during the relative refractory period predicts an additional change in I1 such as that observed for SA1 fibers. We infer that adaptation is caused by an increase in spiking thresholds produced by ion flow through transducer channels in the receptor membrane. In a companion paper, we describe the time-course of vibratory adaptation and recovery for SA1, RA, and PC fibers. PMID:16014802
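
    The mechanistic contrast drawn in this record (an additive shift in spiking threshold versus a multiplicative change in transducer gain) can be illustrated with a toy leaky integrate-and-fire unit. The sketch below uses arbitrary parameters of my own choosing, not the model or values fitted in the study.

        import numpy as np

        def spike_count(amplitude, freq_hz=40.0, gain=1.0, thresh_shift=0.0,
                        v_rest=0.0, v_thresh=1.0, tau=0.01, dt=1e-4, t_end=0.25):
            """Spikes emitted by a leaky integrate-and-fire unit driven by a sinusoid."""
            t = np.arange(0.0, t_end, dt)
            drive = gain * amplitude * np.sin(2 * np.pi * freq_hz * t)  # transducer current
            threshold = v_thresh + thresh_shift                         # additive shift
            v, n = v_rest, 0
            for i in drive:
                v += (dt / tau) * (-(v - v_rest) + i)
                if v >= threshold:
                    n += 1
                    v = v_rest
            return n

        def absolute_threshold(**kwargs):
            """Smallest amplitude evoking at least one spike (coarse grid search), an I0-like measure."""
            for amp in np.linspace(0.05, 10.0, 200):
                if spike_count(amp, **kwargs) > 0:
                    return amp
            return np.nan

        # Raising the spike threshold shifts the measured threshold additively;
        # halving transducer gain rescales it. An entrainment (I1-like) threshold could be
        # searched the same way with a one-spike-per-cycle criterion.
        print("baseline              ", absolute_threshold())
        print("raised spike threshold", absolute_threshold(thresh_shift=0.5))
        print("halved transducer gain", absolute_threshold(gain=0.5))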

  263. Modifiable pathways in Alzheimer's disease: Mendelian randomisation analysis.

    PubMed

    Larsson, Susanna C; Traylor, Matthew; Malik, Rainer; Dichgans, Martin; Burgess, Stephen; Markus, Hugh S

    2017-12-06

    To determine which potentially modifiable risk factors, including socioeconomic, lifestyle/dietary, cardiometabolic, and inflammatory factors, are associated with Alzheimer's disease. Mendelian randomisation study using genetic variants associated with the modifiable risk factors as instrumental variables. International Genomics of Alzheimer's Project. 17 008 cases of Alzheimer's disease and 37 154 controls. Odds ratio of Alzheimer's per genetically predicted increase in each modifiable risk factor estimated with Mendelian randomisation analysis. This study included analyses of 24 potentially modifiable risk factors. A Bonferroni corrected threshold of P=0.002 was considered to be significant, and P<0.05 was considered suggestive of evidence for a potential association. Genetically predicted educational attainment was significantly associated with Alzheimer's. The odds ratios were 0.89 (95% confidence interval 0.84 to 0.93; P=2.4×10⁻⁶) per year of education completed and 0.74 (0.63 to 0.86; P=8.0×10⁻⁵) per unit increase in log odds of having completed college/university. The correlated trait intelligence had a suggestive association with Alzheimer's (per genetically predicted 1 SD higher intelligence: 0.73, 0.57 to 0.93; P=0.01). There was suggestive evidence for potential associations between genetically predicted higher quantity of smoking (per 10 cigarettes a day: 0.69, 0.49 to 0.99; P=0.04) and 25-hydroxyvitamin D concentrations (per 20% higher levels: 0.92, 0.85 to 0.98; P=0.01) and lower odds of Alzheimer's and between higher coffee consumption (per one cup a day: 1.26, 1.05 to 1.51; P=0.01) and higher odds of Alzheimer's. Genetically predicted alcohol consumption, serum folate, serum vitamin B12, homocysteine, cardiometabolic factors, and C reactive protein were not associated with Alzheimer's disease. These results provide support that higher educational attainment is associated with a reduced risk of Alzheimer's disease. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
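
    A compact sketch of the core two-sample Mendelian randomisation arithmetic behind odds ratios like those above: per-variant Wald ratios combined by inverse-variance weighting. The summary statistics below are invented for illustration, not taken from the study.

        import numpy as np

        # Hypothetical per-variant summary statistics:
        # beta_x: SNP effect on the exposure (e.g., years of education),
        # beta_y: SNP effect on the outcome (log odds of Alzheimer's), se_y: its standard error.
        beta_x = np.array([0.08, 0.05, 0.11, 0.07])
        beta_y = np.array([-0.010, -0.004, -0.015, -0.008])
        se_y   = np.array([0.004, 0.003, 0.005, 0.004])

        wald = beta_y / beta_x                 # per-variant causal estimates
        se_wald = se_y / np.abs(beta_x)        # first-order standard errors
        w = 1.0 / se_wald**2                   # inverse-variance weights

        ivw = np.sum(w * wald) / np.sum(w)
        se_ivw = np.sqrt(1.0 / np.sum(w))
        lo, hi = ivw - 1.96 * se_ivw, ivw + 1.96 * se_ivw
        print(f"OR per unit exposure = {np.exp(ivw):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")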

  264. High-resolution tide projections reveal extinction threshold in response to sea-level rise.

    PubMed

    Field, Christopher R; Bayard, Trina S; Gjerdrum, Carina; Hill, Jason M; Meiman, Susan; Elphick, Chris S

    2017-05-01

    Sea-level rise will affect coastal species worldwide, but models that aim to predict these effects are typically based on simple measures of sea level that do not capture its inherent complexity, especially variation over timescales shorter than 1 year. Coastal species might be most affected, however, by floods that exceed a critical threshold. The frequency and duration of such floods may be more important to population dynamics than mean measures of sea level. In particular, the potential for changes in the frequency and duration of flooding events to result in nonlinear population responses or biological thresholds merits further research, but may require that models incorporate greater resolution in sea level than is typically used. We created population simulations for a threatened songbird, the saltmarsh sparrow (Ammodramus caudacutus), in a region where sea level is predictable with high accuracy and precision. We show that incorporating the timing of semidiurnal high tide events throughout the breeding season, including how this timing is affected by mean sea-level rise, predicts a reproductive threshold that is likely to cause a rapid demographic shift. This shift is likely to threaten the persistence of saltmarsh sparrows beyond 2060 and could cause extinction as soon as 2035. Neither extinction date nor the population trajectory was sensitive to the emissions scenarios underlying sea-level projections, as most of the population decline occurred before scenarios diverge. Our results suggest that the variation and complexity of climate-driven variables could be important for understanding the potential responses of coastal species to sea-level rise, especially for species that rely on coastal areas for reproduction. © 2016 John Wiley & Sons Ltd.

  265. Accuracy and equivalence testing of crown ratio models and assessment of their impact on diameter growth and basal area increment predictions of two variants of the Forest Vegetation Simulator

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston

    2009-01-01

    Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...

  266. Predicting critical thresholds in outlet glacier terminus behavior, Disko and Uummannaq Bays, West Greenland

    NASA Astrophysics Data System (ADS)

    York, A.; Frey, K. E.; Das, S. B.

    2017-12-01

    The seasonal and interannual variability in outlet glacier terminus position is an important indicator of overall glacier health and the net effects of ice-ocean-atmosphere interactions. However, challenges arise in determining a primary driver of glacier change, as the magnitude of retreat observed at the terminus is controlled not only by atmospheric and oceanic temperatures, but also physical constraints unique to each glacier (e.g., ice mélange buttressing and underlying bedrock/bathymetry) which often lead to a non-linear response to climate. For example, previous studies have shown varying magnitudes of terminus retreat over the last 40 years at glaciers in West Greenland, despite exposure to similar atmospheric forcings. Satellite imagery can provide the necessary spatially- and temporally-extensive resource for monitoring glacier terminus behavior. Here, we constructed a time series of 18 glacier termini digitized from over 1200 all-season Landsat images between 1985 and 2015 within Disko and Uummannaq Bays, West Greenland. We calculated change points in the annual maximum terminus retreat of the glaciers using a bootstrapping algorithm within a change point detection software. We interpolated the average monthly retreat of each terminus in order to calculate the average seasonal amplitude of each year. We found the 11 glaciers in Uummannaq Bay retreated an average of -1.26 ± 1.36 km, while the seven glaciers in Disko Bay averaged -1.13 ± 0.82 km. The majority of glaciers retreated, yet we see no latitudinal trend in magnitude of retreat on either a seasonal or long-term scale. We observe change points in the annual maximum retreat of four glacier termini in Uummannaq Bay and one in Disko Bay which are generally coincident with increased summer sea surface temperatures. In some cases, we observed smaller interannual variability in the average seasonal amplitude of years leading up to a critical threshold, followed by an increase in seasonal variability in the year prior and throughout the regime shift, until returning to a similar range of variability observed prior to the shift. As such, our findings may provide a method to predict an approaching change point at glacier termini which have not yet crossed a critical threshold through observations of increases in seasonal amplitude variability.
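
    The change-point step described in this record can be illustrated with a simple cumulative-sum statistic whose significance is judged by bootstrap resampling. This is a generic sketch on synthetic retreat data, not the software or observations used in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic annual maximum terminus retreat (km): a shift in the mean partway through.
        series = np.concatenate([rng.normal(-0.5, 0.3, 15), rng.normal(-1.5, 0.3, 16)])

        def cusum_stat(x):
            """Maximum absolute deviation of the centred cumulative sum, and its location."""
            s = np.cumsum(x - x.mean())
            k = np.argmax(np.abs(s))
            return np.abs(s[k]), k

        stat, year_idx = cusum_stat(series)
        # Bootstrap: how often does a shuffled (no change point) series give a statistic this large?
        boot = [cusum_stat(rng.permutation(series))[0] for _ in range(2000)]
        p = np.mean(np.array(boot) >= stat)
        print(f"candidate change point after year index {year_idx}, bootstrap p = {p:.3f}")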

  267. Assessing the predictability of fire occurrence and area burned across phytoclimatic regions in Spain

    NASA Astrophysics Data System (ADS)

    Bedia, J.; Herrera, S.; Gutiérrez, J. M.

    2014-01-01

    Most fire protection agencies throughout the world have developed forest fire risk forecast systems, usually building upon existing fire danger indices and meteorological forecast data. In this context, the daily predictability of wildfires is of utmost importance in order to allow the fire protection agencies to issue timely fire hazard alerts. In this study, we address the predictability of daily fire occurrence using the components of the Canadian Fire Weather Index (FWI) System and related variables calculated from the latest ECMWF (European Centre for Medium Range Weather Forecasts) reanalysis, ERA-Interim. We develop daily fire occurrence models in peninsular Spain for the period 1990-2008 and, considering different minimum burned area thresholds for fire definition, assess their ability to reproduce the inter-annual fire frequency variability. We based the analysis on a phytoclimatic classification aiming at the stratification of the territory into homogeneous units in terms of climatic and fuel type characteristics, allowing us to test model performance under different climate/fuel conditions. We then extend the analysis in order to assess the predictability of monthly burned areas. The sensitivity of the models to the level of spatial aggregation of the data is also evaluated. Additionally, we investigate the gain in model performance with the inclusion of socioeconomic and land use/land cover (LULC) covariates in model formulation. Fire occurrence models have attained good performance in most of the phytoclimatic zones considered, being able to faithfully reproduce the inter-annual variability of fire frequency. Total area burned has exhibited some dependence on the meteorological drivers, although model performance was poor in most cases. We identified temperature and some FWI system components as the most important explanatory variables, highlighting the adequacy of the FWI system for fire occurrence prediction in the study area. The results were improved when using aggregated data across regions compared to when data were sampled at the grid-box level. The inclusion of socioeconomic and LULC covariates contributed marginally to the improvement of the models, and in most cases attained no relevant contribution to total explained variance - excepting northern Spain, where anthropogenic factors are known to be the major driver of fires. Models of monthly fire counts performed better in the case of fires larger than 0.1 ha, and for the rest of the thresholds (1, 10 and 100 ha) the daily occurrence models improved the predicted inter-annual variability, indicating the added value of daily models. Fire frequency predictions may provide a preferable basis for past fire history reconstruction, long-term monitoring and the assessment of future climate impacts on fire regimes across regions, posing several advantages over burned area as a response variable. Our results leave the door open to the development of a more complex modelling framework based on daily data from numerical climate model outputs based on the FWI system.

  268. Bacterial pattern and role of laboratory parameters as marker for neonatal sepsis

    NASA Astrophysics Data System (ADS)

    Ruslie, R. H.; Tjipta, D. G.; Samosir, C. T.; Hasibuan, B. S.

    2018-03-01

    The World Health Organization (WHO) recorded 5 million neonatal deaths each year due to sepsis, and 98% were in developing countries. Diagnosis of neonatal sepsis needs to be confirmed with a positive culture from normally sterile sites. On the other hand, postponing treatment will worsen the disease and increase mortality. This study was conducted to evaluate the bacterial pattern of neonatal sepsis and to compare laboratory parameter differences between suspected and confirmed sepsis. It was a retrospective analytic study on 94 neonates in the Perinatology Division, Adam Malik General Hospital Medan, from November 2016 until January 2017. Blood cultures were taken to confirm the diagnosis. Laboratory parameters were collected from medical records. Variables with significant results were analyzed for their accuracy. P<0.05 was considered statistically significant with a 95% confidence interval. Out of 94 neonates, positive cultures were found in 55.3% of neonates, with the most common etiology being Klebsiella pneumoniae (22.6%). There were significant neutrophil/lymphocyte ratio and procalcitonin differences between suspected and confirmed sepsis (p=0.025 and p=0.008, respectively). With a diagnostic threshold of 9.4, the sensitivity and specificity of the neutrophil/lymphocyte ratio were 61.5% and 66.7%, respectively. Procalcitonin sensitivity and specificity were 84.6% and 71.4%, respectively, with a 3.6 mg/L diagnostic threshold. Neutrophil/lymphocyte ratio and procalcitonin were significantly higher in confirmed sepsis.

  269. Variability and predictability of decadal mean temperature and precipitation over China in the CCSM4 last millennium simulation

    NASA Astrophysics Data System (ADS)

    Ying, Kairan; Frederiksen, Carsten S.; Zheng, Xiaogu; Lou, Jiale; Zhao, Tianbao

    2018-02-01

    The modes of variability that arise from the slow-decadal (potentially predictable) and intra-decadal (unpredictable) components of decadal mean temperature and precipitation over China are examined, in a 1000 year (850-1850 AD) experiment using the CCSM4 model. Solar variations, volcanic aerosols, orbital forcing, land use, and greenhouse gas concentrations provide the main forcing and boundary conditions. The analysis is done using a decadal variance decomposition method that identifies sources of potential decadal predictability and uncertainty. The average potential decadal predictabilities (ratio of slow-to-total decadal variance) are 0.62 and 0.37 for the temperature and rainfall over China, respectively, indicating that the (multi-)decadal variations of temperature are dominated by slow-decadal variability, while precipitation is dominated by unpredictable decadal noise. Possible sources of decadal predictability for the two leading predictable modes of temperature are the external radiative forcing, and the combined effects of slow-decadal variability of the Arctic oscillation (AO) and the Pacific decadal oscillation (PDO), respectively. Combined AO and PDO slow-decadal variability is associated also with the leading predictable mode of precipitation. External radiative forcing as well as the slow-decadal variability of PDO are associated with the second predictable rainfall mode; the slow-decadal variability of Atlantic multi-decadal oscillation (AMO) is associated with the third predictable precipitation mode. The dominant unpredictable decadal modes are associated with intra-decadal/inter-annual phenomena. In particular, the El Niño-Southern Oscillation and the intra-decadal variability of the AMO, PDO and AO are the most important sources of prediction uncertainty.
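
    The "potential predictability" quoted in the record above is a ratio of slow (decadal) to total variance of decadal means. A much-simplified sketch of that decomposition for a single annual series follows; it is a generic variance estimate on synthetic data, not the full variance-decomposition method used in the study.

        import numpy as np

        rng = np.random.default_rng(3)
        years = 1000
        # Synthetic annual series: a slowly varying component plus inter-annual noise.
        slow = np.repeat(rng.normal(0.0, 0.4, years // 10), 10)   # changes decade to decade
        annual = slow + rng.normal(0.0, 1.0, years)

        decades = annual.reshape(-1, 10)
        decadal_means = decades.mean(axis=1)

        # Intra-decadal (noise) variance estimated from within-decade departures;
        # a 10-year decadal mean carries roughly 1/10 of that noise variance.
        noise_var = decades.var(axis=1, ddof=1).mean()
        total_var = decadal_means.var(ddof=1)
        slow_var = max(total_var - noise_var / 10.0, 0.0)

        print(f"potential decadal predictability ~ {slow_var / total_var:.2f}")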

  270. Immobilization thresholds of electrofishing relative to fish size

    USGS Publications Warehouse

    Dolan, C.R.; Miranda, L.E.

    2003-01-01

    Fish size and electrical waveforms have frequently been associated with variation in electrofishing effectiveness. Under controlled laboratory conditions, we measured the electrical power required by five electrical waveforms to immobilize eight fish species of diverse sizes and shapes. Fish size was indexed by total body length, surface area, volume, and weight; shape was indexed by the ratio of body length to body depth. Our objectives were to identify immobilization thresholds, elucidate the descriptors of fish size that were best associated with those immobilization thresholds, and determine whether the vulnerability of a species relative to other species remained constant across electrical treatments. The results confirmed that fish size is a key variable controlling the immobilization threshold and further suggested that the size descriptor best related to immobilization is fish volume. The peak power needed to immobilize fish decreased rapidly with increasing fish volume in small fish but decreased slowly for fish larger than 75-100 cm³. Furthermore, when we controlled for size and shape, different waveforms did not favor particular species, possibly because of the overwhelming effect of body size. Many of the immobilization inconsistencies previously attributed to species might simply represent the effect of disparities in body size.

  271. Regression Discontinuity Designs in Epidemiology

    PubMed Central

    Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till

    2014-01-01

    When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007-2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45-0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
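
    A bare-bones sketch of the regression discontinuity estimate described above: fit separate local linear regressions on each side of the threshold within a bandwidth and take the difference of the two fits at the cutoff. The data, bandwidth, and effect size below are invented for illustration, not the cohort analysis itself.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 4000
        cd4 = rng.uniform(50, 350, n)                 # running variable (cells/uL)
        eligible = cd4 < 200                          # treatment assigned below the threshold
        # Synthetic outcome: a log-hazard-like score (lower is better) with a treatment effect.
        y = 0.002 * cd4 - 0.35 * eligible + rng.normal(0, 0.5, n)

        def side_fit(x, y, cutoff=200.0, bandwidth=50.0, below=True):
            if below:
                mask = (x >= cutoff - bandwidth) & (x < cutoff)
            else:
                mask = (x >= cutoff) & (x <= cutoff + bandwidth)
            coef = np.polyfit(x[mask] - cutoff, y[mask], 1)   # local linear fit
            return np.polyval(coef, 0.0)                      # predicted value at the cutoff

        effect = side_fit(cd4, y, below=True) - side_fit(cd4, y, below=False)
        print(f"estimated effect of eligibility at the 200 cells/uL cutoff: {effect:+.2f}")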

  272. Evaluation of teledermatology adoption by health-care professionals using a modified Technology Acceptance Model.

    PubMed

    Orruño, Estibalitz; Gagnon, Marie Pierre; Asua, José; Ben Abdeljelil, Anis

    2011-01-01

    We examined the main factors affecting the intention of physicians to use teledermatology using a modified Technology Acceptance Model (TAM). The investigation was carried out during a teledermatology pilot study conducted in Spain. A total of 276 questionnaires were sent to physicians by email and 171 responded (62%). Cronbach's alpha was acceptably high for all constructs. Theoretical variables were well correlated with each other and with the dependent variable (Intention to Use). Logistic regression indicated that the original TAM model was good at predicting physicians' intention to use teledermatology and that the variables Perceived Usefulness and Perceived Ease of Use were both significant (odds ratios of 8.4 and 7.4, respectively). When other theoretical variables were added, the model was still significant and it also became more powerful. However, the only significant predictor in the modified model was Facilitators with an odds ratio of 9.9. Thus the TAM was good at predicting physicians' intention to use teledermatology. However, the most important variable was the perception of Facilitators to using the technology (e.g. infrastructure, training and support).

  273. Evaluation of viral load thresholds for predicting new WHO Stage 3 and 4 events in HIV-infected children receiving highly active antiretroviral therapy

    PubMed Central

    Siberry, George K; Harris, D. Robert; Oliveira, Ricardo Hugo; Krauss, Margot R.; Hofer, Cristina B.; Tiraboschi, Adriana Aparecida; Marques, Heloisa; Succi, Regina C.; Abreu, Thalita; Negra, Marinella Della; Mofenson, Lynne M.; Hazra, Rohan

    2012-01-01

    Background This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly-active antiretroviral therapy (HAART). Methods Cox proportional hazards modeling was used to assess the adjusted risk of World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART ≥ 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies/mL, with model fit evaluated on the basis of the minimum Akaike Information Criterion (AIC) value, a standard model fit statistic. Results Models were based on 67 subjects with WHO events out of 550 subjects on study. The VL cutpoints of > 2600 copies/mL and > 32,000 copies/mL corresponded to the lowest AIC values and were associated with the highest hazard ratios [2.0 (p = 0.015) and 2.1 (p = 0.0058), respectively] for WHO events. Conclusions In HIV-infected Latin American children on stable HAART, two distinct VL thresholds (> 2,600 copies/mL and > 32,000 copies/mL) were identified for predicting children at significantly increased risk of HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors. PMID:22343177
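
    A sketch of the cut-point search described in this record: fit a Cox model for each candidate viral-load threshold and keep the one with the lowest AIC. It uses the lifelines package on simulated data; all column names, hazards, and cut-points are illustrative assumptions, not the cohort's data or final model.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(5)
        n = 550
        vl = 10 ** rng.uniform(2, 5, n)                 # viral load, copies/mL
        cd4 = rng.normal(25, 8, n)                      # CD4 percentage (illustrative)
        # Simulated event times: higher hazard when viral load exceeds ~3000 copies/mL.
        hazard = 0.05 * np.exp(0.7 * (vl > 3000) - 0.03 * (cd4 - 25))
        time = rng.exponential(1.0 / hazard)
        event = time < 3.0                              # administrative censoring at 3 years
        time = np.minimum(time, 3.0)

        results = {}
        for cut in (400, 1000, 2600, 5000, 10000, 32000):
            df = pd.DataFrame({"time": time, "event": event.astype(int),
                               "vl_above_cut": (vl > cut).astype(int), "cd4": cd4})
            cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
            aic = -2.0 * cph.log_likelihood_ + 2 * len(cph.params_)
            results[cut] = round(aic, 1)
        best = min(results, key=results.get)
        print(results, "-> lowest-AIC cut-point:", best)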

  274. Comparison of Various Anthropometric Indices as Risk Factors for Hearing Impairment in Asian Women.

    PubMed

    Kang, Seok Hui; Jung, Da Jung; Lee, Kyu Yup; Choi, Eun Woo; Do, Jun Young

    2015-01-01

    The objective of the present study was to examine the associations between various anthropometric measures and metabolic syndrome and hearing impairment in Asian women. We identified 11,755 women who underwent voluntary routine health checkups at Yeungnam University Hospital between June 2008 and April 2014. Among these patients, 2,485 participants were <40 years old, and 1,072 participants lacked information regarding their laboratory findings or hearing and were therefore excluded. In total 8,198 participants were recruited into our study. The AUROC value for metabolic syndrome was 0.790 for the waist to hip ratio (WHR). The cutoff value was 0.939. The sensitivity and specificity for predicting metabolic syndrome were 72.7% and 71.7%, respectively. The AUROC value for hearing loss was 0.758 for WHR. The cutoff value was 0.932. The sensitivity and specificity for predicting hearing loss were 65.8% and 73.4%, respectively. The WHR had the highest AUC and was the best predictor of metabolic syndrome and hearing loss. Univariate and multivariate linear regression analyses showed that WHR levels were positively associated with four hearing thresholds including averaged hearing threshold and low, middle, and high frequency thresholds. In addition, multivariate logistic analysis revealed that those with a high WHR had a 1.347-fold increased risk of hearing loss compared with the participants with a low WHR. Our results demonstrated that WHR may be a surrogate marker for predicting the risk of hearing loss resulting from metabolic syndrome.

  275. Identification of novel uncertainty factors and thresholds of toxicological concern for health hazard and risk assessment: Application to cleaning product ingredients.

    PubMed

    Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W

    2018-04-01

    Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50), and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50s)-to-chronic (reproductive/developmental NOAELs) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, derived UFs were smaller than default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to application of default values for ingredients in cleaning products and other chemical classes. Findings can also support chemical substitutions during alternatives assessment, and data dossier development (e.g., read across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data is unavailable for specific chemicals. Copyright © 2018 Elsevier Ltd. All rights reserved.
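
    A small sketch of the Monte Carlo step described in the record above: sample acute and chronic threshold distributions, form their ratio, and read an uncertainty factor off a chosen percentile. The lognormal parameters are invented for illustration, not the paper's fitted toxicity distributions.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000
        # Hypothetical lognormal toxicity distributions (mg/kg), e.g. for one chemical category.
        acute_ld50 = rng.lognormal(mean=np.log(500.0), sigma=0.9, size=n)
        chronic_noael = rng.lognormal(mean=np.log(20.0), sigma=1.1, size=n)

        acr = acute_ld50 / chronic_noael                 # acute-to-chronic ratios
        lo, med, hi = np.percentile(acr, [2.5, 50, 97.5])
        print(f"ACR median {med:.0f}, 95% interval {lo:.0f}-{hi:.0f}")

        # A protective uncertainty factor could be taken as an upper percentile of the ratio.
        uf = np.percentile(acr, 95)
        print(f"candidate acute-to-chronic uncertainty factor ~ {uf:.0f}")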

  276. Three-level sampler having automated thresholds

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.

    1976-01-01

    A three-level sampler is described that has its thresholds controlled automatically so as to track changes in the statistics of the random process being sampled. In particular, the mean value is removed and the ratio of the standard deviation of the random process to the threshold is maintained constant. The system is configured in such a manner that slow drifts in the level comparators and digital-to-analog converters are also removed. The ratio of the standard deviation to threshold level may be chosen within the constraints of the ratios of two integers N and M. These may be chosen to minimize the quantizing noise of the sampled process.
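
    A digital analogue of the adaptive three-level quantizer described above: the running mean is removed and the threshold is servoed so that a fixed fraction of samples falls outside ±threshold, which pins the threshold-to-standard-deviation ratio. The update constants and target fraction are arbitrary illustrations, not the original hardware design.

        import numpy as np

        rng = np.random.default_rng(7)
        x = 2.0 + 1.5 * rng.normal(size=20000)     # random process with a nonzero mean

        mean_est, thresh = 0.0, 1.0
        target_outside = 0.33                      # desired fraction quantized to +/-1
        eps_mean, eps_thr = 0.01, 0.01
        codes = np.empty_like(x, dtype=int)

        for i, sample in enumerate(x):
            mean_est += eps_mean * (sample - mean_est)        # remove slow drift / mean
            centred = sample - mean_est
            code = 0 if abs(centred) < thresh else int(np.sign(centred))
            codes[i] = code
            # Servo the threshold: raise it when too many samples land outside, lower otherwise.
            outside = 1.0 if code != 0 else 0.0
            thresh += eps_thr * (outside - target_outside) * thresh

        print("final threshold / std ratio:", round(thresh / x.std(), 2))
        print("fraction of samples outside:", round(np.mean(codes != 0), 2))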
Bland-Altman analysis was used to assess agreement in measurements obtained with different devices and it was shown that the intersubject variability of the thresholds obtained with the two devices was comparable for all four thresholds tested. In contrast, the intrasubject variability of the thresholds for heat, heat pain, and cold pain detection was significantly lower with the silent device. Our results show that thermal sensory thresholds measured with the two devices are comparable. However, our data suggest that, for studies with repeated measurements on the same subjects, a silent thermotesting device may allow detection of smaller differences in the treatment effects and/or may permit the use of a smaller number of tested subjects. Muscle Nerve 40: 257-263, 2009.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20090533','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20090533"><span>Identifying cochlear implant channels with poor electrode-neuron interface: partial tripolar, single-channel thresholds and psychophysical tuning curves.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bierer, Julie Arenberg; Faulkner, Kathleen F</p> <p>2010-04-01</p> <p>The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, and tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve (PTC) measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar (pTP) electrode configuration are predictive of wide or tip-shifted PTCs. Data were collected from five cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp., Sylmar, CA). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the pTP configuration for which a fraction of current (sigma) from a center-active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked PTCs were obtained for channels with the highest, lowest, and median tripolar (sigma = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (sigma = 0) or a more focused pTP (sigma > or = 0.55) configuration. The masker channel and level were varied, whereas the configuration was fixed to sigma = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, sigma, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, pTP stimulus, had significantly broader PTCs than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. 
    Tuning curves were also wider for monopolar probes than for pTP probes, for both the highest and lowest threshold channels. These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.

280. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
281. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory.

    PubMed

    Acelam, Philip A

    2015-01-01

    To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); the P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Out of the five anthropometric variables studied, three of them (body frame, stature, and weight), each with P < 0.0001, were significant. Two of the variables, age (R2 = 0.01; P = 0.20) and obesity (R2 = 0.03; P = 0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths.
    There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity, as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selection of ureteral stents continues to remain a challenge. However, height (R2 = 0.68), with a (match:above:below) ratio of 3:3:4, appears suited for use as an estimator, but only on the basis of a decision rule. Additional research is recommended for stent improvements and ureteric length determinations.

282. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory

    PubMed Central

    Acelam, Philip A

    2015-01-01

    Objective To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. Materials and methods In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); the P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. Results The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Out of the five anthropometric variables studied, three of them (body frame, stature, and weight), each with P < 0.0001, were significant. Two of the variables, age (R2 = 0.01; P = 0.20) and obesity (R2 = 0.03; P = 0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. Conclusion There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity, as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selection of ureteral stents continues to remain a challenge. However, height (R2 = 0.68), with a (match:above:below) ratio of 3:3:4, appears suited for use as an estimator, but only on the basis of a decision rule. Additional research is recommended for stent improvements and ureteric length determinations. PMID:26317082
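As a rough illustration of the kind of check reported in the two records above, the sketch below fits a simple linear regression of ureteric length on height and then tallies a (match:above:below) ratio for the predictions. The data, the assumed ±2 cm tolerance used to call a prediction a "match", and the variable names are hypothetical placeholders for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: height (cm) and measured ureteric length (cm).
height = rng.normal(170, 10, size=100)
ureter = 0.13 * height + rng.normal(0, 1.5, size=100)

# Ordinary least-squares fit of ureteric length on height.
slope, intercept = np.polyfit(height, ureter, deg=1)
predicted = slope * height + intercept

# Classify each prediction relative to an assumed +/-2 cm tolerance.
tol = 2.0
match = np.sum(np.abs(predicted - ureter) <= tol)
above = np.sum(predicted - ureter > tol)
below = np.sum(ureter - predicted > tol)
print(f"match:above:below = {match}:{above}:{below}")
```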
283. Facial arthralgia and myalgia: can they be differentiated by trigeminal sensory assessment?

    PubMed

    Eliav, Eli; Teich, Sorin; Nitzan, Dorit; El Raziq, Daood Abid; Nahlieli, Oded; Tal, Michael; Gracely, Richard H; Benoliel, Rafael

    2003-08-01

    Heat and electrical detection thresholds were assessed in 72 patients suffering from painful temporomandibular disorder. Employing widely accepted criteria, 44 patients were classified as suffering from temporomandibular joint (TMJ) arthralgia (i.e. pain originating from the TMJ) and 28 from myalgia (i.e. pain originating from the muscles of mastication). Electrical stimulation was employed to assess thresholds in large myelinated nerve fibers (Abeta) and heat application to assess thresholds in unmyelinated nerve fibers (C). The sensory tests were performed bilaterally in three trigeminal nerve sites: the auriculotemporal nerve territory (AUT), the buccal nerve territory (BUC), and the mental nerve territory (MNT). In addition, 22 healthy asymptomatic controls were examined. A subset of ten arthralgia patients underwent arthrocentesis, and electrical detection thresholds were additionally assessed following the procedure. Electrical detection threshold ratios were calculated by dividing the affected side by the control side; thus reduced ratios indicate hypersensitivity of the affected side. In control patients, ratios obtained at all sites did not vary significantly from the expected value of 'one' (mean with 95% confidence intervals: AUT, 1 [0.95-1.06]; BUC, 1.01 [0.93-1.11]; MNT, 0.97 [0.88-1.05]; one-sample analysis, P > 0.05 for all areas). In arthralgia patients, mean ratios (+/- SEM) obtained for the AUT territory (0.63 +/- 0.03) were significantly lower compared with ratios for the MNT (1.02 +/- 0.03) and BUC (0.96 +/- 0.04) territories (repeated measures analysis of variance (RANOVA), P < 0.0001) and compared with the AUT ratios in myalgia (1.27 +/- 0.09) and control subjects (1 +/- 0.06; ANOVA, P < 0.0001). In the myalgia group, the electrical detection threshold ratios in the AUT territory were significantly elevated compared with the AUT ratios in control subjects (Dunnett test, P < 0.05), but only approached statistical significance compared with the MNT (1.07 +/- 0.04) and BUC (1.11 +/- 0.06) territories (RANOVA, F(2,27) = 3.12, P = 0.052). There were no significant differences between or within the groups for electrical detection threshold ratios in the BUC and MNT nerve territories, or for the heat detection thresholds at all tested sites. Following arthrocentesis, mean electrical detection threshold ratios in the AUT territory were significantly elevated from 0.64 +/- 0.06 to 0.99 +/- 0.04, indicating resolution of the hypersensitivity (paired t-test, P = 0.001). In conclusion, large myelinated fiber hypersensitivity is found in the skin overlying TMJs with clinical pain and pathology but is not found in controls. In patients with muscle-related facial pain there was a significant elevation of the electrical detection threshold in the AUT region.
284. Prediction of height increment for models of forest growth

    Treesearch

    Albert R. Stage

    1975-01-01

    Functional forms of equations were derived for predicting 10-year periodic height increment of forest trees from height, diameter, diameter increment, and habitat type. Crown ratio was considered as an additional variable for prediction, but its contribution was negligible. Coefficients of the function were estimated for 10 species of trees growing in 10 habitat types...

285. Factors related to the joint probability of flooding on paired streams

    USGS Publications Warehouse

    Koltun, G.F.; Sherwood, J.M.

    1998-01-01

    The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflows at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed, which can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less than 2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
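A logistic-regression model of the kind described in the flooding record above can be sketched as follows: the response is whether a trial (a day on which one stream exceeds its 2-year threshold) is also a joint-exceedance event, and the predictors are basin characteristics such as centroid distance and drainage-area ratio. The data, variable names, and coefficients below are hypothetical, assuming scikit-learn; the sketch only illustrates the structure of the fit, not the published equations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500  # hypothetical trials (days with exceedance at one stream of the pair)

# Hypothetical basin characteristics for each trial.
centroid_dist = rng.uniform(5, 100, n)   # km between drainage-area centroids
area_ratio = rng.uniform(0.01, 1.0, n)   # smaller/larger drainage area

# Hypothetical outcome: joint exceedance more likely for close, similar basins.
logit = 1.5 - 0.05 * centroid_dist + 2.0 * area_ratio
joint_event = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([centroid_dist, area_ratio])
model = LogisticRegression().fit(X, joint_event)

# Estimated probability of joint 2-year exceedance for a new stream pair.
print(model.predict_proba([[20.0, 0.5]])[0, 1])
```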
286. Evaluating Alerting and Guidance Performance of a UAS Detect-And-Avoid System

    NASA Technical Reports Server (NTRS)

    Lee, Seung Man; Park, Chunki; Thipphavong, David P.; Isaacson, Douglas R.; Santiago, Confesor

    2016-01-01

    A key challenge to the routine, safe operation of unmanned aircraft systems (UAS) is the development of detect-and-avoid (DAA) systems to aid the UAS pilot in remaining "well clear" of nearby aircraft. The goal of this study is to investigate the effect of alerting criteria and pilot response delay on the safety and performance of UAS DAA systems in the context of routine civil UAS operations in the National Airspace System (NAS). A NAS-wide fast-time simulation study was conducted to assess UAS DAA system performance with a large number of encounters and a broad set of DAA alerting and guidance system parameters. Three attributes of the DAA system were controlled as independent variables in the study to conduct trade-off analyses: UAS trajectory prediction method (dead-reckoning vs. intent-based), alerting time threshold (related to predicted time to LoWC), and alerting distance threshold (related to predicted Horizontal Miss Distance, or HMD). A set of metrics, such as the percentage of true positive, false positive, and missed alerts, based on signal detection theory, and analysis methods utilizing Receiver Operating Characteristic (ROC) curves were proposed to evaluate the safety and performance of DAA alerting and guidance systems and aid development of DAA system performance standards. The effect of pilot response delay on the performance of DAA systems was evaluated using a DAA alerting and guidance model and a pilot model developed to support this study. A total of 18 fast-time simulations were conducted with nine different DAA alerting threshold settings and two different trajectory prediction methods, using recorded radar traffic from current Visual Flight Rules (VFR) operations, supplemented with DAA-equipped UAS traffic based on mission profiles modeling future UAS operations. Results indicate that the DAA alerting distance threshold has a greater effect on DAA system performance than the DAA alerting time threshold or the ownship trajectory prediction method. Further analysis of the alert lead time (the time in advance of predicted loss of well clear at which a DAA alert is first issued) indicated a strong positive correlation between alert lead time and DAA system performance (i.e., the ability of the UAS pilot to maneuver the unmanned aircraft to remain well clear). While larger distance thresholds had beneficial effects on alert lead time and missed alert rate, they also generated a higher rate of false alerts. In the design and development of DAA alerting and guidance systems, therefore, the positive and negative effects of false alerts and missed alerts should be carefully considered to achieve acceptable alerting system performance by balancing false and missed alerts. The results and methodology presented in this study are expected to help stakeholders, policymakers, and standards committees define appropriate settings of DAA system parameter thresholds for UAS that ensure safety while minimizing operational impacts to the NAS and equipage requirements for its users before DAA operational performance standards are finalized.
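The alert-classification metrics mentioned in the record above (true positives, false positives, and missed alerts against actual losses of well clear) can be tallied from a table of encounters, for example as in the sketch below. The encounter data and the alert/loss-of-well-clear flags are hypothetical placeholders, not outputs of the NASA simulation.

```python
# Hypothetical encounter records: (alert_issued, loss_of_well_clear_occurred)
encounters = [
    (True, True), (True, False), (False, False),
    (False, True), (True, True), (False, False),
]

tp = sum(a and l for a, l in encounters)          # alerted and LoWC occurred
fp = sum(a and not l for a, l in encounters)      # alerted but no LoWC
missed = sum(l and not a for a, l in encounters)  # LoWC without an alert

total_lowc = tp + missed
print(f"true positive rate: {tp / total_lowc:.2f}")
print(f"missed alert rate:  {missed / total_lowc:.2f}")
print(f"false alerts:       {fp}")
```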
287. Assessment of the forecast skill of spring onset in the NMME experiment

    NASA Astrophysics Data System (ADS)

    Carrillo, C. M.; Ault, T.

    2017-12-01

    This study assesses the predictability of spring onset using an index of its interannual variability. We use the North American Multi-Model Ensemble (NMME) experiment to assess this predictability. The input data used to compute the spring onset index, SI-x, were treated with a daily joint bias correction (JBC) approach, and the SI-x outputs were post-processed using three ensemble model output statistic (EMOS) approaches: logistic regression, Gaussian ensemble dressing, and non-homogeneous Gaussian regression. These EMOS approaches quantify the effect of training period length and ensemble size on forecast skill. The highest range of predictability for the timing of spring onset is from 10 to 60 days, located along a narrow band between 35° and 45°N in the US. Using rank probability scores based on quantiles (q), a forecast threshold of q = 0.5 provides a range of predictability that falls into two categories, 10-40 and 40-60 days, which seems to represent the effect of the intra-seasonal scale. At higher thresholds (q = 0.6 and 0.7), predictability shows a lower range, with values around 10-30 days. The post-processing work using JBC improves the predictability skill by 13% over uncorrected results.
    Using EMOS, a significant positive change in the skill score is noted in regions where the skill with JBC shows evidence of improvement. The consensus of these techniques shows that regions of better predictability can be expanded.

288. Empirical relationships among oliguria, creatinine, mortality, and renal replacement therapy in the critically ill.

    PubMed

    Mandelbaum, Tal; Lee, Joon; Scott, Daniel J; Mark, Roger G; Malhotra, Atul; Howell, Michael D; Talmor, Daniel

    2013-03-01

    The observation periods and thresholds of serum creatinine and urine output defined in the Acute Kidney Injury Network (AKIN) criteria were not empirically derived. By continuously varying creatinine/urine output thresholds as well as the observation period, we sought to investigate the empirical relationships among creatinine, oliguria, in-hospital mortality, and receipt of renal replacement therapy (RRT). Using a high-resolution database (Multiparameter Intelligent Monitoring in Intensive Care II), we extracted data from 17,227 critically ill patients with an in-hospital mortality rate of 10.9%; 14,526 of these patients had urine output measurements. Various combinations of creatinine/urine output thresholds and observation periods were investigated by building multivariate logistic regression models for in-hospital mortality and RRT predictions. For creatinine, both absolute and percentage increases were analyzed. To visualize the dependence of adjusted mortality and RRT rate on creatinine, urine output, and the observation period, we generated contour plots. Mortality risk was high when the absolute creatinine increase was high regardless of the observation period, when the percentage creatinine increase was high and the observation period was long, and when oliguria was sustained for a long period of time. Similar contour patterns emerged for RRT. The variability in predictive accuracy was small across different combinations of thresholds and observation periods. The contour plots presented in this article complement the AKIN definition. A multi-center study should confirm the universal validity of the results presented in this article.
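The threshold/observation-window scan described in the record above amounts to refitting an outcome model while sweeping the cut point used to flag patients. A minimal sketch of that loop is shown below, assuming scikit-learn and hypothetical per-patient creatinine increases and mortality labels; the thresholds and data are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000

# Hypothetical 48-hour creatinine increase (mg/dL) and in-hospital mortality.
cr_increase = rng.gamma(shape=2.0, scale=0.2, size=n)
mortality = rng.random(n) < 1 / (1 + np.exp(-(2.5 * cr_increase - 2.5)))

# Sweep candidate thresholds; flag patients above each and refit the model.
for thr in [0.2, 0.3, 0.5, 0.8]:
    flagged = (cr_increase >= thr).astype(int).reshape(-1, 1)
    model = LogisticRegression().fit(flagged, mortality)
    auc = roc_auc_score(mortality, model.predict_proba(flagged)[:, 1])
    print(f"threshold {thr:.1f} mg/dL: AUC = {auc:.3f}")
```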
289. Observations of the missing baryons in the warm-hot intergalactic medium.

    PubMed

    Nicastro, F; Kaastra, J; Krongold, Y; Borgani, S; Branchini, E; Cen, R; Dadina, M; Danforth, C W; Elvis, M; Fiore, F; Gupta, A; Mathur, S; Mayya, D; Paerels, F; Piro, L; Rosa-Gonzalez, D; Schaye, J; Shull, J M; Torres-Zafra, J; Wijers, N; Zappacosta, L

    2018-06-01

    It has been known for decades that the observed number of baryons in the local Universe falls about 30-40 per cent short1,2 of the total number of baryons predicted3 by Big Bang nucleosynthesis, as inferred4,5 from density fluctuations of the cosmic microwave background and seen during the first 2-3 billion years of the Universe in the so-called 'Lyman α forest'6,7 (a dense series of intervening H I Lyman α absorption lines in the optical spectra of background quasars). A theoretical solution to this paradox locates the missing baryons in the hot and tenuous filamentary gas between galaxies, known as the warm-hot intergalactic medium. However, it is difficult to detect them there because the largest by far constituent of this gas, hydrogen, is mostly ionized and therefore almost invisible in far-ultraviolet spectra with typical signal-to-noise ratios8,9. Indeed, despite large observational efforts, only a few marginal claims of detection have been made so far2,10. Here we report observations of two absorbers of highly ionized oxygen (O VII) in the high-signal-to-noise-ratio X-ray spectrum of a quasar at a redshift higher than 0.4. These absorbers show no variability over a two-year timescale and have no associated cold absorption, making the assumption that they originate from the quasar's intrinsic outflow or the host galaxy's interstellar medium implausible. The O VII systems lie in regions characterized by large (four times larger than average11) galaxy overdensities and their number (down to the sensitivity threshold of our data) agrees well with numerical simulation predictions for the long-sought warm-hot intergalactic medium. We conclude that the missing baryons have been found.

290. Emotional exhaustion and workload predict clinician-rated and objective patient safety

    PubMed Central

    Welp, Annalena; Meier, Laurenz L.; Manser, Tanja

    2015-01-01

    Aims: To investigate the role of clinician burnout, demographic, and organizational characteristics in predicting subjective and objective indicators of patient safety. Background: Maintaining clinician health and ensuring safe patient care are important goals for hospitals. While these goals are not independent from each other, the interplay between clinician psychological health, demographic and organizational variables, and objective patient safety indicators is poorly understood. The present study addresses this gap. Method: Participants were 1425 physicians and nurses working in intensive care. Regression analysis (multilevel) was used to investigate the effect of burnout as an indicator of psychological health, demographic (e.g., professional role and experience) and organizational (e.g., workload, predictability) characteristics on standardized mortality ratios, length of stay, and clinician-rated patient safety. Results: Clinician-rated patient safety was associated with burnout, trainee status, and professional role. Mortality was predicted by emotional exhaustion. Length of stay was predicted by workload. Contrary to our expectations, burnout did not predict length of stay, and workload and predictability did not predict standardized mortality ratios. Conclusion: At least in the short term, clinicians seem to be able to maintain safety despite high workload and low predictability. Nevertheless, burnout poses a safety risk. Subjectively, burnt-out clinicians rated safety lower, and objectively, units with high emotional exhaustion had higher standardized mortality ratios. In summary, our results indicate that clinician psychological health and patient safety could be managed simultaneously. Further research needs to establish causal relationships between these variables and support the development of managerial guidelines to ensure clinicians' psychological health and patients' safety. PMID:25657627
291. Prehospital shock index and pulse pressure/heart rate ratio to predict massive transfusion after severe trauma: Retrospective analysis of a large regional trauma database.

    PubMed

    Pottecher, Julien; Ageron, François-Xavier; Fauché, Clémence; Chemla, Denis; Noll, Eric; Duranteau, Jacques; Chapiteau, Laurent; Payen, Jean-François; Bouzat, Pierre

    2016-10-01

    Early and accurate detection of severe hemorrhage is critical for a timely trigger of massive transfusion (MT). Hemodynamic indices combining heart rate (HR) and either systolic blood pressure (shock index [SI]) or pulse pressure (PP) (PP/HR ratio) have been shown to track blood loss during hemorrhage. The present study assessed the accuracy of prehospital SI and PP/HR ratio to predict subsequent MT, using the gray-zone approach. This was a retrospective analysis (January 1, 2009, to December 31, 2011) of a prospectively developed trauma registry (TRENAU), in which the triage scheme combines patient severity and hospital facilities. Thresholds for MT were defined as either classic (≥10 red blood cell units within the first 24 hours [MT1]) or critical (≥3 red blood cell units within the first hour [MT2]). Receiver operating characteristic curves and gray zones were defined for SI and PP/HR ratio to predict MT1 and MT2 and compared against the initial triage scheme. The TRENAU registry included 3,689 trauma patients, of which 2,557 had complete chart recovery and 176 (6.9%) required MT. In the whole population, the PP/HR ratio and SI moderately and similarly predicted MT1 (area under the receiver operating characteristic curve, 0.77 [95% confidence interval {CI}, 0.70-0.84] and 0.80 [95% CI, 0.74-0.87], respectively; p = 0.064) and MT2 (0.71 [95% CI, 0.67-0.76] and 0.72 [95% CI, 0.68-0.77], respectively; p = 0.48). The proportions of patients in the gray zone for the PP/HR ratio and SI were 61% versus 40%, respectively, to predict MT1 (p < 0.001) and 62% versus 71%, respectively, to predict MT2 (p < 0.001). In the least severe patients, both indices had fair accuracy to predict MT1 (0.91 [95% CI, 0.82-1.00] vs. 0.87 [95% CI, 0.79-1.00]; p = 0.638), and the PP/HR ratio outperformed SI to predict MT2 (0.72 [95% CI, 0.59-0.84] vs. 0.54 [95% CI, 0.33-0.74]; p < 0.015). In an unselected trauma population, prehospital SI and PP/HR ratio were moderately accurate in predicting MT. In the seemingly least severe patients, an improvement in prehospital undertriage for MT may be gained by using the PP/HR ratio. Epidemiologic study, level III.
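For reference, the two prehospital indices in the record above are simple ratios of routinely recorded vital signs: SI = HR / systolic blood pressure and PP/HR = (systolic - diastolic) / HR. The sketch below computes both and an AUC for a hypothetical set of patients, assuming scikit-learn; the values are invented and the sketch does not reproduce the study's bootstrap gray-zone procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 300

# Hypothetical prehospital vital signs and massive-transfusion outcome.
sbp = rng.normal(120, 25, n).clip(60, 200)        # systolic BP, mmHg
dbp = sbp - rng.normal(45, 10, n).clip(20, 80)    # diastolic BP, mmHg
hr = rng.normal(95, 20, n).clip(40, 180)          # heart rate, bpm
mt = rng.random(n) < 1 / (1 + np.exp(-(0.04 * hr - 0.05 * sbp + 1.0)))

shock_index = hr / sbp        # SI = HR / SBP
pp_hr = (sbp - dbp) / hr      # pulse pressure / heart rate

# Higher SI and lower PP/HR are expected with larger blood loss.
print("AUC, shock index:", round(roc_auc_score(mt, shock_index), 3))
print("AUC, PP/HR ratio:", round(roc_auc_score(mt, -pp_hr), 3))
```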
292. Dependence of RNA:DNA ratios and Fulton's K condition indices on environmental characteristics of plaice and dab nursery grounds

    NASA Astrophysics Data System (ADS)

    De Raedemaecker, F.; Brophy, D.; O'Connor, I.; O'Neill, B.

    2012-02-01

    This field study showed a lack of correlation between a morphometric (Fulton's K) and a biochemical (RNA:DNA ratio) condition index in juvenile plaice (Pleuronectes platessa) and dab (Limanda limanda) studied to assess habitat quality in four sandy beach nursery grounds in Galway Bay, Ireland. Based on monthly surveys from June to September in 2008 and 2009, fish growth, indicated by RNA:DNA ratios and Fulton's K, displayed considerable spatio-temporal variability. Site-related patterns in Fulton's K for plaice and dab were consistent between years whereas RNA:DNA ratios displayed annual and interspecific variability among nursery habitats. This indicates a higher sensitivity of RNA:DNA ratios to short-term environmental fluctuations which is not apparent in Fulton's K measurements of juvenile flatfish. Generalized Additive Modelling (GAM) revealed non-linear relationships between the condition indices and (biotic and abiotic) habitat characteristics as well as diet features derived from gut content analyses. Density of predators, sediment grain size and salinity were the most important predictors of both condition indices. Temperature also affected condition indices in dab whereas plaice condition indices varied with depth. Diet features did not contribute to the explained variability in the models predicting RNA:DNA ratios whereas certain prey groups significantly improved the explained variability in the models predicting Fulton's K of plaice and dab. The value of both indices for assessing fish condition and habitat quality in field studies is discussed. These findings aid understanding of the biological and physical mechanisms promoting fast growth and high survival, which will help to identify high quality nursery areas for juvenile plaice and dab.
293. Precluding nonlinear ISI in direct detection long-haul fiber optic systems

    NASA Technical Reports Server (NTRS)

    Swenson, Norman L.; Shoop, Barry L.; Cioffi, John M.

    1991-01-01

    Long-distance, high-rate fiber optic systems employing directly modulated 1.55-micron single-mode lasers and conventional single-mode fiber suffer severe intersymbol interference (ISI) with a large nonlinear component. A method of reducing the nonlinearity of the ISI, thereby making linear equalization more viable, is investigated. It is shown that the degree of nonlinearity is highly dependent on the choice of laser bias current, and that in some cases the ISI nonlinearity can be significantly reduced by biasing the laser substantially above threshold. Simulation results predict that an increase in signal-to-nonlinear-distortion ratio as high as 25 dB can be achieved for synchronously spaced samples at an optimal sampling phase by increasing the bias current from 1.2 times threshold to 3.5 times threshold. The high SDR indicates that a linear tapped delay line equalizer could be used to mitigate ISI. Furthermore, the shape of the pulse response suggests that partial response precoding and digital feedback equalization would be particularly effective for this channel.

294. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies of how a factor influences an outcome variable, the factor has no influence, or a positive effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether there is a threshold effect in the relationship between a factor (x) and an outcome variable (y) can be examined by fitting a smooth curve and observing whether a piecewise linear relationship is present, and then analyzed using a segmented regression model, a likelihood ratio test (LRT), and bootstrap resampling. Empower Stats software, developed by the American company X & Y Solutions Inc., has a threshold effect analysis module. Users can specify a threshold value and fit segmented models at that given cut point, or let the software determine the optimal threshold from the data automatically and calculate a confidence interval for the threshold.
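A minimal version of the threshold-effect analysis described in the record above can be written directly: fit a two-segment (piecewise linear) regression over a grid of candidate thresholds, keep the cut point with the lowest residual error, and bootstrap it for a rough confidence interval. This sketch uses numpy only and simulated data; it is an illustration of the idea, not the Empower Stats implementation.

```python
import numpy as np

def fit_segmented(x, y, knots):
    """Return (best_knot, sse) for a two-segment linear fit over candidate knots."""
    best = (None, np.inf)
    for k in knots:
        # Basis: intercept, x, and hinge term max(x - k, 0) for the slope change.
        X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best[1]:
            best = (k, sse)
    return best

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 400)
y = np.where(x < 6, 0.2 * x, 0.2 * 6 + 1.5 * (x - 6)) + rng.normal(0, 0.3, 400)

knots = np.linspace(1, 9, 81)
threshold, _ = fit_segmented(x, y, knots)

# Bootstrap the estimated threshold for a rough 95% confidence interval.
boot = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    boot.append(fit_segmented(x[idx], y[idx], knots)[0])
print("threshold:", round(threshold, 2), "95% CI:", np.percentile(boot, [2.5, 97.5]))
```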
295. Derivation of groundwater threshold values for analysis of impacts predicted at potential carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Last, G. V.; Murray, C. J.; Bott, Y.

    2016-06-01

    The U.S. Department of Energy's (DOE's) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts to groundwater quality due to carbon dioxide (CO2) or brine leakage, should it occur from deep CO2 storage reservoirs. These efforts targeted two classes of aquifer: an unconfined fractured carbonate aquifer based on the Edwards Aquifer in Texas, and a confined alluvium aquifer based on the High Plains Aquifer in Kansas. Hypothetical leakage scenarios focus on wellbores as the most likely conduits from the storage reservoir to an underground source of drinking water (USDW). To facilitate evaluation of potential degradation of the USDWs, threshold values, below which there would be no predicted impacts, were determined for each of these two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency's Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities. Results demonstrate the importance of establishing baseline groundwater quality conditions that capture the spatial and temporal variability of the USDWs prior to CO2 injection and storage.

296. Drug Concentration Thresholds Predictive of Therapy Failure and Death in Children With Tuberculosis: Bread Crumb Trails in Random Forests.

    PubMed

    Swaminathan, Soumya; Pasipanodya, Jotam G; Ramachandran, Geetha; Hemanth Kumar, A K; Srivastava, Shashikant; Deshpande, Devyani; Nuermberger, Eric; Gumbo, Tawanda

    2016-11-01

    The role of drug concentrations in clinical outcomes in children with tuberculosis is unclear. Target concentrations for dose optimization are unknown. Plasma drug concentrations measured in Indian children with tuberculosis were modeled using compartmental pharmacokinetic analyses. The children were followed until the end of therapy to ascertain therapy failure or death. An ensemble of artificial intelligence algorithms, including random forests, was used to identify predictors of clinical outcome from among 30 clinical, laboratory, and pharmacokinetic variables. Among the 143 children with known outcomes, there was high between-child variability in isoniazid, rifampin, and pyrazinamide concentrations: 110 (77%) completed therapy, 24 (17%) failed therapy, and 9 (6%) died. The main predictors of therapy failure or death were a pyrazinamide peak concentration <38.10 mg/L and a rifampin peak concentration <3.01 mg/L. The relative risk of these poor outcomes below these peak concentration thresholds was 3.64 (95% confidence interval [CI], 2.28-5.83). Isoniazid exhibited concentration-dependent antagonism with rifampin and pyrazinamide, with an adjusted odds ratio for therapy failure of 3.00 (95% CI, 2.08-4.33) in the antagonistic concentration range. With regard to death alone as an outcome, the same drug concentrations, plus z scores (indicators of malnutrition) and age <3 years, were highly ranked predictors. In children <3 years old, an isoniazid 0- to 24-hour area under the concentration-time curve <11.95 mg/L × hour and/or a rifampin peak <3.10 mg/L were the best predictors of therapy failure, with a relative risk of 3.43 (95% CI, .99-11.82). We have identified new antibiotic target concentrations, which are potential biomarkers associated with treatment failure and death in children with tuberculosis. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.
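The headline relative-risk figure in the tuberculosis record above comes from a simple 2x2 comparison: the rate of poor outcomes among children below the concentration thresholds divided by the rate among children above them. The sketch below shows that calculation for hypothetical counts; the numbers are invented for illustration and are not the study data.

```python
# Hypothetical 2x2 table: rows = below/above PK thresholds, columns = poor/good outcome.
below_poor, below_good = 25, 50   # children below the peak-concentration thresholds
above_poor, above_good = 8, 60    # children above the thresholds

risk_below = below_poor / (below_poor + below_good)
risk_above = above_poor / (above_poor + above_good)
relative_risk = risk_below / risk_above

print(f"risk below thresholds: {risk_below:.2f}")
print(f"risk above thresholds: {risk_above:.2f}")
print(f"relative risk:         {relative_risk:.2f}")
```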
297. Predicting arsenic in drinking water wells of the Central Valley, California

    USGS Publications Warehouse

    Ayotte, Joseph; Nolan, Bernard T.; Gronberg, JoAnn M.

    2016-01-01

    Probabilities of arsenic in groundwater at depths used for domestic and public supply in the Central Valley of California are predicted using weak-learner ensemble models (boosted regression trees, BRT) and more traditional linear models (logistic regression, LR). Both methods captured major processes that affect arsenic concentrations, such as the chemical evolution of groundwater, redox differences, and the influence of aquifer geochemistry. Inferred flow-path length was the most important variable, but near-surface-aquifer geochemical data also were significant. A unique feature of this study was that previously predicted nitrate concentrations in three dimensions were themselves predictive of arsenic and indicated an important redox effect at >10 μg/L, indicating low arsenic where nitrate was high. Additionally, a variable representing three-dimensional aquifer texture from the Central Valley Hydrologic Model was an important predictor, indicating high arsenic associated with fine-grained aquifer sediment. BRT outperformed LR at the 5 μg/L threshold in all five predictive performance measures and at 10 μg/L in four out of five measures. BRT yielded higher prediction sensitivity (39%) than LR (18%) at the 10 μg/L threshold, a useful outcome because a major objective of the modeling was to improve our ability to predict high arsenic areas.
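The BRT-versus-LR comparison above can be reproduced in outline with scikit-learn's GradientBoostingClassifier as a stand-in for boosted regression trees: fit both models, label observations by whether arsenic exceeds 10 μg/L, and compare sensitivity on held-out data. The features and data below are simulated placeholders, not the USGS dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
n = 3000

# Hypothetical predictors: flow-path length, nitrate, fine-grained texture fraction.
X = np.column_stack([
    rng.uniform(0, 50, n), rng.lognormal(1.0, 1.0, n), rng.uniform(0, 1, n),
])
# Hypothetical arsenic > 10 ug/L indicator with a nonlinear dependence.
p = 1 / (1 + np.exp(-(0.08 * X[:, 0] - 0.3 * X[:, 1] + 3 * (X[:, 2] > 0.7) - 3)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("BRT", GradientBoostingClassifier()),
                    ("LR", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    sens = recall_score(y_te, model.predict(X_te))  # sensitivity for the high-arsenic class
    print(f"{name}: sensitivity = {sens:.2f}")
```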
298. Comparison of correlated correlations.

    PubMed

    Cohen, A

    1989-12-01

    We consider a problem where kappa highly correlated variables are available, each being a candidate for predicting a dependent variable. Only one of the kappa variables can be chosen as a predictor, and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24-hour proteinuria by the protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.

299. Predictive genetic testing for the identification of high-risk groups: a simulation study on the impact of predictive ability

    PubMed Central

    2011-01-01

    Background Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models. Methods We assessed sensitivity, specificity, and positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants. Results We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures. Conclusions The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC. PMID:21797996
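The trade-off reported in the record above (sensitivity versus positive predictive value as the size of the high-risk group changes) is easy to reproduce with a toy simulation: draw risk scores, pick a cut point that defines a high-risk group of a given frequency, and tabulate the two measures. The sketch below does this with numpy under purely illustrative assumptions about the score distribution and disease frequency.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Hypothetical continuous risk score; higher score means higher disease risk.
# Under these assumptions the overall disease frequency is roughly 5%.
score = rng.normal(0, 1, n)
p_disease = 1 / (1 + np.exp(-(score * 1.2 - 3.2)))
disease = rng.random(n) < p_disease

for high_risk_freq in [0.01, 0.05, 0.20]:
    cutoff = np.quantile(score, 1 - high_risk_freq)   # threshold defining the group
    high_risk = score >= cutoff
    sensitivity = disease[high_risk].sum() / disease.sum()
    ppv = disease[high_risk].mean()
    print(f"high-risk group {high_risk_freq:>4.0%}: "
          f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```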
300. Ethnicity and the association between anthropometric indices of obesity and cardiovascular risk in women: a cross-sectional study

    PubMed Central

    Goh, Louise G H; Dhaliwal, Satvinder S; Welborn, Timothy A; Lee, Andy H; Della, Phillip R

    2014-01-01

    Objectives The objectives of this study were to determine whether the cross-sectional associations between anthropometric obesity measures, body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR), and calculated 10-year cardiovascular disease (CVD) risk using the Framingham and general CVD risk score models, are the same for women of Australian, UK and Ireland, North European, South European and Asian descent. This study would investigate which anthropometric obesity measure is most predictive at identifying women at increased CVD risk in each ethnic group. Design Cross-sectional data from the National Heart Foundation Risk Factor Prevalence Study. Setting Population-based survey in Australia. Participants 4354 women aged 20–69 years with no history of heart disease, diabetes or stroke. Most participants were of Australian, UK and Ireland, North European, South European or Asian descent (97%). Outcome measures Anthropometric obesity measures that demonstrated stronger predictive ability of identifying women at increased CVD risk and likelihood of being above the promulgated treatment thresholds of various risk score models. Results Central obesity measures, WC and WHR, were better predictors of cardiovascular risk. WHR reported a stronger predictive ability than WC and BMI in Caucasian women. In Northern European women, BMI was a better indicator of risk using the general CVD (10% threshold) and Framingham (20% threshold) risk score models. WC was the most predictive of cardiovascular risk among Asian women. Conclusions Ethnicity should be incorporated into CVD assessment. The same anthropometric obesity measure cannot be used across all ethnic groups. Ethnic-specific CVD prevention and treatment strategies need to be further developed. PMID:24852299
301. Application of nuclear pumped laser to an optical self-powered neutron detector

    NASA Astrophysics Data System (ADS)

    Yamanaka, N.; Takahashi, H.; Iguchi, T.; Nakazawa, M.; Kakuta, T.; Yamagishi, H.; Katagiri, M.

    1996-05-01

    A Nuclear Pumped Laser (NPL) using a 3He/Ne/Ar gas mixture is investigated for the purpose of applying it to an optical self-powered neutron detector. Reactor experiments and simulations of the lasing mechanism have been made to estimate the best gas pressure and mixture ratios with respect to the threshold input power density (or thermal neutron flux) in the 3He/Ne/Ar mixture. Calculational results show that the best mixture pressure is 3He/Ne/Ar = 2280/60/100 Torr, with a thermal neutron flux threshold of 5×10^12 n/cm^2/s, while reactor experiments made in the research reactor "YAYOI" of the University of Tokyo and "JRR-4" of JAERI also demonstrate that the excitational efficiency is maximized in a gas mixture similar to that predicted by the calculation.

302. A standardized model for predicting flap failure using indocyanine green dye

    NASA Astrophysics Data System (ADS)

    Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.

    2016-03-01

    Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) underwent pre-flap treatments two weeks prior to undergoing random pattern dorsal fasciocutaneous flaps with a length-to-width ratio of 2:1 (3 x 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion for the negative control group at the time of the flap procedure (144.3 +/- 17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the average flap perfusion for the negative control group, specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.
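The critical-threshold rule in the flap study above (flag a flap when its mean perfusion falls more than three standard deviations below the negative-control mean) is straightforward to express in code. The sketch below computes such a threshold and the resulting sensitivity and specificity for hypothetical perfusion readings; the numbers are invented, and only the "mean minus k standard deviations" logic follows the abstract.

```python
import numpy as np

# Hypothetical mean perfusion (absolute perfusion units) for negative-control flaps.
control = np.array([150.2, 128.9, 147.8, 150.3])
threshold = control.mean() - 3 * control.std(ddof=1)   # mean - 3 SD rule

# Hypothetical test flaps: (mean perfusion, eventually necrosed?)
flaps = [(90.0, True), (130.0, False), (60.0, True), (110.0, False), (75.0, True)]

pred_fail = [apu < threshold for apu, _ in flaps]
actual = [necrosed for _, necrosed in flaps]

tp = sum(p and a for p, a in zip(pred_fail, actual))
tn = sum((not p) and (not a) for p, a in zip(pred_fail, actual))
sensitivity = tp / sum(actual)
specificity = tn / (len(actual) - sum(actual))
print(f"threshold = {threshold:.1f} APU, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```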
303. Validation of the MDS research criteria for prodromal Parkinson's disease: Longitudinal assessment in a REM sleep behavior disorder (RBD) cohort.

    PubMed

    Fereshtehnejad, Seyed-Mohammad; Montplaisir, Jacques Y; Pelletier, Amelie; Gagnon, Jean-François; Berg, Daniela; Postuma, Ronald B

    2017-06-01

    Recently, the International Parkinson and Movement Disorder Society introduced research criteria for prodromal PD. Our study aimed to examine the diagnostic accuracy of the criteria as well as the independence of prodromal markers in predicting conversion to PD or dementia with Lewy bodies. This prospective cohort study was performed on 121 individuals with rapid eye movement sleep behavior disorder who were followed annually for 1 to 12 years. Using data from a comprehensive panel of prodromal markers, the likelihood ratio and post-test probability of the criteria were calculated at baseline and during each follow-up visit. Forty-eight (39.7%) individuals with rapid eye movement sleep behavior disorder converted to PD/dementia with Lewy bodies. The prodromal criteria had 81.3% sensitivity and 67.9% specificity for conversion to PD/dementia with Lewy bodies at 4-year follow-up. One year before conversion, sensitivity was 100%. The criteria predicted dementia with Lewy bodies with even higher accuracy than PD without dementia at onset. Those who met the threshold of the prodromal criteria at baseline had significantly more rapid conversion into a neurodegenerative state (4.8 vs. 9.1 years; P < 0.001). Pair-wise combinations of different prodromal markers showed that markers were independent of one another. The prodromal criteria are a promising tool for predicting incidence of PD/dementia with Lewy bodies and conversion time in a rapid eye movement sleep behavior disorder cohort, with high sensitivity and high specificity at long follow-up. Prodromal markers influence the overall likelihood ratio independently, allowing them to be reliably multiplied. Defining additional markers with high likelihood ratio, further studies with longitudinal assessment, and testing thresholds in different target populations will improve the criteria. © 2017 International Parkinson and Movement Disorder Society.
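The criteria evaluated above work by multiplying independent likelihood ratios for each marker into a total LR and converting it, together with a prior probability, into a post-test probability. A generic sketch of that arithmetic is given below; the prior and the individual LR values are illustrative placeholders, not the values specified by the MDS criteria.

```python
def post_test_probability(prior, likelihood_ratios):
    """Combine a prior probability with independent likelihood ratios via odds."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr   # markers multiply because they are assumed independent
    return odds / (1 + odds)

# Hypothetical example: prior of 2% and three positive prodromal markers.
prior = 0.02
marker_lrs = [130.0, 2.3, 1.8]   # illustrative marker likelihood ratios
print(f"post-test probability: {post_test_probability(prior, marker_lrs):.2%}")
```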
  303. Validation of the MDS research criteria for prodromal Parkinson's disease: Longitudinal assessment in a REM sleep behavior disorder (RBD) cohort.

    PubMed

    Fereshtehnejad, Seyed-Mohammad; Montplaisir, Jacques Y; Pelletier, Amelie; Gagnon, Jean-François; Berg, Daniela; Postuma, Ronald B

    2017-06-01

    Recently, the International Parkinson and Movement Disorder Society introduced the prodromal criteria for PD. Our study aimed to examine the diagnostic accuracy of the criteria as well as the independence of prodromal markers in predicting conversion to PD or dementia with Lewy bodies. This prospective cohort study was performed on 121 individuals with rapid eye movement sleep behavior disorder who were followed annually for 1 to 12 years. Using data from a comprehensive panel of prodromal markers, the likelihood ratio and post-test probability of the criteria were calculated at baseline and during each follow-up visit. Forty-eight (39.7%) individuals with rapid eye movement sleep behavior disorder converted to PD/dementia with Lewy bodies. The prodromal criteria had 81.3% sensitivity and 67.9% specificity for conversion to PD/dementia with Lewy bodies at 4-year follow-up. One year before conversion, sensitivity was 100%. The criteria predicted dementia with Lewy bodies with even higher accuracy than PD without dementia at onset. Those who met the threshold of the prodromal criteria at baseline had significantly more rapid conversion to a neurodegenerative state (4.8 vs. 9.1 years; P < 0.001). Pair-wise combinations of different prodromal markers showed that the markers were independent of one another. The prodromal criteria are a promising tool for predicting incidence of PD/dementia with Lewy bodies and conversion time in a rapid eye movement sleep behavior disorder cohort, with high sensitivity and high specificity over long follow-up. Prodromal markers influence the overall likelihood ratio independently, allowing them to be reliably multiplied. Defining additional markers with high likelihood ratios, further studies with longitudinal assessment, and testing the thresholds in different target populations will improve the criteria. © 2017 International Parkinson and Movement Disorder Society.
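The point about independent prodromal markers multiplying into an overall likelihood ratio can be made concrete with a minimal post-test probability calculation. The marker likelihood ratios and the 2% prior below are hypothetical placeholders, not values taken from the MDS criteria tables.

```python
import numpy as np

def post_test_probability(pre_test_prob, likelihood_ratios):
    """Combine independent marker likelihood ratios by multiplying them on the odds scale."""
    odds = pre_test_prob / (1.0 - pre_test_prob)
    odds *= np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

# Hypothetical example: a 2% age/sex prior followed by three positive markers
# with likelihood ratios of 130, 2.3 and 1.8 (placeholder values).
print(round(post_test_probability(0.02, [130, 2.3, 1.8]), 3))  # ~0.917
```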
  304. Investigation of critical equivalence ratio and chemical speciation in flames of ethylbenzene-ethanol blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Therrien, Richard J.; Ergut, Ali; Levendis, Yiannis A.

    This work investigates five different one-dimensional, laminar, atmospheric pressure, premixed ethanol/ethylbenzene flames (0%, 25%, 50%, 75% and 90% ethanol by weight) at their soot onset threshold (φ_critical). Liquid ethanol/ethylbenzene mixtures were pre-vaporized in nitrogen, blended with an oxygen-nitrogen mixture and, upon ignition, burned in premixed one-dimensional flames at atmospheric pressure. The flames were controlled so that each was at its visual soot onset threshold, and all had similar temperature profiles (determined by thermocouples). Fixed gases, light volatile hydrocarbons, polycyclic aromatic hydrocarbons (PAH), and oxygenated aromatic hydrocarbons were directly sampled at three locations in each flame. The experimental results were compared with a detailed kinetic model, and the modeling results were used to perform a reaction flux analysis of key species. The critical equivalence ratio was observed to increase in a parabolic fashion as ethanol concentration increased in the fuel mixture. The experimental results showed increasing trends of methane, ethane, and ethylene with increasing concentrations of ethanol in the flames. Carbon monoxide was also seen to increase significantly with the increase of ethanol in the flame, which removes carbon from the PAH and soot formation pathways. The PAH and oxygenated aromatic hydrocarbon values were very similar in the 0%, 25% and 50% ethanol flames, but significantly lower in the 75% and 90% ethanol flames. These results were in general agreement with the model and were reflected by the model soot predictions. The model predicted similar soot profiles for the 0%, 25% and 50% ethanol flames; however, it predicted significantly lower values in the 75% and 90% ethanol flames. The reaction flux analysis revealed benzyl to be a major contributor to single and double ring aromatics (i.e., benzene and naphthalene), which was identified in a similar role in nearly sooting or highly sooting ethylbenzene flames. The presence of this radical was significantly reduced as ethanol concentration was increased in the flames, and this effect, in combination with the lower carbon-to-oxygen ratios and the enhanced formation of carbon monoxide, is likely what allowed higher equivalence ratios to be reached without forming soot.

  305. Improving Hypertension Screening in Childhood Using Modified Blood Pressure to Height Ratio.

    PubMed

    Dong, Bin; Wang, Zhiqiang; Wang, Hai-Jun; Ma, Jun

    2016-06-01

    Blood pressure to height ratio (BPHR) has been suggested as a simple method for screening children with hypertension, but its discriminatory ability in young children is not as good as that in older children. Using data from 89,664 Chinese children aged 7 to 11 years, the authors assessed whether a modified BPHR (BP:eHT13) was better than BPHR in identifying young children with hypertension. BP:eHT13 was estimated as BP/(height + 7×(13 − age in years)). Using Youden's index, the thresholds of systolic/diastolic BP:eHT13 for identifying prehypertension and hypertension were 0.67/0.44 and 0.69/0.45, respectively. These proposed thresholds revealed high sensitivity, specificity, negative predictive value, and area under the curve (AUC), ranging from 0.874 to 0.999. In addition, BP:eHT13 showed better AUCs and fewer cutoff points than, if not similar to, two existing BPHR references. BP:eHT13 generally performed better than BPHR in discriminating BP abnormalities in young children and may improve early hypertension recognition and control. ©2015 Wiley Periodicals, Inc.
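A minimal sketch of the BP:eHT13 index defined in the record above, assuming blood pressure in mmHg and height in cm (units the abstract does not state); the example child is invented, and only the formula and the published cutoffs come from the abstract.

```python
def bp_eht13(bp_mmhg, height_cm, age_years):
    """Modified blood-pressure-to-height ratio: BP / (height + 7 * (13 - age))."""
    return bp_mmhg / (height_cm + 7 * (13 - age_years))

# Published cutoffs (systolic/diastolic): prehypertension 0.67/0.44, hypertension 0.69/0.45.
sbp, dbp, height, age = 118, 76, 135, 9   # hypothetical 9-year-old
s_ratio, d_ratio = bp_eht13(sbp, height, age), bp_eht13(dbp, height, age)
if s_ratio >= 0.69 or d_ratio >= 0.45:
    category = "hypertension"
elif s_ratio >= 0.67 or d_ratio >= 0.44:
    category = "prehypertension"
else:
    category = "normal"
print(round(s_ratio, 3), round(d_ratio, 3), category)
```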
  306. Brachial-to-ankle pulse wave velocity as an independent prognostic factor for ovulatory response to clomiphene citrate in women with polycystic ovary syndrome

    PubMed Central

    2014-01-01

    Background Polycystic ovary syndrome (PCOS) carries a risk for cardiovascular disease, and increased arterial stiffness has been observed in women with PCOS. The purpose of the present study was to investigate whether the brachial-to-ankle pulse wave velocity (baPWV) is a prognostic factor for ovulatory response to clomiphene citrate (CC) in women with PCOS. Methods This was a retrospective cohort study of 62 women with PCOS conducted from January 2009 to December 2012 at the university hospital, Yamagata, Japan. We analyzed 62 infertile PCOS patients who received CC. Ovulation was induced by 100 mg CC for 5 days. CC non-responders were defined by failure to ovulate for at least 2 consecutive CC-treatment cycles. The endocrine, metabolic, and cardiovascular parameters of the CC responder (38 patients) and non-responder (24 patients) groups were compared. Results In univariate analysis, waist-to-hip ratio, free testosterone level, percentages of patients with dyslipidemia, impaired glucose tolerance, and diabetes mellitus, blood glucose and insulin levels at 60 min and 120 min, the areas under the curve of glucose and insulin after a 75-g oral glucose tolerance test, and baPWV were significantly higher in CC non-responders than in responders. In multivariate logistic regression analysis, both waist-to-hip ratio (odds ratio, 1.77; 95% confidence interval, 2.2–14.1; P = 0.04) and baPWV (odds ratio, 1.71; 95% confidence interval, 1.1–2.8; P = 0.03) were independent predictors of ovulation induction by CC in PCOS patients. The predictive values of waist-to-hip ratio and baPWV for CC resistance in PCOS patients were determined by receiver operating characteristic curves. The areas under the curves for waist-to-hip ratio and baPWV were 0.76 and 0.77, respectively. Setting the threshold at 0.83 for waist-to-hip ratio offered the best compromise between specificity (0.65) and sensitivity (0.84), while setting the threshold at 1,182 cm/s for baPWV offered the best compromise between specificity (0.80) and sensitivity (0.71). Conclusions Both metabolic and cardiovascular parameters were predictive of CC resistance in PCOS patients. The measurement of baPWV may be a useful tool to predict ovulation in PCOS patients who receive CC. PMID:25024746
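The receiver operating characteristic analysis used to pick the baPWV cutoff can be reproduced in a few lines. The sketch below generates synthetic responder/non-responder baPWV values and selects the cutoff that maximizes Youden's index; the data are invented, so the resulting cutoff will only loosely resemble the 1,182 cm/s reported above.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic baPWV values (cm/s) for 38 CC responders (label 0) and 24 non-responders (label 1).
rng = np.random.default_rng(1)
bapwv = np.concatenate([rng.normal(1100, 120, 38), rng.normal(1260, 130, 24)])
non_responder = np.concatenate([np.zeros(38), np.ones(24)])

fpr, tpr, thresholds = roc_curve(non_responder, bapwv)
best = np.argmax(tpr - fpr)   # Youden's J = sensitivity + specificity - 1
print(f"AUC={roc_auc_score(non_responder, bapwv):.2f}, "
      f"cutoff={thresholds[best]:.0f} cm/s, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```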
  307. Prediction of spatially explicit rainfall intensity–duration thresholds for post-fire debris-flow generation in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-01-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity–duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach whose results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach of defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions of similar accuracy to, and in some cases better than, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.

  308. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-02-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity-duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach whose results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity-duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach of defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity-duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions of similar accuracy to, and in some cases better than, previously published regional intensity-duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity-duration thresholds do not currently exist.
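The two records above describe thresholds obtained by inverting a likelihood model rather than by compiling historical threshold libraries. The sketch below shows the general idea under the assumption that the modeled logit is linear in rainfall intensity for fixed site covariates; the intercept and slope here are placeholders, not the published coefficients.

```python
import numpy as np

def intensity_threshold(p_target, intercept, slope):
    """Rainfall intensity at which a logistic debris-flow likelihood model reaches p_target,
    assuming the logit is linear in intensity for fixed site covariates."""
    return (np.log(p_target / (1.0 - p_target)) - intercept) / slope

# Placeholder coefficients: in the published approach the slope would be assembled from
# burn-severity, terrain and soil covariates of the basin; these numbers are invented.
intercept, slope = -3.6, 0.18   # slope per mm/h of short-duration intensity
for p in (0.25, 0.50, 0.75):
    print(f"P={p:.2f}: threshold = {intensity_threshold(p, intercept, slope):.1f} mm/h")
```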
  309. Body composition analysis: Cellular level modeling of body component ratios.

    PubMed

    Wang, Z; Heymsfield, S B; Pi-Sunyer, F X; Gallagher, D; Pierson, R N

    2008-01-01

    During the past two decades, a major outgrowth of efforts by our research group at St. Luke's-Roosevelt Hospital is the development of body composition models that include cellular level models, models based on body component ratios, total body potassium models, multi-component models, and resting energy expenditure-body composition models. This review summarizes these models with emphasis on component ratios that we believe are fundamental to understanding human body composition during growth and development and in response to disease and treatments. In-vivo measurements reveal that in healthy adults some component ratios show minimal variability and are relatively 'stable', for example total body water/fat-free mass and fat-free mass density. These ratios can be effectively applied for developing body composition methods. In contrast, other ratios, such as total body potassium/fat-free mass, are highly variable in vivo and therefore are less useful for developing body composition models. In order to understand the mechanisms governing the variability of these component ratios, we have developed eight cellular level ratio models and from them derived simplified models that share, as a major determining factor, the extracellular-to-intracellular water ratio (E/I). The E/I value varies widely among adults. Model analysis reveals that the magnitude and variability of each body component ratio can be predicted by correlating the cellular level model with the E/I value. Our approach thus provides new insights into and improved understanding of body composition ratios in adults.
  310. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC at a cost of only 0.097 dB in total peak signal-to-noise ratio and a 0.228% increase in the total bit rate.
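A rough, purely illustrative sketch of the kind of early-termination logic the record above describes: predicting motion activity from neighbouring macroblocks and restricting the candidate partition modes for stationary or homogeneous regions. The function names, thresholds, and mode lists are assumptions and do not reproduce the authors' algorithm or the H.264/AVC reference software.

```python
# Purely illustrative; not the authors' algorithm nor JM reference-software code.
def predict_motion_activity(neighbor_mvs):
    """Estimate current-MB motion activity from left/top/top-right neighbour motion vectors."""
    return max(abs(dx) + abs(dy) for dx, dy in neighbor_mvs) if neighbor_mvs else 0

def candidate_inter_modes(sad_current, sad_colocated, neighbor_mvs,
                          t_stationary, t_homogeneous):
    """Restrict the partition modes tried by the encoder for easy macroblocks."""
    if abs(sad_current - sad_colocated) < t_stationary:
        return ["SKIP", "16x16"]                      # temporally stationary region
    if predict_motion_activity(neighbor_mvs) <= t_homogeneous:
        return ["16x16", "16x8", "8x16"]              # low activity: large partitions only
    return ["16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]  # full mode search

print(candidate_inter_modes(950, 940, [(0, 1), (1, 0)], t_stationary=64, t_homogeneous=4))
```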
  311. Development and Validation of a Scoring System to Predict Outcomes of Patients With Primary Biliary Cirrhosis Receiving Ursodeoxycholic Acid Therapy.

    PubMed

    Lammers, Willem J; Hirschfield, Gideon M; Corpechot, Christophe; Nevens, Frederik; Lindor, Keith D; Janssen, Harry L A; Floreani, Annarosa; Ponsioen, Cyriel Y; Mayo, Marlyn J; Invernizzi, Pietro; Battezzati, Pier M; Parés, Albert; Burroughs, Andrew K; Mason, Andrew L; Kowdley, Kris V; Kumagi, Teru; Harms, Maren H; Trivedi, Palak J; Poupon, Raoul; Cheung, Angela; Lleo, Ana; Caballeria, Llorenç; Hansen, Bettina E; van Buuren, Henk R

    2015-12-01

    Approaches to risk stratification for patients with primary biliary cirrhosis (PBC) are limited, single-center based, and often dichotomous. We aimed to develop and validate a better model for determining the prognoses of patients with PBC. We performed an international, multicenter meta-analysis of 4119 patients with PBC treated with ursodeoxycholic acid at liver centers in 8 European and North American countries. Patients were randomly assigned to derivation (n = 2488 [60%]) and validation (n = 1631 [40%]) cohorts. A risk score (GLOBE score) to predict transplantation-free survival was developed and validated with univariate and multivariable Cox regression analyses using clinical and biochemical variables obtained after 1 year of ursodeoxycholic acid therapy. Risk score outcomes were compared with the survival of age-, sex-, and calendar-time-matched members of the general population. The prognostic ability of the GLOBE score was evaluated alongside those of the Barcelona, Paris-1, Rotterdam, Toronto, and Paris-2 criteria. Age (hazard ratio = 1.05; 95% confidence interval [CI]: 1.04-1.06; P < .0001); levels of bilirubin (hazard ratio = 2.56; 95% CI: 2.22-2.95; P < .0001), albumin (hazard ratio = 0.10; 95% CI: 0.05-0.24; P < .0001), and alkaline phosphatase (hazard ratio = 1.40; 95% CI: 1.18-1.67; P = .0002); and platelet count (hazard ratio per 10-unit decrease = 0.97; 95% CI: 0.96-0.99; P < .0001) were all independently associated with death or liver transplantation (C-statistic: derivation cohort, 0.81; 95% CI: 0.79-0.83; validation cohort, 0.82; 95% CI: 0.79-0.84). Patients with risk scores >0.30 had significantly shorter transplant-free survival than matched healthy individuals (P < .0001). The GLOBE score identified patients who would survive for 5 years and 10 years (responders) with positive predictive values of 98% and 88%, respectively. Up to 22% and 21% of events and nonevents, respectively, 10 years after initiation of treatment were correctly reclassified in comparison with earlier proposed criteria. In subgroups of patients aged <45, 45-52, 52-58, 58-66, and ≥66 years, the age-specific GLOBE-score thresholds beyond which survival significantly deviated from matched healthy individuals were -0.52, 0.01, 0.60, 1.01, and 1.69, respectively. Transplant-free survival could still be accurately calculated by the GLOBE score with laboratory values collected at 2-5 years after treatment. We developed and validated a scoring system (the GLOBE score) to predict transplant-free survival of ursodeoxycholic acid-treated patients with PBC. This score might be used to select strategies for treatment and care. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
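To show how the reported hazard ratios translate into a Cox-type risk comparison, the sketch below builds an illustrative linear predictor from them. This is not the published GLOBE score formula, which uses specific variable transformations and coefficients; the variable scalings and patient values here are invented.

```python
import math

# Log hazard ratios from the abstract; the variable scalings below are simplified guesses,
# so this linear predictor is illustrative only.
log_hr = {
    "age_per_year": math.log(1.05),
    "bilirubin": math.log(2.56),
    "albumin": math.log(0.10),
    "alk_phos": math.log(1.40),
    "platelets_per_10": math.log(0.97),
}

def linear_predictor(age, bilirubin, albumin, alk_phos, platelets):
    return (log_hr["age_per_year"] * age
            + log_hr["bilirubin"] * bilirubin
            + log_hr["albumin"] * albumin
            + log_hr["alk_phos"] * alk_phos
            + log_hr["platelets_per_10"] * (platelets / 10))

# Relative hazard between two hypothetical patients (older/cholestatic vs younger/stable):
lp_a = linear_predictor(age=60, bilirubin=1.2, albumin=1.0, alk_phos=1.5, platelets=150)
lp_b = linear_predictor(age=45, bilirubin=0.8, albumin=1.1, alk_phos=1.0, platelets=250)
print(round(math.exp(lp_a - lp_b), 1))
```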
  312. The TG/HDL-C ratio does not predict insulin resistance in overweight women of African descent: a study of South African, African American and West African women.

    PubMed

    Knight, Michael G; Goedecke, Julia H; Ricks, Madia; Evans, Juliet; Levitt, Naomi S; Tulloch-Reid, Marshall K; Sumner, Anne E

    2011-01-01

    Women of African descent have a high prevalence of diseases caused by insulin resistance. To positively impact cardiometabolic health in Black women, effective screening tests for insulin resistance must be identified. Recently, the TG/HDL-C ratio has been recommended as a tool to predict insulin resistance in overweight people. While the ratio predicts insulin resistance in White women, it is ineffective in African American women. As there are no data for African women, we tested the ability of the TG/HDL-C ratio to predict insulin resistance in Black women from South Africa, West Africa and the United States. For comparison, the ratio was also tested in White women from South Africa. Participants were 801 women (157 Black South African, 382 African American, 119 West African, 143 White South African; age 36 ± 9 y [mean ± SD]). Standardized scores were created from log-transformed homeostasis model assessment-insulin resistance values from each population. Participants in the upper third of their population distribution were classified as insulin-resistant. To predict insulin resistance by the TG/HDL-C ratio, the area under the receiver operating characteristic (AUC-ROC) curve was used, with criteria of 0.50 for no discrimination and ≥0.70 for acceptable discrimination. Seventy-one percent of the Black women were overweight vs 51% of White women (P<.01). In overweight White women, the AUC-ROC curve for prediction of insulin resistance by TG/HDL-C was 0.76 ± 0.06, but it was below the 0.70 threshold in each group of overweight Black women (Black South African: 0.64 ± 0.06, African American: 0.66 ± 0.03, and West African: 0.63 ± 0.07). Therefore, TG/HDL-C does not predict insulin resistance in overweight African American women, and this investigation extends that finding to overweight Black South African and West African women. Resources to identify effective markers of insulin resistance are needed to improve cardiometabolic health in women of African descent.

  313. Lazy workers are necessary for long-term sustainability in insect societies

    PubMed Central

    Hasegawa, Eisuke; Ishii, Yasunori; Tada, Koichiro; Kobayashi, Kazuya; Yoshimura, Jin

    2016-01-01

    Optimality theory predicts the maximization of productivity in social insect colonies, but many inactive workers are found in ant colonies. Indeed, the low short-term productivity of ant colonies is often the consequence of high variation among workers in the threshold to respond to task-related stimuli. Why is such an inefficient strategy among colonies maintained by natural selection? Here, we show that inactive workers are necessary for the long-term sustainability of a colony. Our simulation shows that colonies with variable thresholds persist longer than those with invariable thresholds because inactive workers perform the critical function of replacing active workers when they become fatigued. Evidence of the replacement of active workers by inactive workers has been found in ant colonies. Thus, the presence of inactive workers increases the long-term persistence of the colony at the expense of decreasing short-term productivity. Inactive workers may represent a bet-hedging strategy in response to environmental stochasticity. PMID:26880339
  314. Multiple percolation tunneling staircase in metal-semiconductor nanoparticle composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Rupam; Huang, Zhi-Feng; Nadgorny, Boris

    Multiple percolation transitions are observed in a binary system of RuO₂–CaCu₃Ti₄O₁₂ metal-semiconductor nanoparticle composites near percolation thresholds. Apart from a classical percolation transition, associated with the appearance of a continuous conductance path through RuO₂ metal oxide nanoparticles, at least two additional tunneling percolation transitions are detected in this composite system. Such behavior is consistent with the recently emerged picture of a quantum conductivity staircase, which predicts several percolation tunneling thresholds in a system with a hierarchy of local tunneling conductance, due to various degrees of proximity of adjacent conducting particles distributed in an insulating matrix. Here, we investigate a different type of percolation tunneling staircase, associated with a more complex conductive and insulating particle microstructure of two types of non-spherical constituents. As tunneling is strongly temperature dependent, we use variable temperature measurements to emphasize the hierarchical nature of consecutive tunneling transitions. The critical exponents corresponding to specific tunneling percolation thresholds are found to be nonuniversal and temperature dependent.

  315. A Modified Mechanical Threshold Stress Constitutive Model for Austenitic Stainless Steels

    NASA Astrophysics Data System (ADS)

    Prasad, K. Sajun; Gupta, Amit Kumar; Singh, Yashjeet; Singh, Swadesh Kumar

    2016-12-01

    This paper presents a modified mechanical threshold stress (m-MTS) constitutive model. The m-MTS model incorporates variable athermal and dynamic strain aging (DSA) components to accurately predict the flow stress behavior of austenitic stainless steels (ASS) 316 and 304. Under strain rate variations between 0.01 and 0.0001 s⁻¹, uniaxial tensile tests were conducted at temperatures ranging from 50 to 650 °C to evaluate the material constants of the constitutive models. The test results revealed the high dependence of flow stress on strain, strain rate and temperature. In addition, it was observed that DSA occurred at elevated temperatures and very low strain rates, causing an increase in flow stress. While the original MTS model is capable of predicting the flow stress behavior for ASS, statistical parameters point out the inefficiency of the model when compared to other models such as the Johnson-Cook model, the modified Zerilli-Armstrong (m-ZA) model, and modified Arrhenius-type equations (m-Arr). Therefore, in order to accurately model both the DSA and non-DSA regimes, the original MTS model was modified by incorporating variable athermal and DSA components. The suitability of the m-MTS model was assessed by comparing the statistical parameters. It was observed that the m-MTS model was highly accurate for the DSA regime when compared to the existing models. However, models like m-ZA and m-Arr showed better results for the non-DSA regime.
  316. Identifying an optimal cutpoint value for the diagnosis of hypertriglyceridemia in the nonfasting state

    PubMed Central

    White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia

    2015-01-01

    Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women's Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and the Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since the last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491
  317. Experimental Psychological Stress on Quantitative Sensory Testing Response in Patients with Temporomandibular Disorders.

    PubMed

    Araújo Oliveira Ferreira, Dyna Mara; Costa, Yuri Martins; de Quevedo, Henrique Müller; Bonjardim, Leonardo Rigoldi; Rodrigues Conti, Paulo César

    2018-05-15

    To assess the modulatory effects of experimental psychological stress on the somatosensory evaluation of myofascial temporomandibular disorder (TMD) patients. A total of 20 women with myofascial TMD and 20 age-matched healthy women were assessed by means of a standardized battery of quantitative sensory testing. Cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical pain threshold (MPT), wind-up ratio (WUR), and pressure pain threshold (PPT) were performed on the facial skin overlying the masseter muscle. The variables were measured in three sessions: before (baseline) and immediately after the Paced Auditory Serial Addition Task (PASAT) (stress), and then after a washout period of 20 to 30 minutes (poststress). Mixed analysis of variance (ANOVA) was applied to the data, and the significance level was set at P = .050. A significant main effect of the experimental session on all thermal tests was found (ANOVA: F > 4.10, P < .017), where detection tests presented an increase in thresholds in the poststress session compared to baseline (CDT, P = .012; WDT, P = .040) and pain thresholds were reduced in the stress (CPT, P < .001; HPT, P = .001) and poststress sessions (CPT, P = .005; HPT, P = .006) compared to baseline. In addition, a significant main effect of the study group on all mechanical tests (MPT, WUR, and PPT) was found (ANOVA: F > 4.65, P < .037), where TMD patients were more sensitive than healthy volunteers. Acute mental stress conditioning can modulate thermal sensitivity of the skin overlying the masseter in myofascial TMD patients and healthy volunteers. Therefore, psychological stress should be considered in order to perform an unbiased somatosensory assessment of TMD patients.

  318. Effect of predictive sign of acceleration on heart rate variability in passive translation situation: preliminary evidence using visual and vestibular stimuli in VR environment

    PubMed Central

    Watanabe, Hiroshi; Teramoto, Wataru; Umemura, Hiroyuki

    2007-01-01

    Objective We studied the effects of the presentation of a visual sign that warned subjects of acceleration around the yaw and pitch axes in virtual reality (VR) on their heart rate variability. Methods Synchronization of the immersive virtual reality equipment (CAVE) and motion base system generated a driving scene and provided subjects with dynamic and wide-ranging depth information and vestibular input. The heart rate variability of 21 subjects was measured while the subjects observed a simulated driving scene for 16 minutes under three different conditions. Results When the predictive sign of the acceleration appeared 3500 ms before the acceleration, the index of the activity of the autonomic nervous system (low/high frequency ratio; LF/HF ratio) of subjects did not change much, whereas when no sign appeared the LF/HF ratio increased over the observation time. When the predictive sign of the acceleration appeared 750 ms before the acceleration, no systematic change occurred. Conclusion The visual sign which informed subjects of the acceleration affected the activity of the autonomic nervous system when it appeared long enough before the acceleration. Also, our results showed the importance of the interval between the sign and the event and the relationship between the gradual representation of events and their quantity. PMID:17903267
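The LF/HF ratio used as the autonomic index in the record above can be computed from an RR-interval series with a standard resample-and-Welch recipe. The sketch below is one common implementation, not the authors' exact pipeline; the synthetic tachogram and the 4 Hz resampling rate are assumptions.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from RR intervals (ms): LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz."""
    t = np.cumsum(rr_ms) / 1000.0                      # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)            # evenly resampled tachogram
    rr_even = np.interp(grid, t, rr_ms) - np.mean(rr_ms)
    f, pxx = welch(rr_even, fs=fs, nperseg=min(256, len(rr_even)))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf / hf

# Synthetic tachogram with a 0.25 Hz respiratory (HF) modulation, for demonstration only.
rng = np.random.default_rng(2)
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * 0.8 * np.arange(600)) + 5 * rng.standard_normal(600)
print(round(lf_hf_ratio(rr), 2))
```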
  319. Is procalcitonin-guided antimicrobial use cost-effective in adult patients with suspected bacterial infection and sepsis?

    PubMed

    Harrison, Michelle; Collins, Curtis D

    2015-03-01

    Procalcitonin has emerged as a promising biomarker of bacterial infection. Published literature demonstrates that use of procalcitonin testing and an associated treatment pathway reduces the duration of antibiotic therapy without impacting mortality. The objective of this study was to determine the financial impact of utilizing a procalcitonin-guided treatment algorithm in hospitalized patients with sepsis. Cost-minimization and cost-utility analyses were performed on a hypothetical cohort of adult ICU patients with suspected bacterial infection and sepsis. Utilizing published clinical and economic data, a decision analytic model was developed from the U.S. hospital perspective. Effectiveness and utility measures were defined using cost per clinical episode and cost per quality-adjusted life year (QALY). Upper and lower sensitivity ranges were determined for all inputs. Univariate and probabilistic sensitivity analyses assessed the robustness of our model and variables. Incremental cost-effectiveness ratios (ICERs) were calculated and compared to predetermined willingness-to-pay thresholds. Base-case results predicted that use of a procalcitonin-guided treatment algorithm dominated standard care, with improved quality (0.0002 QALYs) and decreased overall treatment costs ($65). The model was sensitive to a number of key variables that had the potential to impact results, including algorithm adherence (<42.3%), number and cost of procalcitonin tests ordered (≥9 and >$46), days of antimicrobial reduction (<1.6 d), incidence of nephrotoxicity, and rate of nephrotoxicity reduction. The combination of procalcitonin testing with an evidence-based treatment algorithm may improve patients' quality of life while decreasing costs in ICU patients with suspected bacterial infection and sepsis; however, results were highly dependent on a number of variables and assumptions.
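A minimal sketch of the incremental cost-effectiveness arithmetic behind the base-case result above. The absolute cost and QALY levels are invented; only the $65 saving and 0.0002 QALY gain echo the abstract.

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio ($ per QALY gained)."""
    d_cost, d_qaly = cost_new - cost_std, qaly_new - qaly_std
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant (cheaper and at least as effective)"
    return d_cost / d_qaly

# Hypothetical per-patient totals that echo the base case above: the guided strategy
# saves $65 and gains 0.0002 QALYs, so it dominates standard care.
print(icer(cost_new=9935, cost_std=10000, qaly_new=0.6702, qaly_std=0.6700))
```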
  320. Constraints on models for the Higgs boson with exotic spin and parity

    NASA Astrophysics Data System (ADS)

    Johnson, Emily Hannah

    The production of a Higgs boson in association with a vector boson at the Tevatron offers a unique opportunity to study models for the Higgs boson with exotic spin J and parity P assignments. At the Tevatron the VH system is produced near threshold. Different J^P assignments of the Higgs boson can be distinguished by examining the behavior of the cross section near threshold. The relatively low backgrounds at the Tevatron compared to the LHC put us in a unique position to study the direct decay of the Higgs boson to fermions. If the Higgs sector is more complex than predicted, studying the spin and parity of the Higgs boson in all decay modes is important. In this Thesis we examine the WH → ℓνbb̄ production and decay mode using 9.7 fb⁻¹ of data collected by the D0 experiment in an attempt to derive constraints on models containing exotic values for the spin and parity of the Higgs boson. In particular, we examine models for a Higgs boson with J^P = 0⁻ and J^P = 2⁺. We use a likelihood ratio to quantify the degree to which our data are incompatible with exotic J^P predictions for a range of possible production rates. Assuming the production cross section times branching ratio of the signals in the models considered is equal to the standard model prediction, the WH → ℓνbb̄ mode alone is unable to reject either exotic model considered. We also discuss the combination of the ZH → ℓℓbb̄, WH → ℓνbb̄, and VH → νν̄bb̄ production modes at the D0 experiment and with the CDF experiment. When combining all three production modes at the D0 experiment, we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses at the 97.6% CL and the 99.0% CL, respectively, assuming the signal production cross section times branching ratio is equal to the standard model predicted value. When combining with the CDF experiment, we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses with significances of 5.0 and 4.9 standard deviations, respectively.
  321. Usefulness of the 6-minute walk test as a screening test for pulmonary arterial enlargement in COPD

    PubMed Central

    Oki, Yutaro; Kaneko, Masahiro; Fujimoto, Yukari; Sakai, Hideki; Misu, Shogo; Mitani, Yuji; Yamaguchi, Takumi; Yasuda, Hisafumi; Ishikawa, Akira

    2016-01-01

    Purpose Pulmonary hypertension and exercise-induced oxygen desaturation (EID) influence acute exacerbation of COPD. Computed tomography (CT)-detected pulmonary artery (PA) enlargement is independently associated with acute COPD exacerbations. Associations between the PA to aorta (PA:A) ratio and EID in patients with COPD have not been reported. We hypothesized that the PA:A ratio correlated with EID and that results of the 6-minute walk test (6MWT) would be useful for predicting the risk associated with PA:A >1. Patients and methods We retrospectively measured lung function, 6MWT, emphysema area, and PA enlargement on CT in 64 patients with COPD. The patients were classified into groups with PA:A ≤1 and >1. Receiver-operating characteristic curves were used to determine the threshold values with the best cutoff points to predict patients with PA:A >1. Results The PA:A >1 group had lower forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), FEV1:FVC ratio, diffusion capacity of lung carbon monoxide, 6MW distance, baseline peripheral oxygen saturation (SpO2), and lowest SpO2, together with higher modified Borg scale results, percentage low-attenuation area, and frequency of acute COPD exacerbations within the previous year, and worse BODE (Body mass index, airflow Obstruction, Dyspnea, and Exercise) index results (P<0.05). Predicted PA:A >1 was determined for SpO2 during 6MWT (best cutoff point 89%, area under the curve 0.94, 95% confidence interval 0.88–1). SpO2 <90% during 6MWT showed a sensitivity of 93.1, specificity of 94.3, positive predictive value of 93.1, negative predictive value of 94.3, positive likelihood ratio of 16.2, and negative likelihood ratio of 0.07. Conclusion Lowest SpO2 during 6MWT may predict CT-measured PA:A, and lowest SpO2 <89% during 6MWT is excellent for detecting pulmonary hypertension in COPD. PMID:27920514

  322. Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al₂O₃ Catalyst Using Response Surface Methodology (RSM).

    PubMed

    Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed

    2014-03-19

    In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X₁), the flow rate (X₂), the catalyst weight (X₃), the catalyst loading (X₄), and the glycerol-water molar ratio (X₅), on the H₂ yield (Y₁) and the conversion of glycerol to gaseous products (Y₂) were explored. Using multiple regression analysis, the experimental results for the H₂ yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits that were examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20%, and a glycerol-water molar ratio of approximately 12, where the H₂ yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied.
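The quadratic response-surface fit described in the hydrogen-production record above can be sketched with a generic polynomial regression. The design points, yields, and noise below are synthetic stand-ins, not the experimental data, so the predicted optimum will not match the reported 57.6% yield.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic design: X1 temperature (°C), X2 flow (mL/min), X3 catalyst weight (g),
# X4 catalyst loading (%), X5 glycerol-water molar ratio; y is a made-up H2 yield (%).
rng = np.random.default_rng(3)
X = rng.uniform([450, 0.03, 0.1, 10, 6], [650, 0.07, 0.3, 30, 12], size=(40, 5))
y = (30 + 0.05 * X[:, 0] - 2e-4 * (X[:, 0] - 600) ** 2
     + 1.5 * X[:, 3] - 0.04 * X[:, 3] ** 2 + rng.normal(0, 1.0, 40))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)
print("Predicted yield at (600 °C, 0.05 mL/min, 0.2 g, 20%, ratio 12):",
      round(rsm.predict([[600, 0.05, 0.2, 20, 12]])[0], 1), "%")
```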
  323. Differentiation of Benign and Malignant Head and Neck Lesions With Diffusion Tensor Imaging and DWI.

    PubMed

    Koontz, Nicholas A; Wiggins, Richard H

    2017-05-01

    The purpose of this study was to determine whether diffusion tensor imaging (DTI) can be used to differentiate between benign and malignant head and neck lesions. This retrospective study included patients with head and neck lesions who underwent clinical MRI at 1.5 or 3 T with DWI or DTI parameters. ROI analysis was performed, with lesion-to-medulla apparent diffusion coefficient (ADC) ratios generated. Sixty-five patients with head and neck lesions were included (71 benign, 40 malignant). Twenty-one patients had multiple lesions. Statistically significant differences (p < 0.001) were seen in the mean ADC values ± SD of malignant and benign lesions (0.55 × 10⁻³ ± 0.14 × 10⁻³ mm²/s vs 0.89 × 10⁻³ ± 0.29 × 10⁻³ mm²/s, respectively) and in the mean ADC ratios of malignant and benign lesions (0.88 ± 0.21 vs 1.40 ± 0.44, respectively) with DTI parameters. DTI and DWI parameters produced similar mean ADC ratio values for malignant (0.88 ± 0.21 and 0.92 ± 0.54, respectively) and benign lesions (1.40 ± 0.44 and 1.79 ± 0.52, respectively). ADC ratio thresholds for predicting malignancy for DTI (ADC ratio ≤ 1) and DWI (ADC ratio ≤ 0.94) were also similar. DTI is a useful predictor of malignancy for head and neck lesions, with ADC values of malignant lesions significantly lower than those of benign lesions. DTI ADC values were lower than DWI ADC values for all head and neck lesions in our study group, often below reported malignant DWI threshold values. Normalization of ADC values to an internal control resulted in similar ADC ratios on DWI and DTI.
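A minimal sketch of the ADC-ratio normalization used in the record above, assuming DTI-derived ADC values in mm²/s; the lesion and medulla values are invented, and only the ≤1.0 (DTI) and ≤0.94 (DWI) cutoffs come from the abstract.

```python
DTI_MALIGNANT_CUTOFF = 1.00   # lesion ADC / medulla ADC, from the abstract
DWI_MALIGNANT_CUTOFF = 0.94

def adc_ratio(lesion_adc, medulla_adc):
    """Normalize the lesion ADC to the same patient's medulla ADC (internal control)."""
    return lesion_adc / medulla_adc

lesion, medulla = 0.58e-3, 0.72e-3   # mm^2/s, hypothetical DTI measurements
ratio = adc_ratio(lesion, medulla)
print(round(ratio, 2),
      "suspicious for malignancy" if ratio <= DTI_MALIGNANT_CUTOFF else "favors benign")
```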
  324. Pre-procedural bioimpedance vectorial analysis of fluid status and prediction of contrast-induced acute kidney injury.

    PubMed

    Maioli, Mauro; Toso, Anna; Leoncini, Mario; Musilli, Nicola; Bellandi, Francesco; Rosner, Mitchell H; McCullough, Peter A; Ronco, Claudio

    2014-04-15

    The aim of this study was to evaluate the relationship between pre-procedural fluid status assessed by bioimpedance vector analysis (BIVA) and the development of contrast-induced acute kidney injury (CI-AKI). Accurate fluid management in patients undergoing angiographic procedures is of critical importance in limiting the risk of CI-AKI. Therefore, establishing the peri-procedural fluid volume related to increased risk of CI-AKI development is essential. We evaluated the fluid status by BIVA of 900 consecutive patients with stable coronary artery disease (CAD) immediately before coronary angiography, measuring the resistance/height (R/H) ratio and impedance/height (Z/H) vector. CI-AKI was defined as an increase in serum creatinine ≥0.5 mg/dl above baseline within 3 days after contrast administration (iodixanol). CI-AKI occurred in 54 patients (6.0%). Pre-procedural R/H ratios were significantly higher in patients with CI-AKI than in those without CI-AKI (395 ± 71 Ohm/m vs. 352 ± 58 Ohm/m, p = 0.001 for women; 303 ± 59 Ohm/m vs. 279 ± 45 Ohm/m, p = 0.009 for men), indicating lower fluid volume in the patients with CI-AKI. When patients were stratified according to R/H ratio, there was an almost 3-fold higher risk in patients with higher values (odds ratio [OR]: 2.9; 95% confidence interval [CI]: 1.5 to 5.5; p = 0.002). The optimal receiver-operating characteristic curve threshold values of the R/H ratio for predicting CI-AKI were 380 Ohm/m for women and 315 Ohm/m for men. An R/H ratio above these thresholds was found to be a significant and independent predictor of CI-AKI (OR: 3.1; 95% CI: 1.8 to 5.5; p = 0.001). Lower fluid status evaluated by BIVA immediately before contrast medium administration was a significant and independent predictor of CI-AKI in patients with stable CAD. This simple noninvasive analysis should be tested in guiding tailored volume repletion. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
  325. Do physiological measures predict selected CrossFit® benchmark performance?

    PubMed

    Butcher, Scotty J; Neyedly, Tyler J; Horvey, Karla J; Benko, Chad R

    2015-01-01

    CrossFit® is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit "Workouts of the Day" (WODs). Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs "Grace" (30 clean and jerks for time), "Fran" (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and "Cindy" (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the "CrossFit Total" (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Performance of Grace and Fran was related to whole-body strength (CrossFit Total) (r = -0.88 and -0.65, respectively) and anaerobic threshold (r = -0.61 and -0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R² = 0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited association with performance. Along with their typical training, CrossFit athletes should likely ensure an adequate level of strength and aerobic endurance to optimize performance on at least some benchmark WODs.

  326. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is very essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e., working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
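The parsimonious meteorological ANN described above can be sketched with a small multilayer perceptron. Everything below (inputs, ranges, the synthetic ozone relationship, network size) is an assumption for illustration; the real model was trained on Jinan monitoring data with forward-selected variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 800
X = np.column_stack([
    rng.normal(25, 8, n),     # maximum temperature (°C)
    rng.normal(1010, 8, n),   # atmospheric pressure (hPa)
    rng.uniform(0, 12, n),    # sunshine duration (h)
    rng.uniform(0, 15, n),    # maximum wind speed (m/s)
    rng.uniform(20, 90, n),   # relative humidity (%)
    rng.uniform(0, 10, n),    # cloud cover
    rng.integers(0, 3, n),    # day category: 0 workday, 1 holiday, 2 weekend
])
ozone = 40 + 2.5 * X[:, 0] + 3.0 * X[:, 2] - 2.0 * X[:, 3] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))
```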
  327. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3 sec, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3 sec, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.

  328. The effect of threshold amounts for reporting malpractice payments to the National Practitioner Data Bank: analysis using the closed claims data base of the Office of the Assistant Secretary of Defense (Health Affairs).

    PubMed

    Metter, E J; Granville, R L; Kussman, M J

    1997-04-01

    The study determines the extent to which payment thresholds for reporting malpractice claims to the National Practitioner Data Bank identify substandard health care delivery in the Department of Defense. Relevant data were available on 2,291 of 2,576 medical malpractice claims reported to the closed medical malpractice case data base of the Office of the Assistant Secretary of Defense (Health Affairs). Amount paid was analyzed as a diagnostic test using the standard of care assessment from each military Surgeon General's office as the criterion. Using different paid threshold amounts per claim as a positive test, the sensitivity for identifying substandard care declined from 0.69 for all paid cases to 0.41 for claims over $40,000. Specificity increased from 0.75 for all paid claims to 0.89 for claims over $40,000. Positive and negative predictive values and likelihood ratios were similar at all thresholds. Malpractice case payment was of limited value for identifying substandard medical practice. All paid claims missed about 30% of substandard care, and reported about 25% of acceptable medical practice.
  330. Evaporative sodium salt crust development and its wind tunnel derived transport dynamics under variable climatic conditions

    NASA Astrophysics Data System (ADS)

    Nield, Joanna M.; McKenna Neuman, Cheryl; O'Brien, Patrick; Bryant, Robert G.; Wiggs, Giles F. S.

    2016-12-01

    Playas (or ephemeral lakes) can be significant sources of dust, but they are typically covered by salt crusts of variable mineralogy, and these introduce uncertainty into dust emission predictions. Despite the importance of crust mineralogy to emission potential, little is known about (i) the effect of short-term changes in temperature and relative humidity on the erodibility of these crusts, and (ii) the influence of crust degradation and mineralogy on the wind speed threshold for dust emission. Our understanding of systems where emission is not driven by impacts from saltators is particularly poor. This paper describes a wind tunnel study in which dust emission in the absence of saltating particles was measured for a suite of climatic conditions and salt crust types commonly found on Sua Pan, Botswana. The crusts were found to be non-emissive under climate conditions characteristic of dawn and early morning, as compared to hot and dry daytime conditions when the wind speed threshold for dust emission appears to be highly variable, depending upon salt crust physicochemistry. Significantly, sodium sulphate rich crusts were found to be more emissive than crusts formed from sodium chloride, while degraded versions of both crusts had a lower emission threshold than fresh, continuous crusts. The results from this study are in agreement with in-situ field measurements and confirm that dust emission from salt-crusted surfaces can occur without saltation, although the vertical fluxes are orders of magnitude lower (∼10 μg/m/s) than for aeolian systems where entrainment is driven by particle impact.
  331. Measurement of partonic nuclear effects in deep-inelastic neutrino scattering using MINERvA

    NASA Astrophysics Data System (ADS)

    Mousseau, J.; Wospakrik, M.; Aliaga, L.; Altinok, O.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Cai, T.; Carneiro, M. F.; Christy, M. E.; Chvojka, J.; da Motta, H.; Devan, J.; Dytman, S. A.; Díaz, G. A.; Eberly, B.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Gallagher, H.; Ghosh, A.; Golan, T.; Gran, R.; Harris, D. A.; Higuera, A.; Hurtado, K.; Kiveni, M.; Kleykamp, J.; Kordosky, M.; Le, T.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; Martinez Caicedo, D. A.; McFarland, K. S.; McGivern, C. L.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Naples, D.; Nelson, J. K.; Norrick, A.; Nuruzzaman; Osta, J.; Paolone, V.; Park, J.; Patrick, C. E.; Perdue, G. N.; Rakotondravohitra, L.; Ramirez, M. A.; Ransome, R. D.; Ray, H.; Ren, L.; Rimal, D.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Schmitz, D. W.; Solano Salinas, C. J.; Tagg, N.; Tice, B. G.; Valencia, E.; Walton, T.; Wolcott, J.; Zavala, G.; Zhang, D.; MINERvA Collaboration

    2016-04-01

    The MINERvA Collaboration reports a novel study of neutrino-nucleus charged-current deep inelastic scattering (DIS) using the same neutrino beam incident on targets of polystyrene, graphite, iron, and lead. Results are presented as ratios of C, Fe, and Pb to CH. The ratios of total DIS cross sections as a function of neutrino energy and flux-integrated differential cross sections as a function of the Bjorken scaling variable x are presented in the neutrino-energy range of 5-50 GeV. Based on the predictions of charged-lepton scattering ratios, good agreement is found between the data and prediction at medium x and low neutrino energy. However, the ratios appear to be below predictions in the vicinity of the nuclear shadowing region, x < 0.1. This apparent deficit, reflected in the DIS cross-section ratio at high Eν, is consistent with previous MINERvA observations [B. Tice et al. (MINERvA Collaboration), Phys. Rev. Lett. 112, 231801 (2014)] and with the predicted onset of nuclear shadowing with the axial-vector current in neutrino scattering.
  332. Measurement of partonic nuclear effects in deep-inelastic neutrino scattering using MINERvA

    DOE PAGES

    Mousseau, J.

    2016-04-19

    Here, the MINERvA Collaboration reports a novel study of neutrino-nucleus charged-current deep inelastic scattering (DIS) using the same neutrino beam incident on targets of polystyrene, graphite, iron, and lead. Results are presented as ratios of C, Fe, and Pb to CH. The ratios of total DIS cross sections as a function of neutrino energy and flux-integrated differential cross sections as a function of the Bjorken scaling variable x are presented in the neutrino-energy range of 5-50 GeV. Based on the predictions of charged-lepton scattering ratios, good agreement is found between the data and prediction at medium x and low neutrino energy. However, the ratios appear to be below predictions in the vicinity of the nuclear shadowing region, x < 0.1. This apparent deficit, reflected in the DIS cross-section ratio at high Eν, is consistent with previous MINERvA observations [B. Tice (MINERvA Collaboration), Phys. Rev. Lett. 112, 231801 (2014)] and with the predicted onset of nuclear shadowing with the axial-vector current in neutrino scattering.

  333. A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction

    NASA Technical Reports Server (NTRS)

    Smith, Samantha A.; DelGenio, Anthony D.

    1998-01-01

    A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but as cloud fraction decreased they changed to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It requires only basic meteorological parameters, the depth of the saturated layer, and the standard deviation of cloud depth as input.
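As a rough illustration of the cloud-depth idea in the cirrus record above, the Monte Carlo sketch below draws Gaussian cloud depths, treats non-positive depths as cloud-free, and assumes (my simplification) that the in-cloud ice mixing ratio scales linearly with depth:

```python
import numpy as np

def cirrus_depth_model(mean_depth, sigma_depth, n=100_000, q_per_m=1e-6, seed=0):
    """Monte Carlo sketch: cloud depth is Gaussian, columns with non-positive depth
    are cloud-free, and (simplifying assumption) ice water mixing ratio scales
    linearly with depth via q_per_m (kg/kg per metre of cloud depth)."""
    rng = np.random.default_rng(seed)
    depth = rng.normal(mean_depth, sigma_depth, n)
    cloudy = depth > 0.0
    cloud_fraction = cloudy.mean()
    q_ice = q_per_m * depth[cloudy]          # in-cloud ice mixing ratio (kg/kg)
    return cloud_fraction, q_ice

cf, q = cirrus_depth_model(mean_depth=500.0, sigma_depth=400.0)
print(f"cloud fraction ~ {cf:.2f}, mean in-cloud q_ice ~ {q.mean():.2e} kg/kg")
# An overcast case (small relative depth variability) gives a near-Gaussian q_ice
# histogram, while partially cloudy cases skew toward the smallest non-zero values.
```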
  334. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    PubMed

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of ±10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth test sessions and their first test session were also examined to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift ≥20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within ±10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.
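The two shift criteria quoted in the record above can be checked mechanically; the sketch below implements only those two rules as stated in the abstract (the full ASHA 1994 criteria include further conditions not reproduced here), with illustrative threshold values:

```python
def significant_shift(baseline, followup):
    """Flag a significant threshold shift using the two criteria quoted in the
    abstract: >=20 dB at any single frequency, or >10 dB at two consecutive test
    frequencies. Inputs are dB HL thresholds at the same ordered frequencies."""
    shifts = [f - b for b, f in zip(baseline, followup)]
    any_20 = any(s >= 20 for s in shifts)
    consec_10 = any(s1 > 10 and s2 > 10 for s1, s2 in zip(shifts, shifts[1:]))
    return any_20 or consec_10

# 8, 9, 10, 11.2, 12.5, 14, 16 kHz thresholds in dB HL (illustrative values only)
baseline = [10, 10, 15, 20, 25, 35, 45]
followup = [15, 10, 20, 20, 30, 50, 57]
print(significant_shift(baseline, followup))
# True via the two-consecutive-frequencies rule (shifts of 15 and 12 dB at 14 and 16 kHz)
```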
  335. Comparison of Various Anthropometric Indices as Risk Factors for Hearing Impairment in Asian Women

    PubMed Central

    Lee, Kyu Yup; Choi, Eun Woo; Do, Jun Young

    2015-01-01

    Background The objective of the present study was to examine the associations between various anthropometric measures and metabolic syndrome and hearing impairment in Asian women. Methods We identified 11,755 women who underwent voluntary routine health checkups at Yeungnam University Hospital between June 2008 and April 2014. Among these patients, 2,485 participants were <40 years old, and 1,072 participants lacked information regarding their laboratory findings or hearing and were therefore excluded. In total, 8,198 participants were recruited into our study. Results The AUROC value for metabolic syndrome was 0.790 for the waist-to-hip ratio (WHR). The cutoff value was 0.939. The sensitivity and specificity for predicting metabolic syndrome were 72.7% and 71.7%, respectively. The AUROC value for hearing loss was 0.758 for WHR. The cutoff value was 0.932. The sensitivity and specificity for predicting hearing loss were 65.8% and 73.4%, respectively. The WHR had the highest AUC and was the best predictor of metabolic syndrome and hearing loss. Univariate and multivariate linear regression analyses showed that WHR levels were positively associated with four hearing thresholds, including the averaged hearing threshold and the low, middle, and high frequency thresholds. In addition, multivariate logistic analysis revealed that those with a high WHR had a 1.347-fold increased risk of hearing loss compared with participants with a low WHR. Conclusion Our results demonstrated that WHR may be a surrogate marker for predicting the risk of hearing loss resulting from metabolic syndrome. PMID:26575369

  336. Use of milk fatty acids to estimate plasma nonesterified fatty acid concentrations as an indicator of animal energy balance.

    PubMed

    Dórea, J R R; French, E A; Armentano, L E

    2017-08-01

    Negative energy balance is an important part of the lactation cycle, and measuring the current energy balance of a cow is useful in both applied and research settings. The objectives of this study were (1) to determine if milk fatty acid (FA) proportions were consistently related to plasma nonesterified fatty acids (NEFA); (2) to determine if an individual cow with a measured milk FA profile is above or below a given NEFA concentration; and (3) to test the universality of the models developed with the University of Wisconsin and US Dairy Forage Research Center cows. Blood samples were collected on the same day as milk sampling from 105 Holstein cows from 3 studies. Plasma NEFA was quantified, and a threshold of 600 µEq/L was applied to classify animals above this concentration as having high NEFA (NEFA_high). Thirty milk FA proportions and 4 milk FA ratios were screened to evaluate their capacity to classify cows as NEFA_high according to a determined milk FA threshold. In addition, 6 linear regression models were created using individual milk FA proportions and ratios. To evaluate the universality of the linear relationship between milk FA and plasma NEFA found in the internal data set, 90 treatment means from 21 papers published in the literature were compiled to test the model predictions. Of the 30 screened milk FA, the odd short-chain fatty acids (C7:0, C9:0, C11:0, and C13:0) had sensitivity slightly greater than the other short-chain fatty acids (83.3, 94.8, 80.0, and 85.9%, respectively). The sensitivities for milk FA C6:0, C8:0, C10:0, and C12:0 were 78.8, 85.3, 80.1, and 83.9%, respectively. The threshold values to detect NEFA_high cows for the latter group of milk FA were ≤2.0, ≤0.94, ≤1.4, and ≤1.8 g/100 g of FA, respectively. The milk FA C14:0 and C15:0 had sensitivities of 88.7 and 85.0% and thresholds of ≤6.8 and ≤0.53 g/100 g of FA, respectively. The linear regressions using the milk FA ratios C18:1 to C15:0 and C17:0 to C15:0 presented lower root mean square error (RMSE = 191 and 179 µEq/L, respectively) in comparison with individual milk FA proportions (RMSE = 194 µEq/L), the C18:1 to even short-medium-chain fatty acid (C4:0-C12:0) ratio (RMSE = 220 µEq/L), and C18:1 to C14:0 (RMSE = 199 µEq/L). Models using the milk FA ratios C18:1 to C15:0 and C17:0 to C15:0 had a better fit with the external data set in comparison with the other models. Plasma NEFA can be predicted by linear regression models using milk FA ratios. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
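Using the per-fatty-acid thresholds quoted in the milk FA record above, a screening rule can be sketched as follows; combining the individual cut-offs with a single any-below rule is my simplification, since the study evaluated each fatty acid separately:

```python
def flag_nefa_high(milk_fa, thresholds):
    """Screen a cow as NEFA_high when any measured milk FA proportion falls at or
    below its reported cut-off (g/100 g FA). The cut-offs below are the ones quoted
    in the abstract for C6:0-C14:0; the any-below combination rule is an assumption."""
    return any(milk_fa[fa] <= cut for fa, cut in thresholds.items() if fa in milk_fa)

thresholds = {"C6:0": 2.0, "C8:0": 0.94, "C10:0": 1.4, "C12:0": 1.8, "C14:0": 6.8}
cow = {"C6:0": 2.3, "C8:0": 1.1, "C10:0": 1.2, "C12:0": 2.0, "C14:0": 7.5}
print(flag_nefa_high(cow, thresholds))   # True: C10:0 of 1.2 is at/below its 1.4 cut-off
```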
  337. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of the ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or over the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
  338. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers, such as disease-specific antibodies, which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate, where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new-generation 13-valent pneumococcal conjugate vaccine, which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model, which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of the relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets, supporting the relevance of the estimated thresholds as indicators of strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method for estimating thresholds that differentiate susceptible from protected individuals, an estimation that has previously depended on putative statements based on visual inspection of data. PMID:23448322
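The a:b model described above lends itself to a brief profile-likelihood sketch: scan candidate thresholds, fit constant infection probabilities below and above each, and keep the threshold with the largest binomial log-likelihood. Bootstrap confidence intervals and the modified likelihood ratio test are omitted, and the toy data are simulated:

```python
import numpy as np

def fit_ab_model(titre, infected):
    """Profile-likelihood fit of the a:b idea sketched in the abstract: a threshold tau
    with constant infection probabilities below (a) and above (b) it.
    Returns (tau_hat, a_hat, b_hat, max_log_likelihood)."""
    titre, infected = np.asarray(titre, float), np.asarray(infected, int)
    best = (None, None, None, -np.inf)
    for tau in np.unique(titre):
        lo, hi = infected[titre < tau], infected[titre >= tau]
        if len(lo) == 0 or len(hi) == 0:
            continue
        a, b = lo.mean(), hi.mean()
        ll = 0.0
        for grp, p in ((lo, a), (hi, b)):
            p = min(max(p, 1e-12), 1 - 1e-12)        # guard against log(0)
            ll += grp.sum() * np.log(p) + (len(grp) - grp.sum()) * np.log(1 - p)
        if ll > best[3]:
            best = (tau, a, b, ll)
    return best

rng = np.random.default_rng(2)
titre = rng.lognormal(1.0, 0.8, 300)
infected = (rng.random(300) < np.where(titre < 4.0, 0.4, 0.1)).astype(int)
print(fit_ab_model(titre, infected)[:3])    # threshold near 4 with a ~0.4, b ~0.1
```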
  339. Near-infrared diffuse optical monitoring of cerebral blood flow and oxygenation for the prediction of vasovagal syncope

    NASA Astrophysics Data System (ADS)

    Cheng, Ran; Shang, Yu; Wang, Siqi; Evans, Joyce M.; Rayapati, Abner; Randall, David C.; Yu, Guoqiang

    2014-01-01

    Significant drops in arterial blood pressure and cerebral hemodynamics have previously been observed during vasovagal syncope (VVS). Continuous and simultaneous monitoring of these physiological variables during VVS is rare, but critical for determining which variable is the most sensitive parameter for predicting VVS. The present study used a novel custom-designed diffuse correlation spectroscopy flow-oximeter and a finger plethysmograph to simultaneously monitor relative changes of cerebral blood flow (rCBF), cerebral oxygenation (i.e., oxygenated/deoxygenated/total hemoglobin concentration: r[HbO2]/r[Hb]/rTHC), and mean arterial pressure (rMAP) during 70 deg head-up tilt (HUT) in 14 healthy adults. Six subjects developed presyncope during HUT. Two-stage physiological responses during HUT were observed in the presyncopal group: slow and small changes in the measured variables (Stage I), followed by rapid and dramatic decreases in rMAP, rCBF, r[HbO2], and rTHC (Stage II). Compared to the other physiological variables, rCBF reached its breakpoint between the two stages earliest and had the largest decrease (76±8%) during presyncope. Our results suggest that rCBF has the best sensitivity for the assessment of VVS. Most importantly, a threshold of ∼50% rCBF decline completely separated the subjects with presyncope from those without, suggesting its potential for predicting VVS.

  340. Threshold and channel interaction in cochlear implant users: evaluation of the tripolar electrode configuration.

    PubMed

    Bierer, Julie Arenberg

    2007-03-01

    The efficacy of cochlear implants is limited by spatial and temporal interactions among channels. This study explores the spatially restricted tripolar electrode configuration and compares it to bipolar and monopolar stimulation. Measures of threshold and channel interaction were obtained from nine subjects implanted with the Clarion HiFocus-I electrode array. Stimuli were biphasic pulses delivered at 1020 pulses/s. Threshold increased from monopolar to bipolar to tripolar stimulation and was most variable across channels with the tripolar configuration. Channel interaction, quantified by the shift in threshold between single- and two-channel stimulation, occurred for all three configurations but was largest for the monopolar and simultaneous conditions. The threshold shifts with simultaneous tripolar stimulation were slightly smaller than with bipolar stimulation and were not as strongly affected by the timing of the two-channel stimulation as with monopolar stimulation. The subjects' performance on clinical speech tests was correlated with channel-to-channel variability in tripolar threshold, such that greater variability was related to poorer performance. The data suggest that tripolar channels with high thresholds may reveal cochlear regions of low neuron survival or poor electrode placement.
  341. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    NASA Astrophysics Data System (ADS)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the prediction performance of numerical simulations of heat transfer to supercritical-pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for heat transfer deterioration conditions. Based on these analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number. The model was evaluated using experimental heat transfer data for CO2, water and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model are quite different from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile, and the restrained turbulent mixing then leads to the deteriorated heat transfer.
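The record states that Prt was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number but does not reproduce the correlation, so the sketch below wires a placeholder closure of that general shape into a gradient-diffusion heat-flux calculation; the functional form and the numbers are illustrative only:

```python
def turbulent_prandtl(visc_ratio, pr_molecular):
    """Illustrative variable-Pr_t closure: Pr_t grows as the turbulent viscosity ratio
    (nu_t/nu) becomes small and tends to a constant in fully turbulent flow.
    This functional form is a placeholder, not the correlation developed in the paper."""
    pe_t = visc_ratio * pr_molecular          # turbulent Peclet-like parameter
    return 0.85 + 0.7 / max(pe_t, 1e-6)

def turbulent_heat_flux(rho, cp, nu_t, nu, pr, dT_dy):
    """Gradient-diffusion heat flux q_t = -rho * cp * (nu_t / Pr_t) * dT/dy (W/m^2)
    with the variable Pr_t closure above."""
    pr_t = turbulent_prandtl(nu_t / nu, pr)
    return -rho * cp * (nu_t / pr_t) * dT_dy

# supercritical-CO2-like magnitudes near the pseudo-critical point (illustrative only)
print(turbulent_heat_flux(rho=200.0, cp=20e3, nu_t=5e-6, nu=1e-7, pr=3.0, dT_dy=50.0))
```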
  342. Coral bleaching pathways under the control of regional temperature variability

    NASA Astrophysics Data System (ADS)

    Langlais, C. E.; Lenton, A.; Heron, S. F.; Evenhuis, C.; Sen Gupta, A.; Brown, J. N.; Kuchinke, M.

    2017-11-01

    Increasing sea surface temperatures (SSTs) are predicted to adversely impact coral populations worldwide through increasing thermal bleaching events. Future bleaching is unlikely to be spatially uniform, so understanding what determines regional differences will be critical for adaptation management. Here, using a cumulative heat stress metric, we show that the characteristics of regional SST determine future bleaching risk patterns. Incorporating observed information on SST variability in assessing future bleaching risk provides novel options for management strategies. As a consequence, the known biases in climate model variability and the uncertainties in regional warming rate across climate models are less detrimental than previously thought. We also show that the thresholds used to indicate reef viability can strongly influence a decision on what constitutes a potential refugium. Observing and understanding the drivers of regional variability, and the viability limits of coral reefs, is therefore critical for making meaningful projections of coral bleaching risk.

  343. Predicting Use of Nurse Care Coordination by Older Adults With Chronic Conditions.

    PubMed

    Vanderboom, Catherine E; Holland, Diane E; Mandrekar, Jay; Lohse, Christine M; Witwer, Stephanie G; Hunt, Vicki L

    2017-07-01

    To be effective, nurse care coordination must be targeted at individuals who will use the service. The purpose of this study was to identify variables that predicted use of care coordination by primary care patients. Data on the potential predictor variables were obtained from patient interviews, the electronic health record, and an administrative database of 178 adults eligible for care coordination. Use of care coordination was obtained from an administrative database. A multivariable logistic regression model was developed using a bootstrap sampling approach. Variables predicting use of care coordination were dependence in both activities of daily living (ADL) and instrumental activities of daily living (IADL; odds ratio [OR] = 5.30, p = .002), independence for ADL but dependence for IADL (OR = 2.68, p = .01), and the number of prescription medications (OR = 1.12, p = .002). Consideration of these variables may improve identification of patients to target for care coordination.
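With the published odds ratios, a logistic-model sketch for the probability of using care coordination looks like the following; the intercept is hypothetical because it is not reported in this record:

```python
import math

def care_coordination_probability(adl_iadl_dependent, iadl_only_dependent, n_meds,
                                  intercept=-2.0):
    """Logistic-model sketch using the published odds ratios (ADL+IADL dependence
    OR 5.30, IADL-only dependence OR 2.68, OR 1.12 per prescription medication).
    The intercept of -2.0 is hypothetical; the record does not report one."""
    logit = (intercept
             + math.log(5.30) * adl_iadl_dependent    # dependent in both ADL and IADL
             + math.log(2.68) * iadl_only_dependent   # independent ADL, dependent IADL
             + math.log(1.12) * n_meds)               # number of prescription medications
    return 1.0 / (1.0 + math.exp(-logit))

# e.g. a patient dependent in both ADL and IADL who takes 8 medications
print(round(care_coordination_probability(1, 0, 8), 2))
```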
  344. Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances

    NASA Astrophysics Data System (ADS)

    Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana

    2016-04-01

    Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of the resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.

  345. Predicting logging residue volumes in the Pacific Northwest

    Treesearch

    Erik C. Berg; Todd A. Morgan; Eric A. Simmons; Stan Zarnoch; Micah G. Scudder

    2016-01-01

    Pacific Northwest forest managers seek estimates of post-timber-harvest woody residue volumes and biomass that can be related to readily available site- and tree-level attributes. To better predict residue production, researchers investigated variability in residue ratios (growing-stock residue volume per mill-delivered volume) across Idaho, Montana, Oregon, and...
  346. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  347. A Volume and Taper Prediction System for Bald Cypress

    Treesearch

    Bernard R. Parresol; James E. Hotvedt; Quang V. Cao

    1987-01-01

    A volume and taper prediction system based on d10 and consisting of a total volume equation, two volume ratio equations (one for diameter limits, the other for height limits), and a taper equation was developed for bald cypress using sample tree data collected in Louisiana. Normal diameter (dn), a subjective variable-...

  348. Physiologically-based, predictive analytics using the heart-rate-to-systolic ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    PubMed

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically based, predictive analytical strategies has not been fully explored. We hypothesize that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis, with 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
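The heart-rate-to-systolic ratio itself is trivial to compute; the sketch below flags patients using a commonly cited shock-index cut-off of 0.9, which is an assumption here since the study's own threshold is not given in this record:

```python
def hr_to_systolic_ratio(heart_rate, systolic_bp):
    """Heart-rate-to-systolic-BP ratio (also known as the shock index)."""
    return heart_rate / systolic_bp

def flag_possible_sepsis(heart_rate, systolic_bp, cutoff=0.9):
    """Flag a patient for early sepsis work-up when the HR:SBP ratio exceeds a cutoff.
    The 0.9 default is a commonly cited shock-index value, not the threshold used in
    the study, which this record does not report."""
    return hr_to_systolic_ratio(heart_rate, systolic_bp) > cutoff

print(flag_possible_sepsis(118, 102))   # ratio ~1.16 -> True
```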
  349. The predictive value of 18F-FDG PET for pathological response of primary tumor in patients with esophageal cancer during or after neoadjuvant chemoradiotherapy: a meta-analysis.

    PubMed

    Cong, Lihong; Wang, Shikun; Gao, Teng; Hu, Likuan

    2016-12-01

    We aimed to review the value of 18-fluoro-deoxy-glucose positron emission tomography for response prediction of the primary tumor in patients with esophageal cancer during or after neoadjuvant chemoradiotherapy. Studies were searched in PubMed, Embase and the Cochrane Library with a specific search strategy. Published articles were included according to criteria established in advance. The included studies were divided into two groups according to the timing of the repeat positron emission tomography: during (Group A) or after (Group B) neoadjuvant chemoradiotherapy. Studies in which the repeat positron emission tomography was performed after neoadjuvant chemoradiotherapy were graded with the Quality Assessment of Diagnostic Accuracy Studies tool. The pooled sensitivity, specificity and diagnostic odds ratio were obtained for both groups on the basis that no threshold effect was present. Fifteen studies were included in the present analysis, and a threshold effect did not exist in either group. The pooled sensitivity, specificity and diagnostic odds ratio were 85%, 59% and 6.82, with 95% confidence intervals of 76-91%, 48-69% and 2.25-20.72, in Group A. The equivalent values were 67%, 69% and 6.34, with 95% confidence intervals of 60-73%, 63-74% and 2.08-19.34, in Group B. The pooled sensitivity was 90% in the four studies in Group B that enrolled only patients with esophageal squamous cell carcinoma. According to the present data, positron emission tomography should not be used routinely to guide treatment strategy in esophageal cancer patients. We speculate that positron emission tomography could be used as a tool to predict treatment response after neoadjuvant chemoradiotherapy in patients with esophageal squamous cell carcinoma. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  350. Diagnostic accuracy of spot urine protein-to-creatinine ratio for proteinuria and its association with adverse pregnancy outcomes in Chinese pregnant patients with pre-eclampsia.

    PubMed

    Cheung, H C; Leung, K Y; Choi, C H

    2016-06-01

    International guidelines have endorsed a spot urine protein-to-creatinine ratio of >30 mg protein/mmol creatinine as an alternative to a 24-hour urine sample to represent significant proteinuria. This study aimed to determine the accuracy of the spot urine protein-to-creatinine ratio in predicting significant proteinuria and adverse pregnancy outcome. This case series was conducted in a regional obstetric unit in Hong Kong. A total of 120 Chinese pregnant patients with pre-eclampsia delivered at Queen Elizabeth Hospital from January 2011 to December 2013 were included. The relationship of the spot urine protein-to-creatinine ratio to 24-hour proteinuria, the accuracy of the ratio against 24-hour urine protein at different cut-offs, and the relationship of the ratio to adverse pregnancy outcome were studied. The spot urine protein-to-creatinine ratio was correlated with 24-hour urine protein, with a Pearson correlation coefficient of 0.914 (P<0.0001), when the ratio was <200 mg/mmol. The optimal threshold of the spot urine protein-to-creatinine ratio for diagnosing proteinuria in Chinese pregnant patients (33 mg/mmol) was similar to that stated in the international literature (30 mg/mmol). A cut-off of 20 mg/mmol provided 100% sensitivity, and 52 mg/mmol provided 100% specificity. There was no significant difference in the spot urine protein-to-creatinine ratio between cases with and without adverse pregnancy outcome. The spot urine protein-to-creatinine ratio had a positive and significant correlation with 24-hour urine results in Chinese pre-eclamptic women when the ratio was <200 mg/mmol. Nonetheless, the ratio was not predictive of adverse pregnancy outcome.
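The reported cut-offs translate directly into a three-way reading of a spot protein-to-creatinine result; the 300 mg/24 h reference for significant proteinuria is the usual definition and is assumed rather than quoted from this record:

```python
def classify_proteinuria(pcr_mg_per_mmol):
    """Three-way reading of a spot protein-to-creatinine ratio using the cut-offs
    reported for this cohort: 20 mg/mmol gave 100% sensitivity and 52 mg/mmol gave
    100% specificity for significant proteinuria (>=300 mg/24 h assumed as the
    reference); values in between are indeterminate."""
    if pcr_mg_per_mmol < 20:
        return "significant proteinuria unlikely"
    if pcr_mg_per_mmol > 52:
        return "significant proteinuria likely"
    return "indeterminate - consider a 24-hour urine collection"

for pcr in (12, 33, 75):
    print(pcr, "->", classify_proteinuria(pcr))
```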
  351. Identifying cochlear implant channels with poor electrode-neuron interface: partial tripolar, single-channel thresholds and psychophysical tuning curves

    PubMed Central

    Bierer, Julie Arenberg; Faulkner, Kathleen F.

    2010-01-01

    Objectives The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar electrode configuration are predictive of wide or tip-shifted psychophysical tuning curves. Design Data were collected from five cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked psychophysical tuning curves were obtained for channels with the highest, lowest, and median tripolar (σ = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (σ = 0) or a more focused partial tripolar (σ ≥ 0.55) configuration. The masker channel and level were varied while the configuration was fixed at σ = 0.5. A standard, three-interval, two-alternative forced-choice procedure was used for thresholds and masked levels. Results Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, σ, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, partial tripolar stimulus, had significantly broader psychophysical tuning curves than the lowest-threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for monopolar probes than for partial tripolar probes, for both the highest- and lowest-threshold channels. Conclusions These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception. PMID:20090533
  352. Multiplicities and thermal runaway of current leads for superconducting magnets

    NASA Astrophysics Data System (ADS)

    Krikkis, Rizos N.

    2017-04-01

    The multiple solutions of conduction- and vapor-cooled copper leads modeling current delivery to a superconducting magnet have been numerically calculated. Both ideal convection and convection with a finite heat transfer coefficient for an imposed coolant mass flow rate have been considered. Because of the nonlinearities introduced by the temperature-dependent material properties, two solutions exist, one stable and one unstable, regardless of the cooling method. The limit points separating the stable from the unstable steady states form the blow-up threshold beyond which any further increase in the operating current results in a thermal runaway. An interesting finding is that the multiplicity persists even when the cold end temperature is raised above the liquid nitrogen temperature. The effect of various parameters, such as the residual resistivity ratio, the overcurrent and the variable conductor cross section, on the bifurcation structure, and their stabilizing effect on the blow-up threshold, is also evaluated.
  353. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prohibits the reference input of the ANC from containing some strength of actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance of the proposed approach is quoted in terms of signal-to-noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational load of the adaptive process is reduced, since the output of the adaptive filter is calculated only during non-speech periods. PMID:22778667

  354. Digital image analysis supports a nuclear-to-cytoplasmic ratio cutoff value of 0.5 for atypical urothelial cells.

    PubMed

    Hang, Jen-Fan; Charu, Vivek; Zhang, M Lisa; VandenBussche, Christopher J

    2017-09-01

    An elevated nuclear-to-cytoplasmic (N:C) ratio of ≥0.5 is a required criterion for the diagnosis of atypical urothelial cells (AUC) in The Paris System for Reporting Urinary Cytology. To validate the N:C ratio cutoff value and its predictive power for high-grade urothelial carcinoma (HGUC), the authors retrospectively reviewed the urinary tract cytology specimens of 15 cases of AUC with HGUC on follow-up (AUC-HGUC) and 33 cases of AUC without HGUC on follow-up (AUC-N-HGUC). The number of atypical cells in each case was recorded, and each atypical cell was photographed and digitally examined to calculate the nuclear size and N:C ratio. On average, the maximum N:C ratios of atypical cells were significantly different between the AUC-HGUC and AUC-N-HGUC cohorts (0.53 vs 0.43; P = .00009), whereas the maximum nuclear sizes of atypical cells (153.43 μm² vs 201.47 μm²; P = .69) and the number of atypical cells per case (10.13 vs 7.88; P = .12) were not significantly different. Receiver operating characteristic analysis demonstrated that the maximum N:C ratio alone had high discriminatory capacity (area under the curve, 79.19%; 95% confidence interval, 64.19%-94.19%). The optimal maximum N:C ratio threshold was 0.486, giving a sensitivity of 73.3% and a specificity of 84.8% for predicting HGUC on follow-up. The identification of AUC with an N:C ratio >0.486 has high predictive power for HGUC on follow-up in AUC specimens. This justifies using the N:C ratio as a required criterion for the AUC category. Individual laboratories using different cytopreparation methods may require independent validation of the N:C ratio cutoff value. Cancer Cytopathol 2017;125:710-6. © 2017 American Cancer Society.
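A small sketch of applying the study's 0.486 cut-off to digitally measured cell areas follows; note that whether the denominator is cytoplasmic area or total cell area varies between laboratories, so the ratio definition used here is an assumption:

```python
def nc_ratio(nuclear_area_um2, cytoplasmic_area_um2):
    """Nuclear-to-cytoplasmic ratio computed as nuclear area over cytoplasmic area.
    Some laboratories use nuclear over total cell area instead, so check the local
    convention before applying any published cut-off."""
    return nuclear_area_um2 / cytoplasmic_area_um2

def flag_high_grade_risk(cell_measurements, cutoff=0.486):
    """Apply the study's optimal cut-off to the maximum N:C ratio among the atypical
    cells measured in one specimen; returns (max ratio, above-cutoff flag)."""
    max_ratio = max(nc_ratio(n, c) for n, c in cell_measurements)
    return max_ratio, max_ratio > cutoff

cells = [(95.0, 210.0), (150.0, 280.0), (120.0, 215.0)]   # (nuclear, cytoplasmic) areas in um^2
print(flag_high_grade_risk(cells))
```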
  355. Risk factors and screening instruments to predict adverse outcomes for undifferentiated older emergency department patients: a systematic review and meta-analysis.

    PubMed

    Carpenter, Christopher R; Shelton, Erica; Fowler, Susan; Suffoletto, Brian; Platts-Mills, Timothy F; Rothman, Richard E; Hogan, Teresita M

    2015-01-01

    A significant proportion of geriatric patients experience suboptimal outcomes following episodes of emergency department (ED) care. Risk stratification screening instruments exist to distinguish vulnerable subsets, but their prognostic accuracy varies. This systematic review quantifies the prognostic accuracy of individual risk factors and ED-validated screening instruments to distinguish patients more or less likely to experience short-term adverse outcomes such as unanticipated ED returns, hospital readmissions, functional decline, or death. A medical librarian and two emergency physicians conducted a medical literature search of PubMed, EMBASE, SCOPUS, CENTRAL, and ClinicalTrials.gov using numerous combinations of search terms, including emergency medical services, risk stratification, geriatric, and multiple related MeSH terms in hundreds of combinations. Two authors hand-searched relevant specialty society research abstracts. Two physicians independently reviewed all abstracts and used the revised Quality Assessment of Diagnostic Accuracy Studies instrument to assess individual study quality. When two or more qualitatively similar studies were identified, meta-analysis was conducted using Meta-DiSc software. Primary outcomes were sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) for predictors of adverse outcomes at 1 to 12 months after the ED encounter. A hypothetical test-treatment threshold analysis was constructed based on the meta-analytic summary estimate of prognostic accuracy for one outcome. A total of 7,940 unique citations were identified, yielding 34 studies for inclusion in this systematic review. Studies were significantly heterogeneous in terms of country, outcomes assessed, and the timing of post-ED outcome assessments. All studies occurred in ED settings and none used published clinical decision rule derivation methodology. Individual risk factors assessed included dementia, delirium, age, dependency, malnutrition, pressure sore risk, and self-rated health. None of these risk factors significantly increased the risk of adverse outcomes (LR+ range = 0.78 to 2.84). The absence of dependency reduces the risk of 1-year mortality (LR- = 0.27) and nursing home placement (LR- = 0.27). Five constructs of frailty were evaluated, but none increased or decreased the risk of adverse outcomes. Three instruments were evaluated in the meta-analysis: Identification of Seniors at Risk, Triage Risk Screening Tool, and Variables Indicative of Placement Risk. None of these instruments significantly increased (LR+ range for various outcomes = 0.98 to 1.40) or decreased (LR- range = 0.53 to 1.11) the risk of adverse outcomes. The test threshold for 3-month functional decline based on the most accurate instrument was 42%, and the treatment threshold was 61%. Risk stratification of geriatric adults following ED care is limited by the lack of pragmatic, accurate, and reliable instruments. Although absence of dependency reduces the risk of 1-year mortality, no individual risk factor, frailty construct, or risk assessment instrument accurately predicts the risk of adverse outcomes in older ED patients. Existing instruments designed to risk stratify older ED patients do not accurately distinguish high- or low-risk subsets. Clinicians, educators, and policy-makers should not use these instruments as valid predictors of post-ED adverse outcomes. Future research to derive and validate feasible ED instruments to distinguish vulnerable elders should employ published decision instrument methods and examine the contributions of alternative variables, such as health literacy and dementia, which often remain clinically occult. © 2014 by the Society for Academic Emergency Medicine.
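The likelihood ratios quoted above can be turned into post-test probabilities with the standard odds form of Bayes' theorem; the 20% baseline risk in the example is illustrative, not a figure from the review:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Standard Bayes update with a likelihood ratio:
    post-test odds = pre-test odds x LR, then convert back to a probability."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. an assumed 20% baseline 1-year mortality risk combined with the LR- of 0.27
# reported for absence of dependency
print(round(posttest_probability(0.20, 0.27), 3))   # ~0.063
```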
Risk factors and screening instruments to predict adverse outcomes for undifferentiated older emergency department patients: a systematic review and meta-analysis.

    PubMed

    Carpenter, Christopher R; Shelton, Erica; Fowler, Susan; Suffoletto, Brian; Platts-Mills, Timothy F; Rothman, Richard E; Hogan, Teresita M

    2015-01-01

    A significant proportion of geriatric patients experience suboptimal outcomes following episodes of emergency department (ED) care. Risk stratification screening instruments exist to distinguish vulnerable subsets, but their prognostic accuracy varies. This systematic review quantifies the prognostic accuracy of individual risk factors and ED-validated screening instruments to distinguish patients more or less likely to experience short-term adverse outcomes like unanticipated ED returns, hospital readmissions, functional decline, or death. A medical librarian and two emergency physicians conducted a medical literature search of PubMed, EMBASE, SCOPUS, CENTRAL, and ClinicalTrials.gov using numerous combinations of search terms, including emergency medical services, risk stratification, geriatric, and multiple related MeSH terms in hundreds of combinations. Two authors hand-searched relevant specialty society research abstracts. Two physicians independently reviewed all abstracts and used the revised Quality Assessment of Diagnostic Accuracy Studies instrument to assess individual study quality. When two or more qualitatively similar studies were identified, meta-analysis was conducted using Meta-DiSc software. Primary outcomes were sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) for predictors of adverse outcomes at 1 to 12 months after the ED encounters. A hypothetical test-treatment threshold analysis was constructed based on the meta-analytic summary estimate of prognostic accuracy for one outcome. A total of 7,940 unique citations were identified, yielding 34 studies for inclusion in this systematic review. Studies were significantly heterogeneous in terms of country, outcomes assessed, and the timing of post-ED outcome assessments. All studies occurred in ED settings and none used published clinical decision rule derivation methodology. Individual risk factors assessed included dementia, delirium, age, dependency, malnutrition, pressure sore risk, and self-rated health. None of these risk factors significantly increased the risk of adverse outcome (LR+ range = 0.78 to 2.84). The absence of dependency reduces the risk of 1-year mortality (LR- = 0.27) and nursing home placement (LR- = 0.27). Five constructs of frailty were evaluated, but none increased or decreased the risk of adverse outcome. Three instruments were evaluated in the meta-analysis: Identification of Seniors at Risk, Triage Risk Screening Tool, and Variables Indicative of Placement Risk. None of these instruments significantly increased (LR+ range for various outcomes = 0.98 to 1.40) or decreased (LR- range = 0.53 to 1.11) the risk of adverse outcomes. The test threshold for 3-month functional decline based on the most accurate instrument was 42%, and the treatment threshold was 61%. Risk stratification of geriatric adults following ED care is limited by the lack of pragmatic, accurate, and reliable instruments. Although absence of dependency reduces the risk of 1-year mortality, no individual risk factor, frailty construct, or risk assessment instrument accurately predicts risk of adverse outcomes in older ED patients. Existing instruments designed to risk stratify older ED patients do not accurately distinguish high- or low-risk subsets. Clinicians, educators, and policy-makers should not use these instruments as valid predictors of post-ED adverse outcomes. Future research to derive and validate feasible ED instruments to distinguish vulnerable elders should employ published decision instrument methods and examine the contributions of alternative variables, such as health literacy and dementia, which often remain clinically occult. © 2014 by the Society for Academic Emergency Medicine.
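Likelihood ratios of the kind summarised above translate into post-test probabilities through the odds form of Bayes' theorem, which is also how test and treatment thresholds are usually applied at the bedside. The helper below is a generic sketch; the 30% pre-test risk in the example is hypothetical and not taken from the review.

    def post_test_probability(pretest_prob, likelihood_ratio):
        """Convert a pre-test probability and a likelihood ratio into a
        post-test probability via the odds form of Bayes' theorem."""
        pre_odds = pretest_prob / (1 - pretest_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # e.g. a hypothetical 30% baseline risk of an adverse outcome combined with
    # the largest single-factor LR+ reported in the review (2.84)
    print(round(post_test_probability(0.30, 2.84), 3))   # ~0.549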
Radiation Awareness for Endovascular Abdominal Aortic Aneurysm Repair in the Hybrid Operating Room. An Instant Patient Risk Chart for Daily Practice.

    PubMed

    de Ruiter, Quirina M; Gijsberts, Crystel M; Hazenberg, Constantijn E; Moll, Frans L; van Herwaarden, Joost A

    2017-06-01

    To determine which patient and C-arm characteristics are the strongest predictors of intraoperative patient radiation dose rates (DRs) during endovascular aneurysm repair (EVAR) procedures and create a patient risk chart. A retrospective analysis was performed of 74 EVAR procedures, including 16,889 X-ray runs using fixed C-arm imaging equipment. Four multivariate log-linear mixed models (with patient as a random effect) were constructed. Mean air kerma DR (DR_AK, mGy/s) and mean dose area product DR (DR_DAP, mGy·cm²/s) were the outcome variables, modelled separately for fluoroscopy and for digital subtraction angiography (DSA). These models were used to predict the maximum radiation duration allowed before a 2-Gy skin threshold (for DR_AK) or a 500-Gy·cm² threshold (for DR_DAP) was reached. The strongest predictor of DR_AK and DR_DAP for fluoroscopy imaging was the radiation protocol, with an increase of 200% when changing from "low" to "medium" and 410% from "low" to "normal." The strongest predictors of DR_AK and DR_DAP for DSA were C-arm angulation, with an increase of 47% per 30° of angulation, and body mass index (BMI), with an increase of 58% for every 5-point increase in BMI. Based on these models, a patient with a BMI of 30 kg/m², combined with 45° of rotation and a field size of 800 cm² in the medium fluoroscopy protocol, has a predicted DR_AK of 0.39 mGy/s (or 85.5 minutes until the 2-Gy skin threshold is reached). While using comparable settings but switching the acquisition to DSA with a "2 frames per second" protocol, the predicted DR_AK will be 6.6 mGy/s (or 5.0 minutes until the 2-Gy threshold is reached). X-ray radiation DRs fluctuate constantly during and between patients based on BMI, the protocols, C-arm position, and the image acquisitions that are used. An instant patient risk chart visualizes these radiation dose fluctuations and provides an overview of the expected duration of X-ray radiation, which can be used to predict when follow-up dose thresholds are reached during abdominal endovascular procedures.
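The 85.5-minute and roughly 5-minute figures quoted above follow directly from dividing the 2-Gy skin-dose threshold by the predicted dose rate. A small helper reproducing that arithmetic is shown below; it assumes a constant dose rate and is not the authors' risk-chart model itself.

    def minutes_to_threshold(dose_rate_mgy_per_s, threshold_gy=2.0):
        """Beam-on minutes before a cumulative dose threshold is reached,
        assuming the predicted dose rate stays constant."""
        return threshold_gy * 1000.0 / dose_rate_mgy_per_s / 60.0

    print(round(minutes_to_threshold(0.39), 1))  # fluoroscopy example: ~85.5 min
    print(round(minutes_to_threshold(6.6), 1))   # DSA example: ~5 min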
Is blood pressure reduction a valid surrogate endpoint for stroke prevention? An analysis incorporating a systematic review of randomised controlled trials, a by-trial weighted errors-in-variables regression, the surrogate threshold effect (STE) and the biomarker-surrogacy (BioSurrogate) evaluation schema (BSES)

    PubMed Central

    2012-01-01

    Background: Blood pressure is considered to be a leading example of a valid surrogate endpoint. The aims of this study were to (i) formally evaluate systolic and diastolic blood pressure reduction as a surrogate endpoint for stroke prevention and (ii) determine what blood pressure reduction would predict a stroke benefit. Methods: We identified randomised trials of at least six months duration comparing any pharmacologic anti-hypertensive treatment to placebo or no treatment, and reporting baseline blood pressure, on-trial blood pressure, and fatal and non-fatal stroke. Trials with fewer than five strokes in at least one arm were excluded. Errors-in-variables weighted least squares regression modelled the reduction in stroke as a function of systolic blood pressure reduction and diastolic blood pressure reduction respectively. The lower 95% prediction band was used to determine the minimum systolic blood pressure and diastolic blood pressure difference, the surrogate threshold effect (STE), below which there would be no predicted stroke benefit. The STE was used to generate the surrogate threshold effect proportion (STEP), a surrogacy metric, which with the R-squared trial-level association was used to evaluate blood pressure as a surrogate endpoint for stroke using the Biomarker-Surrogacy Evaluation Schema (BSES3). Results: In 18 qualifying trials representing all pharmacologic drug classes of antihypertensives, assuming a reliability coefficient of 0.9, the surrogate threshold effect for a stroke benefit was 7.1 mmHg for systolic blood pressure and 2.4 mmHg for diastolic blood pressure. The trial-level association was 0.41 and 0.64, and the STEP was 66% and 78%, for systolic and diastolic blood pressure respectively. The STE and STEP were more robust to measurement error in the independent variable than R-squared trial-level associations. Using the BSES3, assuming a reliability coefficient of 0.9, systolic blood pressure was a B+ grade and diastolic blood pressure was an A grade surrogate endpoint for stroke prevention. In comparison, using the same stroke data sets, no STEs could be estimated for cardiovascular (CV) mortality or all-cause mortality reduction, although the STE for CV mortality approached 25 mmHg for systolic blood pressure. Conclusions: In this report we provide the first surrogate threshold effect (STE) values for systolic and diastolic blood pressure. We suggest the STEs have face and content validity, evidenced by the inclusivity of trial populations, subject populations and pharmacologic intervention populations in their calculation. We propose that the STE and STEP metrics offer another method of evaluating the evidence supporting surrogate endpoints. We demonstrate how surrogacy evaluations are strengthened if formally evaluated within specific-context evaluation frameworks using the Biomarker-Surrogacy Evaluation Schema (BSES3), and we discuss the implications of our evaluation of blood pressure on other biomarkers and patient-reported instruments in relation to surrogacy metrics and trial design.

    PMID:22409774
Trapping volumetric measurement by multidetector CT in chronic obstructive pulmonary disease: Effect of CT threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaohua; Yuan, Huishu; Duan, Jianghui

    2013-08-15

    Purpose: The purpose of this study was to evaluate the effect of various computed tomography (CT) thresholds on trapping volumetric measurements by multidetector CT in chronic obstructive pulmonary disease (COPD). Methods: Twenty-three COPD patients were scanned with a 64-slice CT scanner in both the inspiratory and expiratory phase. CT thresholds of −950 HU in inspiration and −950 to −890 HU in expiration were used, after which trapping volumetric measurements were made using computer software. Trapping volume percentage (Vtrap%) under the different CT thresholds in the expiratory phase and below −950 HU in the inspiratory phase was compared and correlated with lung function. Results: Mean Vtrap% was similar under −930 HU in the expiratory phase and below −950 HU in the inspiratory phase, being 13.18 ± 9.66 and 13.95 ± 6.72 (both lungs), respectively; this difference was not significant (P = 0.240). Vtrap% under −950 HU in the inspiratory phase and below the −950 to −890 HU thresholds in the expiratory phase was moderately negatively correlated with the ratio of forced expiratory volume in one second to forced vital capacity and with the measured value of forced expiratory volume in one second as a percentage of the predicted value. Conclusions: Trapping volumetric measurement with multidetector CT is a promising method for the quantification of COPD. It is important to know the effect of various CT thresholds on trapping volumetric measurements.
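The trapping volume percentage used above is, in essence, the fraction of lung voxels at or below an attenuation threshold on the expiratory scan. The sketch below shows that calculation on a synthetic volume; the threshold values mirror the ones studied, but the commercial software's lung segmentation and any vessel-exclusion steps are not reproduced.

    import numpy as np

    def trapping_volume_percent(expiratory_hu, lung_mask, threshold_hu=-930):
        """Per cent of lung voxels at or below a CT attenuation threshold on the
        expiratory scan, i.e. the air-trapping volume percentage (Vtrap%)."""
        lung_voxels = expiratory_hu[lung_mask]
        return 100.0 * np.mean(lung_voxels <= threshold_hu)

    # toy volume: 64^3 voxels of synthetic lung attenuation values (HU)
    rng = np.random.default_rng(0)
    vol = rng.normal(-860, 60, size=(64, 64, 64))
    mask = np.ones(vol.shape, dtype=bool)
    for t in (-950, -930, -890):
        print(t, round(trapping_volume_percent(vol, mask, t), 1))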
The relationship between Cho/NAA and glioma metabolism: implementation for margin delineation of cerebral gliomas.

    PubMed

    Guo, Jun; Yao, Chengjun; Chen, Hong; Zhuang, Dongxiao; Tang, Weijun; Ren, Guang; Wang, Yin; Wu, Jinsong; Huang, Fengping; Zhou, Liangfu

    2012-08-01

    The marginal delineation of gliomas cannot be defined by conventional imaging due to their infiltrative growth pattern. Here we investigate the relationship between changes in glioma metabolism by proton magnetic resonance spectroscopic imaging ((1)H-MRSI) and histopathological findings in order to determine an optimal threshold value of choline/N-acetyl-aspartate (Cho/NAA) that can be used to define the extent of glioma spread. Eighteen patients with different grades of glioma were examined using (1)H-MRSI. Needle biopsies were performed under the guidance of neuronavigation prior to craniotomy. Intraoperative magnetic resonance imaging (MRI) was performed to evaluate the accuracy of sampling. Haematoxylin and eosin, and immunohistochemical staining with IDH1, MIB-1, p53, CD34 and glial fibrillary acidic protein (GFAP) antibodies were performed on all samples. Logistic regression analysis was used to determine the relationship between Cho/NAA and MIB-1, p53, CD34, and the degree of tumour infiltration. The clinical threshold ratio distinguishing tumour tissue in high-grade (grades III and IV) glioma (HGG) and low-grade (grade II) glioma (LGG) was calculated. In HGG, higher Cho/NAA ratios were associated with a greater probability of higher MIB-1 counts, stronger CD34 expression, and tumour infiltration. Ratio threshold values of 0.5, 1.0, 1.5 and 2.0 appeared to predict the specimens containing tumour with respective probabilities of 0.38, 0.60, 0.79 and 0.90 in HGG and 0.16, 0.39, 0.67 and 0.87 in LGG. HGG and LGG exhibit different spectroscopic patterns. Using (1)H-MRSI to guide the extent of resection has the potential to improve the clinical outcome of glioma surgery.
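The probabilities quoted for the HGG group are consistent with a simple logistic curve in the Cho/NAA ratio. The sketch below back-fits a two-parameter logistic through the published end points (0.5, 0.38) and (2.0, 0.90) and checks the intermediate ratios; it illustrates the shape of the relationship only and is not the authors' regression on the biopsy data.

    import numpy as np

    def fit_logistic_from_two_points(x1, p1, x2, p2):
        """Recover slope/intercept of a logistic curve p = 1/(1+exp(-(a+b*x)))
        that passes through two (ratio, probability) pairs."""
        logit = lambda p: np.log(p / (1 - p))
        b = (logit(p2) - logit(p1)) / (x2 - x1)
        a = logit(p1) - b * x1
        return a, b

    a, b = fit_logistic_from_two_points(0.5, 0.38, 2.0, 0.90)   # HGG end points
    for ratio in (0.5, 1.0, 1.5, 2.0):
        p = 1 / (1 + np.exp(-(a + b * ratio)))
        print(ratio, round(p, 2))      # ~0.38, 0.60, 0.79, 0.90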
Early prediction of thiopurine-induced hepatotoxicity in inflammatory bowel disease.

    PubMed

    Wong, D R; Coenen, M J H; Derijks, L J J; Vermeulen, S H; van Marrewijk, C J; Klungel, O H; Scheffer, H; Franke, B; Guchelaar, H-J; de Jong, D J; Engels, L G J B; Verbeek, A L M; Hooymans, P M

    2017-02-01

    Hepatotoxicity, gastrointestinal complaints and general malaise are common limiting adverse reactions of azathioprine and mercaptopurine in IBD patients, often related to high steady-state 6-methylmercaptopurine ribonucleotide (6-MMPR) metabolite concentrations. The aim was to determine the predictive value of 6-MMPR concentrations 1 week after treatment initiation (T1) for the development of these adverse reactions, especially hepatotoxicity, during the first 20 weeks of treatment. The cohort study consisted of the first 270 IBD patients starting thiopurine treatment as part of the Dutch randomised controlled trial evaluating pre-treatment thiopurine S-methyltransferase genotype testing (ClinicalTrials.gov NCT00521950). Blood samples for metabolite assessment were collected at T1. Hepatotoxicity was defined by alanine aminotransaminase elevations >2 times the upper normal limit or a ratio of alanine aminotransaminase/alkaline phosphatase ≥5. Forty-seven patients (17%) presented hepatotoxicity during the first 20 weeks of thiopurine treatment. A T1 6-MMPR threshold of 3615 pmol/8 × 10⁸ erythrocytes was defined. Analysis of patients on a stable thiopurine dose (n = 174) showed that those exceeding the 6-MMPR threshold were at increased risk of hepatotoxicity: OR = 3.8 (95% CI: 1.8-8.0). Age, male gender and BMI were significant determinants. A predictive algorithm was developed based on these determinants and the 6-MMPR threshold to assess hepatotoxicity risk (AUC = 0.83; 95% CI: 0.75-0.91). 6-MMPR concentrations above the threshold also correlated with gastrointestinal complaints: OR = 2.4 (95% CI: 1.4-4.3), and general malaise: OR = 2.0 (95% CI: 1.1-3.7). In more than 80% of patients, thiopurine-induced hepatotoxicity could be explained by elevated T1 6-MMPR concentrations and the independent risk factors age, gender and BMI, allowing personalised thiopurine treatment in IBD to prevent early failure. © 2016 John Wiley & Sons Ltd.
The pre-operative levels of haemoglobin in the blood can be used to predict the risk of allogenic blood transfusion after total knee arthroplasty.

    PubMed

    Maempel, J F; Wickramasinghe, N R; Clement, N D; Brenkel, I J; Walmsley, P J

    2016-04-01

    The pre-operative level of haemoglobin is the strongest predictor of the peri-operative requirement for blood transfusion after total knee arthroplasty (TKA). There are, however, no studies reporting a value that could be considered appropriate pre-operatively. This study aimed to identify threshold pre-operative levels of haemoglobin that would predict the requirement for blood transfusion in patients who undergo TKA. Analysis of receiver operating characteristic (ROC) curves of 2284 consecutive patients undergoing unilateral TKA was used to determine gender-specific thresholds predicting peri-operative transfusion with the highest combined sensitivity and specificity (area under ROC curve 0.79 for males; 0.78 for females). Threshold levels of 13.75 g/dl for males and 12.75 g/dl for females were identified. The rates of transfusion in males and females above these levels were 3.37% and 7.11%, respectively, while below these levels they were 16.13% and 28.17%. Pre-operative anaemia increased the rate of transfusion by 6.38 times in males and 6.27 times in females. Blood transfusion was associated with an increased incidence of early post-operative confusion (odds ratio (OR) = 3.44), cardiac arrhythmia (OR = 5.90), urinary catheterisation (OR = 1.60), deep infection (OR = 4.03) and mortality (OR = 2.35) one year post-operatively, and with an increased length of stay (eight days vs six days, p < 0.001). Uncorrected low pre-operative levels of haemoglobin put patients at potentially modifiable risk and attempts should be made to correct this before TKA. Target thresholds for pre-operative haemoglobin levels in males and females are proposed. Low pre-operative haemoglobin levels put patients at unnecessary risk and should be corrected prior to surgery. ©2016 The British Editorial Society of Bone & Joint Surgery.

Method and apparatus to predict the remaining service life of an operating system

    DOEpatents

    Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A.; Ferryman, Thomas A.; Skorpik, James R.; Wilson, Bary W.

    2008-11-25

    A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
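The patented method above trends a degradation figure of merit against an independent variable and compares the trendline with a failure threshold. A bare-bones version of that idea, a linear fit extrapolated to the threshold crossing, is sketched below with hypothetical numbers; the patent's sensor validation, multiple trendlines and uncertainty intervals are omitted.

    import numpy as np

    def remaining_life(times, fom_values, failure_threshold):
        """Fit a linear trend to a degradation figure of merit and extrapolate
        to the time at which the failure threshold is predicted to be crossed."""
        slope, intercept = np.polyfit(times, fom_values, 1)
        if slope <= 0:
            return np.inf                       # no degradation trend detected
        t_cross = (failure_threshold - intercept) / slope
        return max(t_cross - times[-1], 0.0)    # remaining service time

    hours = np.array([0, 100, 200, 300, 400], float)
    fom = np.array([0.02, 0.11, 0.19, 0.31, 0.42])   # hypothetical degradation metric
    print(round(remaining_life(hours, fom, failure_threshold=1.0), 1))  # hours left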
LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold; a code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
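The "copy the protograph N times, then permute the edges" construction mentioned above can be sketched as a lifting of a small base matrix. The example below uses distinct cyclic-shift permutations for parallel edges and an invented base matrix, not the protograph from the article, so it only illustrates the mechanics of the construction.

    import numpy as np

    def lift_protograph(base, N, rng=None):
        """Copy-and-permute (lifting) of a protograph: each base-matrix entry b
        becomes the superposition of b disjoint N x N permutation matrices,
        giving the parity-check matrix of a code N times the protograph size."""
        rng = np.random.default_rng(rng)
        m, n = base.shape
        H = np.zeros((m * N, n * N), dtype=np.uint8)
        I = np.eye(N, dtype=np.uint8)
        for i in range(m):
            for j in range(n):
                # distinct cyclic shifts keep parallel protograph edges disjoint
                shifts = rng.choice(N, size=base[i, j], replace=False)
                for s in shifts:
                    H[i*N:(i+1)*N, j*N:(j+1)*N] |= np.roll(I, s, axis=1)
        return H

    # a small illustrative base matrix (not the protograph from the article)
    B = np.array([[1, 2, 1, 1],
                  [2, 1, 1, 1]])
    H = lift_protograph(B, N=8, rng=1)
    print(H.shape, H.sum(axis=0)[:4])   # variable-node degrees follow the base columns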
Epidemic spreading with activity-driven awareness diffusion on multiplex network.

    PubMed

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy for the prediction of the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explored the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value where the dynamical process on the awareness layer can control the onset of epidemics; if it is a threshold model, the epidemic threshold exhibits an abrupt transition with the local awareness ratio α approximating 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer is a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time scales of structural changes in multiplex networks.
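For orientation, the sketch below computes the classical quenched mean-field SIS threshold on a single static contact layer, beta_c ≈ mu / lambda_max(A). The article's actual contribution, extending the microscopic Markov chain approach to a two-layer multiplex with an activity-driven awareness layer, is not reproduced here; the random graph and rates are placeholders.

    import numpy as np

    def sis_epidemic_threshold(adjacency, recovery_rate=1.0):
        """Quenched mean-field estimate of the SIS epidemic threshold on one
        static contact layer: beta_c ~ recovery_rate / lambda_max(A)."""
        lam_max = np.max(np.linalg.eigvalsh(adjacency.astype(float)))
        return recovery_rate / lam_max

    # toy contact network: symmetric Erdos-Renyi graph on 200 nodes
    rng = np.random.default_rng(0)
    n, p = 200, 0.05
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1); A = A + A.T          # symmetric, no self-loops
    print(round(sis_epidemic_threshold(A, recovery_rate=1.0), 4))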
Psychiatric comorbidity does not only depend on diagnostic thresholds: an illustration with major depressive disorder and generalized anxiety disorder.

    PubMed

    van Loo, Hanna M; Schoevers, Robert A; Kendler, Kenneth S; de Jonge, Peter; Romeijn, Jan-Willem

    2016-02-01

    High rates of psychiatric comorbidity are subject of debate: to what extent do they depend on classification choices such as diagnostic thresholds? This paper investigates the influence of different thresholds on rates of comorbidity between major depressive disorder (MDD) and generalized anxiety disorder (GAD). Point prevalence of comorbidity between MDD and GAD was measured in 74,092 subjects from the general population (LifeLines) according to Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) criteria. Comorbidity rates were compared for different thresholds by varying the number of necessary criteria from ≥1 to all nine symptoms for MDD, and from ≥1 to all seven symptoms for GAD. According to DSM thresholds, 0.86% had MDD only, 2.96% GAD only, and 1.14% both MDD and GAD (odds ratio (OR) 42.6). Lower thresholds for MDD led to higher rates of comorbidity (1.44% for ≥4 of nine MDD symptoms, OR 34.4), whereas lower thresholds for GAD hardly influenced comorbidity (1.16% for ≥3 of seven GAD symptoms, OR 38.8). Specific patterns in the distribution of symptoms within the population explained this finding: 37.3% of subjects with core criteria of MDD and GAD reported subthreshold MDD symptoms, whereas only 7.6% reported subthreshold GAD symptoms. Lower thresholds for MDD increased comorbidity with GAD, but not vice versa, owing to specific symptom patterns in the population. Generally, comorbidity rates result from both empirical symptom distributions and classification choices and cannot be reduced to either of these exclusively. This insight invites further research into the formation of disease concepts that allow for reliable predictions and targeted therapeutic interventions. © 2015 Wiley Periodicals, Inc.
Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity-duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and we develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris-flow occurrence while minimizing the rates of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter durations (≤60 min) are better predictors of post-fire debris-flow initiation than longer-duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
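An objective threshold of the kind described above can be found by sweeping candidate intensities and scoring each against the observed debris-flow responses. The sketch below uses the true skill statistic (hit rate minus false-alarm rate) as the objective; the study's exact objective function and its rainfall data are not reproduced, so the numbers are hypothetical.

    import numpy as np

    def objective_threshold(intensities, debris_flow_occurred):
        """Sweep candidate rainfall intensities and pick the threshold that best
        balances false alarms (Type I) and failed alarms (Type II), here by
        maximising the true skill statistic."""
        x = np.asarray(intensities, float)
        y = np.asarray(debris_flow_occurred, bool)
        best_t, best_score = None, -np.inf
        for t in np.unique(x):
            alarm = x >= t
            hit_rate = np.mean(alarm[y])           # 1 - Type II error rate
            false_alarm_rate = np.mean(alarm[~y])  # Type I error rate
            score = hit_rate - false_alarm_rate
            if score > best_score:
                best_t, best_score = t, score
        return best_t, best_score

    # hypothetical short-duration intensities (mm/h) over a burned area
    i15 = [4, 6, 9, 12, 15, 18, 22, 27, 31, 36]
    flow = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
    print(objective_threshold(i15, flow))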
Developing a synthetic national population to investigate the impact of different cardiovascular disease risk management strategies: A derivation and validation study

    PubMed Central

    Jackson, Rod

    2017-01-01

    Background: Many national cardiovascular disease (CVD) risk factor management guidelines now recommend that drug treatment decisions should be informed primarily by patients' multi-variable predicted risk of CVD, rather than on the basis of single risk factor thresholds. To investigate the potential impact of treatment guidelines based on CVD risk thresholds at a national level requires individual-level data representing the multi-variable CVD risk factor profiles for a country's total adult population. As these data are seldom, if ever, available, we aimed to create a synthetic population representing the joint CVD risk factor distributions of the adult New Zealand population. Methods and results: A synthetic population of 2,451,278 individuals, representing the actual age, gender, ethnicity and social deprivation composition of people aged 30-84 years who completed the 2013 New Zealand census, was generated using Monte Carlo sampling. Each 'synthetic' person was then probabilistically assigned values of the remaining cardiovascular disease (CVD) risk factors required for predicting their CVD risk, based on data from the national census, national hospitalisation and drug dispensing databases, and a large regional cohort study, using Monte Carlo sampling and multiple imputation. Where possible, the synthetic population CVD risk distributions for each non-demographic risk factor were validated against independent New Zealand data sources. Conclusions: We were able to develop a synthetic national population with realistic multi-variable CVD risk characteristics. The construction of this population is the first step in the development of a micro-simulation model intended to investigate the likely impact of a range of national CVD risk management strategies that will inform CVD risk management guideline updates in New Zealand and elsewhere.

    PMID:28384217
Mechanisms of Firing Patterns in Fast-Spiking Cortical Interneurons

    PubMed Central

    Golomb, David; Donner, Karnit; Shacham, Liron; Shlosberg, Dan; Amitai, Yael; Hansel, David

    2007-01-01

    Cortical fast-spiking (FS) interneurons display highly variable electrophysiological properties. Their spike responses to step currents occur almost immediately following the step onset or after a substantial delay, during which subthreshold oscillations are frequently observed. Their firing patterns include high-frequency tonic firing and rhythmic or irregular bursting (stuttering). What is the origin of this variability? In the present paper, we hypothesize that it emerges naturally if one assumes a continuous distribution of properties in a small set of active channels. To test this hypothesis, we construct a minimal, single-compartment conductance-based model of FS cells that includes transient Na+, delayed-rectifier K+, and slowly inactivating d-type K+ conductances. The model is analyzed using nonlinear dynamical system theory. For small Na+ window current, the neuron exhibits high-frequency tonic firing. At current threshold, the spike response is almost instantaneous for small d-current conductance, g_d, and it is delayed for larger g_d. As g_d further increases, the neuron stutters. Noise substantially reduces the delay duration and induces subthreshold oscillations. In contrast, when the Na+ window current is large, the neuron always fires tonically. Near threshold, the firing rates are low, and the delay to firing is only weakly sensitive to noise; subthreshold oscillations are not observed. We propose that the variability in the response of cortical FS neurons is a consequence of heterogeneities in their g_d and in the strength of their Na+ window current. We predict the existence of two types of firing patterns in FS neurons, differing in the sensitivity of the delay duration to noise, in the minimal firing rate of the tonic discharge, and in the existence of subthreshold oscillations. We report experimental results from intracellular recordings supporting this prediction.

    PMID:17696606
Developing a synthetic national population to investigate the impact of different cardiovascular disease risk management strategies: A derivation and validation study.

    PubMed

    Knight, Josh; Wells, Susan; Marshall, Roger; Exeter, Daniel; Jackson, Rod

    2017-01-01

    Many national cardiovascular disease (CVD) risk factor management guidelines now recommend that drug treatment decisions should be informed primarily by patients' multi-variable predicted risk of CVD, rather than on the basis of single risk factor thresholds. To investigate the potential impact of treatment guidelines based on CVD risk thresholds at a national level requires individual-level data representing the multi-variable CVD risk factor profiles for a country's total adult population. As these data are seldom, if ever, available, we aimed to create a synthetic population representing the joint CVD risk factor distributions of the adult New Zealand population. A synthetic population of 2,451,278 individuals, representing the actual age, gender, ethnicity and social deprivation composition of people aged 30-84 years who completed the 2013 New Zealand census, was generated using Monte Carlo sampling. Each 'synthetic' person was then probabilistically assigned values of the remaining cardiovascular disease (CVD) risk factors required for predicting their CVD risk, based on data from the national census, national hospitalisation and drug dispensing databases, and a large regional cohort study, using Monte Carlo sampling and multiple imputation. Where possible, the synthetic population CVD risk distributions for each non-demographic risk factor were validated against independent New Zealand data sources. We were able to develop a synthetic national population with realistic multi-variable CVD risk characteristics. The construction of this population is the first step in the development of a micro-simulation model intended to investigate the likely impact of a range of national CVD risk management strategies that will inform CVD risk management guideline updates in New Zealand and elsewhere.
Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease.

    PubMed

    Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa; Chung, Won-Ho

    2013-06-01

    Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results in MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to the VEMP results. In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). VEMP parameters have an important clinical role in MD. In particular, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early-stage MD.

A Microstructure-Based Time-Dependent Crack Growth Model for Life and Reliability Prediction of Turbopropulsion Systems

    NASA Astrophysics Data System (ADS)

    Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.

    2014-01-01

    The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔK_th or K_th) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
PP087. Multicenter external validation and recalibration of a model for preconceptional prediction of recurrent early-onset preeclampsia.

    PubMed

    van Kuijk, Sander; Delahaije, Denise; Dirksen, Carmen; Scheepers, Hubertina C J; Spaanderman, Marc; Ganzevoort, W; Duvekot, Hans; Oudijk, M A; van Pampus, M G; von Dadelszen, Peter; Peeters, Louis L; Smiths, Luc

    2013-04-01

    In an earlier paper we reported on the development of a model aimed at the prediction of preeclampsia recurrence, based on variables obtained before the next pregnancy (fasting glucose, BMI, previous birth of a small-for-gestational-age infant, duration of the previous pregnancy, and the presence of hypertension). The aim was to externally validate and recalibrate this prediction model for the risk of recurrence of early-onset preeclampsia. We collected data about the course and outcome of the next ongoing pregnancy in 229 women with a history of early-onset preeclampsia. Recurrence was defined as preeclampsia requiring delivery before 34 weeks. We computed the risk of recurrence and assessed model performance. In addition, we constructed a table comparing sensitivity, specificity, and predictive values for different suggested risk thresholds. Early-onset preeclampsia recurred in 6.6% of women. The model systematically underestimated recurrence risk. The model's discriminative ability was modest: the area under the receiver operating characteristic curve was 58.9% (95% CI: 45.1-72.7). Using relevant risk thresholds, the model created groups that were only moderately different in terms of their average risk of recurrent preeclampsia (Table 1). Compared to an AUC of 65% in the development cohort, the discriminative ability of the model was diminished. It had inadequate performance to classify women into clinically relevant risk groups. Copyright © 2013. Published by Elsevier B.V.
Steroid profiles of professional soccer players: an international comparative study.

    PubMed

    Strahm, E; Sottas, P-E; Schweizer, C; Saugy, M; Dvorak, J; Saudan, C

    2009-12-01

    Urinary steroid profiling is used in doping controls to detect testosterone abuse. A testosterone over epitestosterone (T/E) ratio exceeding 4.0 is considered suspicious of testosterone administration, irrespective of individual heterogeneous factors such as the athlete's ethnicity. A deletion polymorphism in the UGT2B17 gene was demonstrated to account for a significant part of the interindividual variability in the T/E between Caucasians and Asians. Here, the variability of urinary steroid profiles was examined in a widely heterogeneous cohort of professional soccer players. The steroid profile of 57 Africans, 32 Asians, 50 Caucasians and 32 Hispanics was determined by gas chromatography-mass spectrometry. Significant differences were observed between all ethnic groups. After estimation of the prevalence of the UGT2B17 deletion/deletion genotype (African: 22%; Asian: 81%; Caucasian: 10%; Hispanic: 7%), ethnic-specific thresholds were developed for a specificity of 99% for the T/E (African: 5.6; Asian: 3.8; Caucasian: 5.7; Hispanic: 5.8). Finally, another polymorphism could be hypothesised in Asians based on the specific concentration ratio of 5alpha-/5beta-androstane-3alpha,17beta-diol in urine. These results demonstrate that a unique and non-specific threshold to evidence testosterone misuse is not fit for purpose. An athlete's endocrinological passport consisting of a longitudinal follow-up together with the ethnicity and/or the genotype would strongly enhance the detection of testosterone abuse. Finally, additional genotyping studies should be undertaken to determine whether the remaining unexplained disparities have an environmental or a genetic origin.

Amyloid and tau signatures of brain metabolic decline in preclinical Alzheimer's disease.

    PubMed

    Pascoal, Tharick A; Mathotaarachchi, Sulantha; Shin, Monica; Park, Ah Yeon; Mohades, Sara; Benedet, Andrea L; Kang, Min Su; Massarweh, Gassan; Soucy, Jean-Paul; Gauthier, Serge; Rosa-Neto, Pedro

    2018-06-01

    We aimed to determine the amyloid (Aβ) and tau biomarker levels associated with imminent Alzheimer's disease (AD)-related metabolic decline in cognitively normal individuals. A threshold analysis was performed in 120 cognitively normal elderly individuals by modelling 2-year declines in brain glucose metabolism measured with [18F]fluorodeoxyglucose ([18F]FDG) as a function of [18F]florbetapir Aβ positron emission tomography (PET) and cerebrospinal fluid phosphorylated tau biomarker thresholds. Additionally, using a novel voxel-wise analytical framework, we determined the sample sizes needed to test an estimated 25% drug effect with 80% power on changes in FDG uptake over 2 years at every brain voxel. The combination of [18F]florbetapir standardized uptake value ratios and phosphorylated tau levels more than one standard deviation above their respective thresholds for biomarker abnormality was the best predictor of metabolic decline in individuals with preclinical AD. We also found that a clinical trial using these thresholds would require as few as 100 individuals to test a 25% drug effect on AD-related metabolic decline over 2 years. These results highlight the new concept that combined Aβ and tau thresholds can predict imminent neurodegeneration, offering an alternative framework with high statistical power for testing the effect of disease-modifying therapies on [18F]FDG uptake decline over a typical 2-year clinical trial period in individuals with preclinical AD.
Noninvasive determination of anaerobic threshold by monitoring the %SpO2 changes and respiratory gas exchange.

    PubMed

    Nikooie, Roohollah; Gharakhanlo, Reza; Rajabi, Hamid; Bahraminegad, Morteza; Ghafari, Ali

    2009-10-01

    The purpose of this study was to determine the validity of noninvasive anaerobic threshold (AT) estimation using %SpO2 (arterial oxyhemoglobin saturation) changes and respiratory gas exchange. Fifteen active, healthy males performed 2 graded exercise tests on a motor-driven treadmill in 2 separate sessions. Respiratory gas exchange, heart rate (HR), lactate concentration, and %SpO2 were measured continuously throughout the test. Anaerobic threshold was determined based on blood lactate concentration (lactate-AT), %SpO2 changes (%SpO2-AT), respiratory exchange ratio (RER-AT), the V-slope method (V-slope-AT), and the ventilatory equivalent for O2 (EqO2-AT). Blood lactate measurement was considered the gold-standard assessment of AT and was used to confirm the validity of the other, noninvasive methods. The mean VO2 corresponding to lactate-AT, %SpO2-AT, RER-AT, V-slope-AT, and EqO2-AT was 2176.6 ± 206.4, 1909.5 ± 221.4, 2141.2 ± 245.6, 1933.7 ± 216.4, and 1975 ± 232.4, respectively. Intraclass correlation coefficient (ICC) analysis indicated a significant correlation between the 4 noninvasive methods and the criterion method. Bland-Altman plots showed good agreement between the VO2 corresponding to AT in each method and lactate-AT (95% confidence interval (CI)). Our results indicate that the noninvasive and simple procedure of monitoring %SpO2 is a valid method for estimating AT. Also, in the present study, the respiratory exchange ratio (RER) method appeared to be the best respiratory index for noninvasive estimation of the anaerobic threshold, and the heart rate corresponding to the AT predicted by this method can be used by coaches and athletes to define training zones.
Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29931633','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29931633"><span>Prediction and optimization of CI engine performance fuelled with Calophyllum inophyllum diesel blend using response surface methodology (RSM).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Venugopal, Paramaguru; Kasimani, Ramesh; Chinnasamy, Suresh</p> <p>2018-06-21</p> <p>The transportation demand in India is increasing tremendously, which arouses the energy consumption by 4.1 to 6.1% increases each year from 2010 to 2050. In addition, the private vehicle ownership keeps on increasing almost 10% per year during the last decade and reaches 213 million tons of oil consumption in 2016. Thus, this makes India the third largest importer of crude oil in the world. Because of this problem, there is a need of promoting the alternative fuels (biodiesel) which are from different feedstocks for the transportation. This alternative fuel has better emission characteristics compared to neat diesel, hence the biodiesel can be used as direct alternative for diesel and it can also be blended with diesel to get better performance. However, the effect of compression ratio, injection timing, injection pressure, composition-blend ratio and air-fuel ratio, and the shape of the cylinder may affect the performance and emission characteristics of the diesel engine. This article deals with the effect of compression ratio in the performance of the engine while using Honne oil diesel blend and also to find out the optimum compression ratio. So the experimentations are conducted using Honne oil diesel blend-fueled CI engine at variable load conditions and at constant speed operations. In order to find out the optimum compression ratio, experiments are carried out on a single-cylinder, four-stroke variable compression ratio diesel engine, and it is found that 18:1 compression ratio gives better performance than the lower compression ratios. Engine performance tests were carried out at different compression ratio values. Using experimental data, regression model was developed and the values were predicted using response surface methodology. Then the predicted values were validated with the experimental results and a maximum error percentage of 6.057 with an average percentage of error as 3.57 were obtained. The optimum numeric factors for different responses were also selected using RSM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4556702','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4556702"><span>Sparse Zero-Sum Games as Stable Functional Feature Selection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sokolovska, Nataliya; Teytaud, Olivier; Rizkalla, Salwa; Clément, Karine; Zucker, Jean-Daniel</p> <p>2015-01-01</p> <p>In large-scale systems biology applications, features are structured in hidden functional categories whose predictive power is identical. 
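The response-surface-methodology (RSM) entry above fits engine responses to a quadratic polynomial in the design factors and then reads the optimum off the fitted surface. The sketch below illustrates that workflow with scikit-learn; the compression-ratio/load design points and the brake-thermal-efficiency values are invented for illustration and are not the article's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical design points: compression ratio and load (%), with a fabricated
# brake thermal efficiency (%) response.
X = np.array([[16, 25], [16, 50], [16, 75], [16, 100],
              [17, 25], [17, 50], [17, 75], [17, 100],
              [18, 25], [18, 50], [18, 75], [18, 100]], dtype=float)
y = np.array([22.1, 25.3, 27.8, 28.9,
              23.0, 26.4, 29.1, 30.2,
              23.6, 27.1, 30.0, 31.1])

# Second-order (quadratic) response surface: 1, x1, x2, x1^2, x1*x2, x2^2.
poly = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression(fit_intercept=False).fit(poly.fit_transform(X), y)

# Scan a grid of factor settings and report the predicted optimum.
cr = np.linspace(16, 18, 21)
load = np.linspace(25, 100, 31)
grid = np.array([(c, l) for c in cr for l in load])
pred = model.predict(poly.transform(grid))
best = grid[pred.argmax()]
print(f"predicted optimum: CR={best[0]:.1f}, load={best[1]:.0f}%, BTE={pred.max():.1f}%")
```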
Feature selection, therefore, can not only reduce the dimensionality of the problem but also reveal knowledge about the functional classes of variables. In this contribution, we propose a framework based on a sparse zero-sum game which performs a stable functional feature selection. In particular, the approach is based on ranking feature subsets with a thresholding stochastic bandit. We provide a theoretical analysis of the introduced algorithm. We illustrate by experiments on both synthetic and real complex data that the proposed method is competitive from the predictive and stability viewpoints. PMID:26325268</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29772077','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29772077"><span>A feature alignment score for online cone-beam CT-based image-guided radiotherapy for prostate cancer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hargrave, Catriona; Deegan, Timothy; Poulsen, Michael; Bednarz, Tomasz; Harden, Fiona; Mengersen, Kerrie</p> <p>2018-05-17</p> <p>To develop a method for scoring online cone-beam CT (CBCT)-to-planning CT image feature alignment to inform prostate image-guided radiotherapy (IGRT) decision-making. The feasibility of incorporating volume variation metric thresholds predictive of delivering planned dose into weighted functions was investigated. Radiation therapists and radiation oncologists participated in workshops where they reviewed prostate CBCT-IGRT case examples and completed a paper-based survey of image feature matching practices. For 36 prostate cancer patients, one daily CBCT was retrospectively contoured and then registered with their plan to simulate the delivered dose if (a) no online setup corrections and (b) online image alignment and setup corrections were performed. Survey results were used to select variables for inclusion in classification and regression tree (CART) and boosted regression tree (BRT) modeling of volume variation metric thresholds predictive of delivering planned dose to the prostate, proximal seminal vesicles (PSV), bladder, and rectum. 
Weighted functions incorporating the CART and BRT results were used to calculate a score of individual tumor and organ-at-risk image feature alignment (FAS_TV_OAR). Scaled and weighted FAS_TV_OAR were then used to calculate a score of overall treatment compliance (FAS_global) for a given CBCT-planning CT registration. The FAS_TV_OAR were assessed for sensitivity, specificity, and predictive power. FAS_global thresholds indicative of high, medium, or low overall treatment plan compliance were determined using coefficients from multiple linear regression analysis. Thirty-two participants completed the prostate CBCT-IGRT survey. While responses demonstrated consensus of practice for preferential ranking of planning CT and CBCT match features in the presence of deformation and rotation, variation existed in the specified thresholds for observed volume differences requiring patient repositioning or repeat bladder and bowel preparation. The CART and BRT modeling indicated that for a given registration, a Dice similarity coefficient >0.80 and >0.60 for the prostate and PSV, respectively, and a maximum Hausdorff distance <8.0 mm for both structures were predictive of delivered dose within ± 5% of planned dose. A normalized volume difference <1.0 and a CBCT anterior rectum wall >1.0 mm anterior to the planning CT anterior rectum wall were predictive of delivered dose >5% of planned rectum dose. A normalized volume difference <0.88, and a CBCT bladder wall >13.5 mm inferior and >5.0 mm posterior to the planning CT bladder, were predictive of delivered dose >5% of planned bladder dose. A FAS_TV_OAR >0 is indicative of delivery of planned dose. For calculated FAS_TV_OAR for the prostate, PSV, bladder, and rectum using test data, sensitivity was 0.56, 0.75, 0.89, and 1.00, respectively; specificity 0.90, 0.94, 0.59, and 1.00, respectively; positive predictive power 0.90, 0.86, 0.53, and 1.00, respectively; and negative predictive power 0.56, 0.89, 0.91, and 1.00, respectively. Thresholds for the calculated FAS_global were low (<60), medium (60-80), and high (>80), with a 27% misclassification rate for the test data. A FAS_global incorporating nested FAS_TV_OAR and volume variation metric thresholds predictive of treatment plan compliance was developed, offering an alternative to pretreatment dose calculations to assess treatment delivery accuracy. © 2018 American Association of Physicists in Medicine.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ThApC.tmp..179L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ThApC.tmp..179L"><span>Inter-decadal change in potential predictability of the East Asian summer monsoon</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Jiao; Ding, Ruiqiang; Wu, Zhiwei; Zhong, Quanjia; Li, Baosheng; Li, Jianping</p> <p>2018-05-01</p> <p>The significant inter-decadal change in potential predictability of the East Asian summer monsoon (EASM) has been investigated using the signal-to-noise ratio method. The relatively low potential predictability appears from the early 1950s through the late 1970s and during the early 2000s, whereas the potential predictability is relatively high from the early 1980s through the late 1990s. The inter-decadal change in potential predictability of the EASM can be attributed mainly to variations in the external signal of the EASM. 
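The feature alignment score described in the radiotherapy entry above is built from per-structure metrics such as the Dice similarity coefficient and the maximum Hausdorff distance, judged against the thresholds it reports (e.g., Dice >0.80 for the prostate and a maximum Hausdorff distance <8.0 mm). The sketch below computes those two metrics for a pair of contours and applies a simple pass/fail check; the example masks are synthetic, and the weighting into FAS_TV_OAR and FAS_global is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def max_hausdorff(points_a, points_b):
    """Symmetric (maximum) Hausdorff distance between two point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Synthetic planning-CT and CBCT prostate masks on a coarse 2-D grid (1 mm spacing).
yy, xx = np.mgrid[0:60, 0:60]
plan_mask = (xx - 30) ** 2 + (yy - 30) ** 2 < 15 ** 2
cbct_mask = (xx - 32) ** 2 + (yy - 31) ** 2 < 14 ** 2   # slightly shifted and shrunk

dsc = dice(plan_mask, cbct_mask)
hd = max_hausdorff(np.argwhere(plan_mask), np.argwhere(cbct_mask))
ok = dsc > 0.80 and hd < 8.0   # prostate thresholds quoted in the entry above
print(f"Dice = {dsc:.2f}, max Hausdorff = {hd:.1f} mm, within thresholds: {ok}")
```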
The latter is mostly caused by the El Niño-Southern Oscillation (ENSO) inter-decadal variability. As a major external signal of the EASM, the ENSO inter-decadal variability experiences phase transitions from negative to positive phases in the late 1970s, and to negative phases in the late 1990s. Additionally, ENSO is generally strong (weak) during a positive (negative) phase of the ENSO inter-decadal variability. The strong ENSO is expected to have a greater influence on the EASM, and vice versa. As a result, the potential predictability of the EASM tends to be high (low) during a positive (negative) phase of the ENSO inter-decadal variability. Furthermore, a suite of Pacific Pacemaker experiments suggests that the ENSO inter-decadal variability may be a key pacemaker of the inter-decadal change in potential predictability of the EASM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15818098','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15818098"><span>Clinical evaluation of an inspiratory impedance threshold device during standard cardiopulmonary resuscitation in patients with out-of-hospital cardiac arrest.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aufderheide, Tom P; Pirrallo, Ronald G; Provo, Terry A; Lurie, Keith G</p> <p>2005-04-01</p> <p>To determine whether an impedance threshold device, designed to enhance circulation, would increase acute resuscitation rates for patients in cardiac arrest receiving conventional manual cardiopulmonary resuscitation. Prospective, randomized, double-blind, intention-to-treat. Out-of-hospital trial conducted in the Milwaukee, WI, emergency medical services system. Adults in cardiac arrest of presumed cardiac etiology. On arrival of advanced life support, patients were treated with standard cardiopulmonary resuscitation combined with either an active or a sham impedance threshold device. We measured safety and efficacy of the impedance threshold device; the primary end point was intensive care unit admission. Statistical analyses performed included the chi-square test and multivariate regression analysis. One hundred sixteen patients were treated with a sham impedance threshold device, and 114 patients were treated with an active impedance threshold device. Overall intensive care unit admission rates were 17% with the sham device vs. 25% in the active impedance threshold device (p = .13; odds ratio, 1.64; 95% confidence interval, 0.87, 3.10). Patients in the subgroup presenting with pulseless electrical activity had intensive care unit admission and 24-hr survival rates of 20% and 12% in sham (n = 25) vs. 52% and 30% in active impedance threshold device groups (n = 27) (p = .018, odds ratio, 4.31; 95% confidence interval, 1.28, 14.5, and p = .12, odds ratio, 3.09; 95% confidence interval, 0.74, 13.0, respectively). A post hoc analysis of patients with pulseless electrical activity at any time during the cardiac arrest revealed that intensive care unit and 24-hr survival rates were 20% and 11% in the sham (n = 56) vs. 41% and 27% in the active impedance threshold device groups (n = 49) (p = .018, odds ratio, 2.82; 95% confidence interval, 1.19, 6.67, and p = .037, odds ratio, 3.01; 95% confidence interval, 1.07, 8.96, respectively). 
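The impedance-threshold-device trial above reports its effects as odds ratios with 95% confidence intervals. A minimal sketch of that calculation from a 2x2 table is shown below, using the standard Woolf (log odds ratio) interval; the counts are illustrative approximations of the reported ICU-admission proportions, not the trial's exact data.

```python
import math

# Hypothetical 2x2 table (rows: active vs sham device; columns: admitted vs not),
# approximated from the ~25% vs ~17% ICU admission rates quoted above.
a, b = 28, 86    # active device: admitted, not admitted
c, d = 20, 96    # sham device:   admitted, not admitted

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```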
There were no statistically significant differences in outcomes for patients presenting in ventricular fibrillation and asystole. Adverse event and complication rates were also similar. During this first clinical trial of the impedance threshold device during standard cardiopulmonary resuscitation, use of the new device more than doubled short-term survival rates in patients presenting with pulseless electrical activity. A larger clinical trial is underway to determine the potential longer term benefits of the impedance threshold device in cardiac arrest.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24406021','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24406021"><span>An association between neighbourhood wealth inequality and HIV prevalence in sub-Saharan Africa.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brodish, Paul Henry</p> <p>2015-05-01</p> <p>This paper investigates whether community-level wealth inequality predicts HIV serostatus using DHS household survey and HIV biomarker data for men and women ages 15-59 pooled from six sub-Saharan African countries with HIV prevalence rates exceeding 5%. The analysis relates the binary dependent variable HIV-positive serostatus and two weighted aggregate predictors generated from the DHS Wealth Index: the Gini coefficient, and the ratio of the wealth of households in the top 20% wealth quintile to that of those in the bottom 20%. In separate multilevel logistic regression models, wealth inequality is used to predict HIV prevalence within each statistical enumeration area, controlling for known individual-level demographic predictors of HIV serostatus. Potential individual-level sexual behaviour mediating variables are added to assess attenuation, and ordered logit models investigate whether the effect is mediated through extramarital sexual partnerships. Both the cluster-level wealth Gini coefficient and wealth ratio significantly predict positive HIV serostatus: a 1 point increase in the cluster-level Gini coefficient and in the cluster-level wealth ratio is associated with a 2.35 and 1.3 times increased likelihood of being HIV positive, respectively, controlling for individual-level demographic predictors, and associations are stronger in models including only males. Adding sexual behaviour variables attenuates the effects of both inequality measures. Reporting eleven plus lifetime sexual partners increases the odds of being HIV positive over five-fold. The likelihood of having more extramarital partners is significantly higher in clusters with greater wealth inequality measured by the wealth ratio. Disaggregating logit models by sex indicates important risk behaviour differences. Household wealth inequality within DHS clusters predicts HIV serostatus, and the relationship is partially mediated by more extramarital partners. 
These results emphasize the importance of incorporating higher-level contextual factors, investigating behavioural mediators, and disaggregating by sex in assessing HIV risk in order to uncover potential mechanisms of action and points of preventive intervention.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4852138','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4852138"><span>An association between neighborhood wealth inequality and HIV prevalence in sub-Saharan Africa</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Brodish, Paul Henry</p> <p>2016-01-01</p> <p>Summary This paper investigates whether community-level wealth inequality predicts HIV serostatus, using DHS household survey and HIV biomarker data for men and women ages 15-59 pooled from six sub-Saharan African countries with HIV prevalence rates exceeding five percent. The analysis relates the binary dependent variable HIV positive serostatus and two weighted aggregate predictors generated from the DHS Wealth Index: the Gini coefficient, and the ratio of the wealth of households in the top 20% wealth quintile to that of those in the bottom 20%. In separate multilevel logistic regression models, wealth inequality is used to predict HIV prevalence within each SEA, controlling for known individual-level demographic predictors of HIV serostatus. Potential individual-level sexual behavior mediating variables are added to assess attenuation, and ordered logit models investigate whether the effect is mediated through extramarital sexual partnerships. Both the cluster-level wealth Gini coefficient and wealth ratio significantly predict positive HIV serostatus: a 1 point increase in the cluster-level Gini coefficient and in the cluster-level wealth ratio is associated with a 2.35 and 1.3 times increased likelihood of being HIV positive, respectively, controlling for individual-level demographic predictors, and associations are stronger in models including only males. Adding sexual behavior variables attenuates the effects of both inequality measures. Reporting 11 plus lifetime sexual partners increases the odds of being HIV positive over five-fold. The likelihood of having more extramarital partners is significantly higher in clusters with greater wealth inequality measured by the wealth ratio. Disaggregating logit models by sex indicates important risk behavior differences. Household wealth inequality within DHS clusters predicts HIV serostatus, and the relationship is partially mediated by more extramarital partners. 
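Both wealth-inequality entries above use two cluster-level predictors derived from the DHS Wealth Index: a Gini coefficient and the ratio of wealth held by the top 20% of households to that held by the bottom 20%. The sketch below computes both quantities for one cluster from a vector of household wealth scores; the scores are synthetic.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x)) / (n * x.sum()) - (n + 1.0) / n

def quintile_ratio(x):
    """Total wealth of the top 20% of households over that of the bottom 20%."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, x.size // 5)
    return x[-k:].sum() / x[:k].sum()

# Synthetic household wealth scores for one enumeration cluster.
rng = np.random.default_rng(1)
wealth = rng.lognormal(mean=0.0, sigma=0.8, size=25)

print(f"Gini = {gini(wealth):.3f}, top/bottom quintile ratio = {quintile_ratio(wealth):.2f}")
```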
These results emphasize the importance of incorporating higher-level contextual factors, investigating behavioral mediators, and disaggregating by sex in assessing HIV risk in order to uncover potential mechanisms of action and points of preventive intervention PMID:24406021</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24470650','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24470650"><span>Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C</p> <p>2014-02-01</p> <p>To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27789074','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27789074"><span>Prediction of hypotension during spinal anesthesia for elective cesarean section by altered heart rate variability induced by postural change.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sakata, K; Yoshimura, N; Tanabe, K; Kito, K; Nagase, K; Iida, H</p> <p>2017-02-01</p> <p>Maternal hypotension is a common complication during cesarean section performed under spinal anesthesia. Changes in maternal heart rate with postural changes or values of heart rate variability have been reported to predict hypotension. Therefore, we hypothesized that changes in heart rate variability due to postural changes can predict hypotension. A total of 45 women scheduled to undergo cesarean section under spinal anesthesia were enrolled. 
A postural change test was performed the day before cesarean section. The ratio of the power of low and high frequency components contributing to heart rate variability was assessed in the order of supine, left lateral, and supine. Patients who exhibited a ⩾two-fold increase in the low-to-high frequency ratio when moving to supine from the lateral position were assigned to the postural change test-positive group. According to the findings of the postural change test, patients were assigned to the positive (n=22) and negative (n=23) groups, respectively. Hypotension occurred in 35/45 patients, of whom 21 (60%) were in the positive group and 14 (40%) were in the negative group. The incidence of hypotension was greater in the positive group (P<0.01). The total dose of ephedrine was greater in the positive group (15±11 vs. 7±7mg, P=0.005). The area under the receiver operating characteristic curve was 0.76 for the postural change test as a predictor of hypotension. The postural change test with heart rate variability analysis may be used to predict the risk of hypotension during spinal anesthesia for cesarean section. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5148242','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5148242"><span>A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Landy, Michael S.</p> <p>2016-01-01</p> <p>Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. SIGNIFICANCE STATEMENT Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. 
We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the behavioral data. PMID:27807167</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27807167','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27807167"><span>A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Peng; Landy, Michael S</p> <p>2016-11-02</p> <p>Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the behavioral data. 
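The two listings above test behavioral predictions of the standard drift-diffusion account of sensory discrimination. To make the first prediction concrete, the sketch below simulates that baseline model (noisy evidence accumulating to a symmetric bound) and shows mean reaction time falling as the drift rate, i.e., the signal-to-noise ratio, increases. The drift rates, bound, noise level, and trial counts are arbitrary illustrative choices, not parameters from these studies.

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, sigma=1.0, dt=0.002, n_trials=500, seed=0):
    """Simulate a symmetric-bound drift-diffusion model; return mean RT and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= bound)   # upper bound taken as the correct response
    return np.mean(rts), np.mean(correct)

for drift in (0.5, 1.0, 2.0):        # increasing signal strength
    rt, acc = simulate_ddm(drift)
    print(f"drift = {drift:.1f}: mean RT = {rt:.3f} s, accuracy = {acc:.2f}")
```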
Copyright © 2016 the authors 0270-6474/16/3611259-16$15.00/0.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5868335','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5868335"><span>Modeling the Environmental Suitability for Aedes (Stegomyia) aegypti and Aedes (Stegomyia) albopictus (Diptera: Culicidae) in the Contiguous United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Johnson, Tammi L.; Haque, Ubydul; Monaghan, Andrew J.; Eisen, Lars; Hahn, Micah B.; Hayden, Mary H.; Savage, Harry M.; McAllister, Janet; Mutebi, John-Paul; Eisen, Rebecca J.</p> <p>2018-01-01</p> <p>The mosquitoes Aedes (Stegomyia) aegypti (L.)(Diptera:Culicidae) and Ae. (Stegomyia) albopictus (Skuse) (Diptera:Culicidae) transmit dengue, chikungunya, and Zika viruses and represent a growing public health threat in parts of the United States where they are established. To complement existing mosquito presence records based on discontinuous, non-systematic surveillance efforts, we developed county-scale environmental suitability maps for both species using maximum entropy modeling to fit climatic variables to county presence records from 1960–2016 in the contiguous United States. The predictive models for Ae. aegypti and Ae. albopictus had an overall accuracy of 0.84 and 0.85, respectively. Cumulative growing degree days (GDDs) during the winter months, an indicator of overall warmth, was the most important predictive variable for both species and was positively associated with environmental suitability. The number (percentage) of counties classified as environmentally suitable, based on models with 90 or 99% sensitivity, ranged from 1,443 (46%) to 2,209 (71%) for Ae. aegypti and from 1,726 (55%) to 2,329 (75%) for Ae. albopictus. Increasing model sensitivity results in more counties classified as suitable, at least for summer survival, from which there are no mosquito records. We anticipate that Ae. aegypti and Ae. albopictus will be found more commonly in counties classified as suitable based on the lower 90% sensitivity threshold compared with the higher 99% threshold. Counties predicted suitable with 90% sensitivity should therefore be a top priority for expanded mosquito surveillance efforts while still keeping in mind that Ae. aegypti and Ae. albopictus may be introduced, via accidental transport of eggs or immatures, and potentially proliferate during the warmest part of the year anywhere within the geographic areas delineated by the 99% sensitivity model. PMID:29029153</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3599274','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3599274"><span>Cumulative lactate and hospital mortality in ICU patients</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background Both hyperlactatemia and persistence of hyperlactatemia have been associated with bad outcome. We compared lactate and lactate-derived variables in outcome prediction. Methods Retrospective observational study. 
Case records from 2,251 consecutive intensive care unit (ICU) patients admitted between 2001 and 2007 were analyzed. Baseline characteristics, all lactate measurements, and in-hospital mortality were recorded. The time integral of arterial blood lactate levels above the upper normal threshold of 2.2 mmol/L (lactate-time-integral), maximum lactate (max-lactate), and time-to-first-normalization were calculated. Survivors and nonsurvivors were compared and receiver operating characteristic (ROC) analysis was applied. Results A total of 20,755 lactate measurements were analyzed. Data are shown as median [interquartile range]. In nonsurvivors (n = 405) lactate-time-integral (192 [0–1881] min·mmol/L) and time-to-first-normalization (44.0 [0–427] min) were higher than in hospital survivors (n = 1846; 0 [0–134] min·mmol/L and 0 [0–75] min, respectively; all p < 0.001). Normalization of lactate <6 hours after ICU admission revealed better survival compared with normalization of lactate >6 hours (mortality 16.6% vs. 24.4%; p < 0.001). The AUC of the ROC curves to predict in-hospital mortality was largest for max-lactate, whereas it was not different among all other lactate-derived variables (all p > 0.05). The area under the ROC curves for admission lactate and lactate-time-integral was not different (p = 0.36). Conclusions Hyperlactatemia is associated with in-hospital mortality in a heterogeneous ICU population. In our patients, lactate peak values predicted in-hospital mortality as well as the lactate-time-integral of arterial blood lactate levels above the upper normal threshold. PMID:23446002</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUOSAH54A0091L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUOSAH54A0091L"><span>Reduced Salinity Improves Marine Food Availability With Positive Feedbacks on pH in a Tidally-Dominated Estuary</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lowe, A. T.; Roberts, E. A.; Galloway, A. W. E.</p> <p>2016-02-01</p> <p>Coastal regions around the world are changing rapidly, generating many physiological stressors for marine organisms. Food availability, a major factor determining physiological condition of marine organisms, in these systems reflects the influence of biological and environmental factors, and will likely respond dramatically to long-term changes. Using observations of phytoplankton, detritus, and their corresponding fatty acids and stable isotopes of carbon, nitrogen and sulfur, we identified environmental drivers of pelagic food availability and quality along a salinity gradient in a large tidally influenced estuary (San Juan Archipelago, Salish Sea, USA). Variation in chlorophyll a (Chl a), biomarkers and environmental conditions exhibited a similar range at both tidal and seasonal scales, highlighting a tide-related mechanism controlling productivity that is important to consider for long-term monitoring. Multiple parameters of food availability were inversely and non-linearly correlated to salinity, such that availability of high-quality (based on abundance, essential fatty acid concentration and C:N) seston increased below a salinity threshold of 30. The increased marine productivity was associated with increased pH and dissolved oxygen (DO) at lower salinity. 
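The ICU-lactate entry above scores hyperlactatemia as the time integral of arterial lactate above the upper normal threshold of 2.2 mmol/L, together with the time to first normalization. A minimal sketch of both calculations from an irregularly sampled lactate series is given below; the sample series is invented.

```python
import numpy as np

THRESHOLD = 2.2   # upper normal limit, mmol/L (as in the study above)

def lactate_time_integral(t_min, lactate):
    """Trapezoidal integral of lactate excess above THRESHOLD (min*mmol/L)."""
    excess = np.clip(np.asarray(lactate, float) - THRESHOLD, 0.0, None)
    dt = np.diff(np.asarray(t_min, float))
    return float(np.sum(0.5 * (excess[:-1] + excess[1:]) * dt))

def time_to_first_normalization(t_min, lactate):
    """First sampling time (min) at which lactate is at or below THRESHOLD."""
    for t, lac in zip(t_min, lactate):
        if lac <= THRESHOLD:
            return t
    return None

# Invented measurement times (minutes from ICU admission) and lactate values.
t = [0, 60, 180, 360, 720, 1440]
lac = [5.1, 4.2, 3.0, 2.4, 2.0, 1.6]

print(f"lactate-time-integral = {lactate_time_integral(t, lac):.0f} min*mmol/L")
print(f"time to first normalization = {time_to_first_normalization(t, lac)} min")
```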
Based on this observation we predicted that a decrease of salinity to below the threshold would result in higher Chl a, temperature, DO and pH across a range of temporal and spatial scales, and tested the prediction with a meta-analysis of available data. At all scales, these variables showed significant and consistent increases related to the salinity threshold. This finding provides important context to the increased frequency of below-threshold salinity over the last 71 years in this region, suggesting greater food availability with positive feedbacks on DO and pH. Together, these findings indicate that many of the environmental factors predicted to increase physiological stress to benthic suspension feeders (e.g. decreased salinity) may simultaneously and paradoxically improve conditions for benthic organisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26871033','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26871033"><span>Connectivity percolation in suspensions of attractive square-well spherocylinders.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dixit, Mohit; Meyer, Hugues; Schilling, Tanja</p> <p>2016-01-01</p> <p>We have studied the connectivity percolation transition in suspensions of attractive square-well spherocylinders by means of Monte Carlo simulation and connectedness percolation theory. In the 1980s the percolation threshold of slender fibers has been predicted to scale as the fibers' inverse aspect ratio [Phys. Rev. B 30, 3933 (1984)PRBMDO1098-012110.1103/PhysRevB.30.3933]. The main finding of our study is that the attractive spherocylinder system reaches this inverse scaling regime at much lower aspect ratios than found in suspensions of hard spherocylinders. We explain this difference by showing that third virial corrections of the pair connectedness functions, which are responsible for the deviation from the scaling regime, are less important for attractive potentials than for hard particles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.2879C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.2879C"><span>Developing a dengue early warning system using time series model: Case study in Tainan, Taiwan</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Xiao-Wei; Jan, Chyan-Deng; Wang, Ji-Shang</p> <p>2017-04-01</p> <p>Dengue fever (DF) is a climate-sensitive disease that has been emerging in southern regions of Taiwan over the past few decades, causing a significant health burden to affected areas. This study aims to propose a predictive model to implement an early warning system so as to enhance dengue surveillance and control in Tainan, Taiwan. The Seasonal Autoregressive Integrated Moving Average (SARIMA) model was used herein to forecast dengue cases. Temporal correlation between dengue incidences and climate variables were examined by Pearson correlation analysis and Cross-correlation tests in order to identify key determinants to be included as predictors. The dengue surveillance data between 2000 and 2009, as well as their respective climate variables were then used as inputs for the model. 
We validated the model by forecasting the number of dengue cases expected to occur each week between January 1, 2010 and December 31, 2015. In addition, we analyzed historical dengue trends and found that 25 cases occurring in one week was a trigger point that often led to a dengue outbreak. This threshold point was combined with the season-based framework put forth by the World Health Organization to create a more accurate epidemic threshold for a Tainan-specific warning system. A Seasonal ARIMA model with the general form: (1,0,5)(1,1,1)52 is identified as the most appropriate model based on lowest AIC, and was proven significant in the prediction of observed dengue cases. Based on the correlation coefficient, Lag-11 maximum 1-hr rainfall (r=0.319, P<0.05) and Lag-11 minimum temperature (r=0.416, P<0.05) are found to be the most positively correlated climate variables. Comparing the four multivariate models(i.e.1, 4, 9 and 13 weeks ahead), we found that including the climate variables improves the prediction RMSE as high as 3.24%, 10.39%, 17.96%, 21.81% respectively, in contrast to univariate models. Furthermore, the ability of the four multivariate models to determine whether the epidemic threshold would be exceeded in any given week during the forecasting period of 2010-2015 was analyzed using a contingency table. The 4 weeks-ahead approach was the most appropriate for an operational public health response with a 78.7% hit rate and 0.7% false alarm rate. Our findings indicate that SARIMA model is an ideal model for detecting outbreaks as it has high sensitivity and low risk of false alarms. Accurately forecasting future trends will provide valuable time to activate dengue surveillance and control in Tainan, Taiwan. We conclude that this timely dengue early warning system will enable public health services to allocate limited resources more effectively, and public health officials to adjust dengue emergency response plans to their maximum capabilities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24758681','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24758681"><span>Prostate cancer: role of pretreatment multiparametric 3-T MRI in predicting biochemical recurrence after radical prostatectomy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Park, Jung Jae; Kim, Chan Kyo; Park, Sung Yoon; Park, Byung Kwan; Lee, Hyun Moo; Cho, Seong Whi</p> <p>2014-05-01</p> <p>The purpose of this study is to retrospectively investigate whether pretreatment multiparametric MRI findings can predict biochemical recurrence in patients who underwent radical prostatectomy (RP) for localized prostate cancer. In this study, 282 patients with biopsy-proven prostate cancer who received RP underwent pretreatment MRI using a phased-array coil at 3 T, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced MRI (DCE-MRI). MRI variables included apparent tumor presence on combined imaging sequences, extracapsular extension, and tumor size on DWI or DCE-MRI. Clinical variables included baseline prostate-specific antigen (PSA) level, clinical stage, and Gleason score at biopsy. The relationship between clinical and imaging variables and biochemical recurrence was evaluated using Cox regression analysis. 
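The dengue early-warning entry above fits a seasonal ARIMA of the general form (1,0,5)(1,1,1) with a 52-week season and flags weeks whose forecast exceeds the 25-cases-per-week epidemic threshold. The sketch below shows that workflow with statsmodels on a synthetic weekly case series; the simulated data, fit settings, and 4-week horizon are illustrative assumptions, not the study's surveillance data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

EPIDEMIC_THRESHOLD = 25   # cases per week, as used for the warning system above

# Synthetic weekly dengue counts with an annual (52-week) seasonal cycle.
rng = np.random.default_rng(42)
weeks = pd.date_range("2000-01-02", periods=520, freq="W")
season = 12 * (1 + np.sin(2 * np.pi * np.arange(520) / 52.0))
cases = pd.Series(np.round(season + rng.poisson(3, 520)), index=weeks)

# Seasonal ARIMA (1,0,5)(1,1,1)_52, the general form identified in the entry.
model = SARIMAX(cases, order=(1, 0, 5), seasonal_order=(1, 1, 1, 52))
fit = model.fit(disp=False)

# Four-weeks-ahead forecast and epidemic-threshold check.
forecast = fit.forecast(steps=4)
for week, value in forecast.items():
    flag = "ALERT" if value > EPIDEMIC_THRESHOLD else "ok"
    print(f"{week.date()}: predicted {value:.1f} cases/week [{flag}]")
```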
After a median follow-up of 26 months, biochemical recurrence developed in 61 patients (22%). Univariate analysis revealed that all the imaging and clinical variables were significantly associated with biochemical recurrence (p < 0.01). On multivariate analysis, however, baseline PSA level (p = 0.002), Gleason score at biopsy (p = 0.024), and apparent tumor presence on combined T2WI, DWI, and DCE-MRI (p = 0.047) were the only significant independent predictors of biochemical recurrence. Of the independent predictors, apparent tumor presence on combined T2WI, DWI, and DCE-MRI showed the highest hazard ratio (2.38) compared with baseline PSA level (hazard ratio, 1.05) and Gleason score at biopsy (hazard ratio, 1.34). The apparent tumor presence on combined T2WI, DWI, and DCE-MRI of pretreatment MRI is an independent predictor of biochemical recurrence after RP. This finding may be used to construct a predictive model for biochemical recurrence after surgery.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4992847','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4992847"><span>Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong</p> <p>2016-01-01</p> <p>The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds which belong to the state threshold are determined by the ‘general separatrix’ in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of separatrix was assumed as the function of both states and stimuli and the previously assumed threshold evolving equation versus time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability. 
PMID:27546614</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27546614','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27546614"><span>Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong</p> <p>2016-08-22</p> <p>The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds which belong to the state threshold are determined by the 'general separatrix' in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of separatrix was assumed as the function of both states and stimuli and the previously assumed threshold evolving equation versus time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28531336','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28531336"><span>Loop Gain Predicts the Response to Upper Airway Surgery in Patients With Obstructive Sleep Apnea.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Joosten, Simon A; Leong, Paul; Landry, Shane A; Sands, Scott A; Terrill, Philip I; Mann, Dwayne; Turton, Anthony; Rangaswamy, Jhanavi; Andara, Christopher; Burgess, Glen; Mansfield, Darren; Hamilton, Garun S; Edwards, Bradley A</p> <p>2017-07-01</p> <p>Upper airway surgery is often recommended to treat patients with obstructive sleep apnea (OSA) who cannot tolerate continuous positive airways pressure. However, the response to surgery is variable, potentially because it does not improve the nonanatomical factors (ie, loop gain [LG] and arousal threshold) causing OSA. Measuring these traits clinically might predict responses to surgery. Our primary objective was to test the value of LG and arousal threshold to predict surgical success defined as 50% reduction in apnea-hypopnea index (AHI) and AHI <10 events/hour post surgery. We retrospectively analyzed data from patients who underwent upper airway surgery for OSA (n = 46). Clinical estimates of LG and arousal threshold were calculated from routine polysomnographic recordings presurgery and postsurgery (median of 124 [91-170] days follow-up). Surgery reduced both the AHI (39.1 ± 4.2 vs. 
26.5 ± 3.6 events/hour; p < .005) and estimated arousal threshold (-14.8 [-22.9 to -10.2] vs. -9.4 [-14.5 to -6.0] cmH2O) but did not alter LG (0.45 ± 0.08 vs. 0.45 ± 0.12; p = .278). Responders to surgery had a lower baseline LG (0.38 ± 0.02 vs. 0.48 ± 0.01, p < .05) and were younger (31.0 [27.3-42.5] vs. 43.0 [33.0-55.3] years, p < .05) than nonresponders. Lower LG remained a significant predictor of surgical success after controlling for covariates (logistic regression p = .018; receiver operating characteristic area under curve = 0.80). Our study provides proof-of-principle that upper airway surgery most effectively resolves OSA in patients with lower LG. Predicting the failure of surgical treatment, consequent to less stable ventilatory control (elevated LG), can be achieved in the clinic and may facilitate avoidance of surgical failures. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..96a3001F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..96a3001F"><span>Contact of a spherical probe with a stretched rubber substrate</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Frétigny, Christian; Chateauminois, Antoine</p> <p>2017-07-01</p> <p>We report on a theoretical and experimental investigation of the normal contact of stretched neo-Hookean substrates with rigid spherical probes. Starting from a published formulation of the surface Green's function for incremental displacements on a prestretched, neo-Hookean, substrate [J. Mech. Phys. Solids 56, 2957 (2008), 10.1016/j.jmps.2008.07.002], a model is derived for both adhesive and nonadhesive contacts. The shape of the elliptical contact area together with the contact load and the contact stiffness are predicted as a function of the in-plane stretch ratios λx and λy of the substrate. The validity of this model is assessed by contact experiments carried out using a uniaxially stretched silicone rubber. For stretch ratios below about 1.25, good agreement is observed between theory and experiments. Above this threshold, some deviations from the theoretical predictions are induced as a result of the departure of the mechanical response of the silicone rubber from the neo-Hookean description embedded in the model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5453264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5453264"><span>Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al2O3 Catalyst Using Response Surface Methodology (RSM)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed</p> <p>2014-01-01</p> <p>In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. 
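The loop-gain entry above reports that lower baseline LG predicts surgical response, with an ROC area under the curve of 0.80 after logistic regression. The sketch below shows the corresponding calculation with scikit-learn on synthetic data (loop gain and age as predictors of response); the simulated values are not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort: lower loop gain and younger age made more likely to respond.
rng = np.random.default_rng(7)
n = 46
loop_gain = rng.normal(0.45, 0.10, n)
age = rng.normal(40, 10, n)
logit = -(loop_gain - 0.45) * 20 - (age - 40) * 0.05
responder = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([loop_gain, age])
clf = LogisticRegression().fit(X, responder)

auc = roc_auc_score(responder, clf.predict_proba(X)[:, 1])
print(f"in-sample ROC AUC = {auc:.2f}")
print(f"coefficients (loop gain, age) = {clf.coef_[0]}")
```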
The effects of five independent variables, namely the temperature (X1), the flow rate (X2), the catalyst weight (X3), the catalyst loading (X4), and the glycerol-water molar ratio (X5), on the H2 yield (Y1) and the conversion of glycerol to gaseous products (Y2) were explored. Using multiple regression analysis, the experimental results for the H2 yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20%, and a glycerol-water molar ratio of approximately 12, where the H2 yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied. PMID:28788567</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2161972','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2161972"><span>Variability of argon laser-induced sensory and pain thresholds on human oral mucosa and skin.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Svensson, P.; Bjerring, P.; Arendt-Nielsen, L.; Kaaber, S.</p> <p>1991-01-01</p> <p>The variability of laser-induced pain perception on human oral mucosa and hairy skin was investigated in order to establish a new method for evaluation of pain in the orofacial region. A high-energy argon laser was used for experimental pain stimulation, and sensory and pain thresholds were determined. The intra-individual coefficients of variation for oral thresholds were comparable to those for cutaneous thresholds. However, inter-individual variation was smaller for oral thresholds, which could be due to larger variation in cutaneous optical properties. 
Variability of argon laser-induced sensory and pain thresholds on human oral mucosa and skin.

    PubMed Central

    Svensson, P.; Bjerring, P.; Arendt-Nielsen, L.; Kaaber, S.

    1991-01-01

    The variability of laser-induced pain perception on human oral mucosa and hairy skin was investigated in order to establish a new method for evaluation of pain in the orofacial region. A high-energy argon laser was used for experimental pain stimulation, and sensory and pain thresholds were determined. The intra-individual coefficients of variation for oral thresholds were comparable to cutaneous thresholds. However, inter-individual variation was smaller for oral thresholds, which could be due to larger variation in cutaneous optical properties. The short-term and 24-hr changes in thresholds on both surfaces were less than 9%. The results indicate that habituation to laser thresholds may account for part of the intra-individual variation observed. However, the subjective ratings of the intensity of the laser stimuli were constant. Thus, oral thresholds may, like cutaneous thresholds, be used for assessment and quantification of analgesic efficacies and to investigate various pain conditions. PMID:1814248

Proposal for defining the relevance of drug accumulation derived from single dose study data for modified release dosage forms

    PubMed Central

    Scheerans, Christian; Heinig, Roland; Mueck, Wolfgang

    2015-01-01

    Recently, the European Medicines Agency (EMA) published the new draft guideline on the pharmacokinetic and clinical evaluation of modified release (MR) formulations. The draft guideline contains the new requirement of performing multiple dose (MD) bioequivalence studies, in the case when the MR formulation is expected to show 'relevant' drug accumulation at steady state (SS). This new requirement reveals three fundamental issues, which are discussed in the current work: first, measurement for the extent of drug accumulation (MEDA) predicted from single dose (SD) study data; second, its relationship with the percentage residual area under the plasma concentration-time curve (AUC) outside the dosing interval (τ) after SD administration, %AUC(τ-∞)SD; and third, the rationale for a threshold of %AUC(τ-∞)SD that predicts 'relevant' drug accumulation at SS. This work revealed that the accumulation ratio RA,AUC, derived from the ratio of the time-averaged plasma concentrations during τ at SS and after SD administration, respectively, is the 'preferred' MEDA for MR formulations. A causal relationship was derived between %AUC(τ-∞)SD and RA,AUC, which is valid for any drug (product) that shows (dose- and time-) linear pharmacokinetics regardless of the shape of the plasma concentration-time curve. Considering AUC thresholds from other guidelines together with the causal relationship between %AUC(τ-∞)SD and RA,AUC indicates that values of %AUC(τ-∞)SD ≤ 20%, resulting in RA,AUC ≤ 1.25, can be considered as leading to non-relevant drug accumulation. Hence, the authors suggest that 20% for %AUC(τ-∞)SD is a reasonable threshold and selection criterion between SD or MD study designs for bioequivalence studies of new MR formulations. © 2014 The Authors Biopharmaceutics & Drug Disposition Published by John Wiley & Sons Ltd. PMID:25327367
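    The 20% / 1.25 correspondence quoted in this record follows from superposition for linear pharmacokinetics; the short derivation below is one way to make that arithmetic explicit (notation follows the abstract; this is a reconstruction, not a quotation of the paper's own algebra).

        % Under linear PK, AUC over one dosing interval at steady state equals AUC(0-inf) after a single dose, so
        \[
        R_{A,\mathrm{AUC}}
          = \frac{\mathrm{AUC}(0\text{-}\tau)_{SS}}{\mathrm{AUC}(0\text{-}\tau)_{SD}}
          = \frac{\mathrm{AUC}(0\text{-}\infty)_{SD}}{\mathrm{AUC}(0\text{-}\infty)_{SD}-\mathrm{AUC}(\tau\text{-}\infty)_{SD}}
          = \frac{1}{1-\%\mathrm{AUC}(\tau\text{-}\infty)_{SD}/100}.
        \]
        % Example: %AUC(tau-inf)_SD = 20%  =>  R_A,AUC = 1/(1 - 0.20) = 1.25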
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods

    NASA Astrophysics Data System (ADS)

    Jakubowski, Jacek

    2014-12-01

    The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall that is greater than or equal to the threshold value of 10^5 J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for a sequential, short-term prediction of mining-induced seismic activity.
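    A threshold-exceedance classifier of the kind described above can be prototyped with off-the-shelf gradient boosting. The sketch below uses scikit-learn on synthetic stand-in features (daily mined output, face advance, previous-day energy); the feature names, data and generating rule are illustrative assumptions, not the mine's records or the published model.

        # Sketch: classify whether daily summed tremor energy exceeds 1e5 J (binary target),
        # in the spirit of the boosted-trees model described above. Data are synthetic stand-ins.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.gamma(2.0, 500.0, n),      # daily mined output (t), hypothetical
            rng.normal(3.0, 1.0, n),       # face advance (m/day), hypothetical
            rng.lognormal(9.0, 1.5, n),    # previous-day summed energy (J), hypothetical
        ])
        # Hypothetical generating rule: higher output and prior energy raise exceedance odds
        logit = 0.002 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * np.log10(X[:, 2]) - 6.0
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # 1 = energy >= 1e5 J

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
        clf.fit(X_tr, y_tr)
        print("validation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))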
Uncertainty in Wildfire Behavior

    NASA Astrophysics Data System (ADS)

    Finney, M.; Cohen, J. D.

    2013-12-01

    The challenge of predicting or modeling fire behavior is well recognized by scientists and managers who attempt predictions of fire spread rate or growth. At the scale of the spreading fire, the uncertainty in winds, moisture, fuel structure, and fire location makes accurate predictions difficult, and the non-linear response of fire spread to these conditions means that average behavior is poorly represented by average environmental parameters. Even more difficult are estimations of threshold behaviors (e.g. spread/no-spread, crown fire initiation, ember generation and spotting) because the fire responds as a step-function to small changes in one or more environmental variables, translating to dynamical feedbacks and unpredictability. Recent research shows that ignition of fuel particles, itself a threshold phenomenon, depends on flame contact, which is not steady or uniform. Recent studies of flame structure in both spreading and stationary fires reveal that much of the non-steadiness of the flames as they contact fuel particles results from buoyant instabilities that produce quasi-periodic flame structures. With fuel particle ignition produced by time-varying heating and short-range flame contact, future improvements in fire behavior modeling will likely require statistical approaches to deal with the uncertainty at all scales, including the level of heat transfer, the fuel arrangement, and weather.

Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

NMR metabolomic analysis of dairy cows reveals milk glycerophosphocholine to phosphocholine ratio as prognostic biomarker for risk of ketosis.

    PubMed

    Klein, Matthias S; Buttchereit, Nina; Miemczyk, Sebastian P; Immervoll, Ann-Kathrin; Louis, Caridad; Wiedemann, Steffi; Junge, Wolfgang; Thaller, Georg; Oefner, Peter J; Gronwald, Wolfram

    2012-02-03

    Ketosis is a common metabolic disease in dairy cows. Diagnostic markers for ketosis such as acetone and beta-hydroxybutyric acid (BHBA) are known, but disease prediction remains an unsolved challenge. Milk is a steadily available biofluid and routinely collected on a daily basis. This high availability makes milk superior to blood or urine samples for diagnostic purposes.
    In this contribution, we show that high milk glycerophosphocholine (GPC) levels and high ratios of GPC to phosphocholine (PC) allow for the reliable selection of healthy and metabolically stable cows for breeding purposes. Throughout lactation, high GPC values are connected with a low ketosis incidence. During the first month of lactation, molar GPC/PC ratios equal to or greater than 2.5 indicate a very low risk for developing ketosis. This threshold was validated for different breeds (Holstein-Friesian, Brown Swiss, and Simmental Fleckvieh) and for animals in different lactations, with observed odds ratios between 1.5 and 2.38. In contrast to acetone and BHBA, these measures are independent of the acute disease status. A possible explanation for the predictive effect is that GPC and PC are measures for the ability to break down phospholipids as a fatty acid source to meet the enhanced energy requirements of early lactation.

Application of empirical predictive modeling using conventional and alternative fecal indicator bacteria in eastern North Carolina waters

    USGS Publications Warehouse

    Gonzalez, Raul; Conn, Kathleen E.; Crosswell, Joey; Noble, Rachel

    2012-01-01

    Coastal and estuarine waters are the site of intense anthropogenic influence with concomitant use for recreation and seafood harvesting. Therefore, coastal and estuarine water quality has a direct impact on human health. In eastern North Carolina (NC) there are over 240 recreational and 1025 shellfish harvesting water quality monitoring sites that are regularly assessed. Because of the large number of sites, sampling frequency is often only on a weekly basis. This frequency, along with an 18-24 h incubation time for fecal indicator bacteria (FIB) enumeration via culture-based methods, reduces the efficiency of the public notification process. In states like NC where beach monitoring resources are limited but historical data are plentiful, predictive models may offer an improvement for monitoring and notification by providing real-time FIB estimates. In this study, water samples were collected during 12 dry (n = 88) and 13 wet (n = 66) weather events at up to 10 sites. Statistical predictive models for Escherichia coli (EC), enterococci (ENT), and members of the Bacteroidales group were created and subsequently validated. Our results showed that models for EC and ENT (adjusted R2 were 0.61 and 0.64, respectively) incorporated a range of antecedent rainfall, climate, and environmental variables. The most important variables for EC and ENT models were 5-day antecedent rainfall, dissolved oxygen, and salinity. These models successfully predicted FIB levels over a wide range of conditions with a 3% (EC model) and 9% (ENT model) overall error rate for recreational threshold values and a 0% (EC model) overall error rate for shellfish threshold values. Though modeling of members of the Bacteroidales group had less predictive ability (adjusted R2 were 0.56 and 0.53 for fecal Bacteroides spp. and human Bacteroides spp., respectively), the modeling approach and testing provided information on Bacteroidales ecology. This is the first example of a set of successful statistical predictive models appropriate for assessment of both recreational and shellfish harvesting water quality in estuarine waters.
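    Empirical FIB models of this kind are typically ordinary least-squares regressions of log-transformed concentrations on antecedent rainfall and water-quality covariates, with a regulatory threshold applied to the back-transformed prediction. The sketch below illustrates that workflow; the coefficients, data and the 104 MPN/100 mL threshold shown are illustrative assumptions, not values taken from the study.

        # Sketch: regress log10(enterococci) on antecedent rainfall, dissolved oxygen and salinity,
        # then flag exceedances of a recreational threshold. Data and threshold are illustrative.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 150
        rain5d = rng.gamma(1.5, 10.0, n)          # 5-day antecedent rainfall (mm), hypothetical
        do = rng.normal(7.0, 1.5, n)              # dissolved oxygen (mg/L), hypothetical
        salinity = rng.uniform(5.0, 35.0, n)      # salinity (psu), hypothetical
        log_ent = 1.0 + 0.03 * rain5d - 0.05 * do - 0.02 * salinity + rng.normal(0, 0.4, n)

        X = np.column_stack([rain5d, do, salinity])
        model = LinearRegression().fit(X, log_ent)

        new_site = np.array([[45.0, 6.2, 12.0]])          # rainfall, DO, salinity for a new sample
        pred_ent = 10 ** model.predict(new_site)[0]       # back-transform to MPN/100 mL
        print(f"predicted enterococci: {pred_ent:.0f} MPN/100 mL, exceeds 104? {pred_ent > 104}")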
Spinal cord multi-parametric magnetic resonance imaging for survival prediction in amyotrophic lateral sclerosis.

    PubMed

    Querin, G; El Mendili, M M; Lenglet, T; Delphine, S; Marchand-Pauvert, V; Benali, H; Pradat, P-F

    2017-08-01

    Assessing survival is a critical issue in patients with amyotrophic lateral sclerosis (ALS). Neuroimaging seems to be promising in the assessment of disease severity, and several studies also suggest a strong relationship between spinal cord (SC) atrophy described by magnetic resonance imaging (MRI) and disease progression. The aim of the study was to determine the predictive added value of multimodal SC MRI on survival. Forty-nine ALS patients were recruited and clinical data were collected. Patients were scored on the Revised ALS Functional Rating Scale and manual muscle testing. They were followed longitudinally to assess survival. The cervical SC was imaged using a 3 T MRI system. Cord volume and cross-sectional area (CSA) at each vertebral level were computed. Diffusion tensor imaging metrics were measured. Imaging metrics and clinical variables were used as inputs for a multivariate Cox regression survival model. On building a multivariate Cox regression model with clinical and MRI parameters, fractional anisotropy, magnetization transfer ratio and CSA at the C2-C3, C4-C5, C5-C6 and C6-C7 vertebral levels were significant. Moreover, the hazard ratio calculated for CSA at the C3-C4 and C5-C6 levels indicated an increased risk for patients with SC atrophy (respectively 0.66 and 0.68). In our cohort, MRI parameters seem to be more predictive than clinical variables, which had a hazard ratio very close to 1. It is suggested that multimodal SC MRI could be a useful tool in survival prediction, especially if used at the beginning of the disease and when combined with clinical variables. To validate it as a biomarker, confirmation of the results in bigger independent cohorts of patients is warranted. © 2017 EAN.
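    Multivariate Cox survival models of the kind used in this record can be fitted with standard packages; the sketch below uses the lifelines library on fabricated toy data, with column names (csa_c3c4_mm2, fa, age) chosen for illustration rather than taken from the study.

        # Sketch: Cox proportional-hazards survival model with imaging and clinical covariates.
        # All values below are fabricated toy data for illustration.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "survival_months": [12, 30, 8, 45, 22, 18, 60, 15, 36, 27],
            "event_observed":  [1, 1, 1, 0, 1, 1, 0, 1, 0, 1],   # 1 = death, 0 = censored
            "csa_c3c4_mm2":    [58, 72, 55, 80, 65, 60, 78, 57, 74, 66],  # cord cross-sectional area
            "fa":              [0.48, 0.55, 0.45, 0.60, 0.52, 0.50, 0.58, 0.47, 0.56, 0.51],
            "age":             [61, 54, 67, 49, 58, 63, 45, 70, 52, 59],
        })

        # Small penalizer only to stabilise estimation on this tiny toy dataset
        cph = CoxPHFitter(penalizer=0.1)
        cph.fit(df, duration_col="survival_months", event_col="event_observed")
        cph.print_summary()   # hazard ratios < 1 for CSA/FA imply lower risk at larger values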
Ovarian response to 150 µg corifollitropin alfa in a GnRH-antagonist multiple-dose protocol: a prospective cohort study.

    PubMed

    Lerman, Tamara; Depenbusch, Marion; Schultze-Mosgau, Askan; von Otte, Soeren; Scheinhardt, Markus; Koenig, Inke; Kamischke, Axel; Macek, Milan; Schwennicke, Arne; Segerer, Sabine; Griesinger, Georg

    2017-05-01

    The incidence of low (<6 oocytes) and high (>18 oocytes) ovarian response to 150 µg corifollitropin alfa in relation to anti-Müllerian hormone (AMH) and other biomarkers was studied in a multi-centre (n = 5), multi-national, prospective, investigator-initiated, observational cohort study. Infertile women (n = 212), body weight >60 kg, underwent controlled ovarian stimulation in a gonadotrophin-releasing hormone-antagonist multiple-dose protocol. Demographic, sonographic and endocrine parameters were prospectively assessed on cycle day 2 or 3 of a spontaneous menstruation before the administration of 150 µg corifollitropin alfa. Serum AMH showed the best correlation with the number of oocytes obtained among all predictor variables. In receiver-operating characteristic analysis, AMH at a threshold of 0.91 ng/ml showed a sensitivity of 82.4%, specificity of 82.4%, positive predictive value of 52.9% and negative predictive value of 95.1% for predicting low response (area under the curve [AUC], 95% CI; P-value: 0.853, 0.769-0.936; <0.0001). For predicting high response, the optimal threshold for AMH was 2.58 ng/ml, relating to a sensitivity of 80.0%, specificity of 82.1%, positive predictive value of 42.5% and negative predictive value of 96.1% (AUC, 95% CI; P-value: 0.871, 0.787-0.955; <0.0001). In conclusion, patients with serum AMH concentrations between approximately 0.9 and 2.6 ng/ml were unlikely to show extremes of response. Copyright © 2017. Published by Elsevier Ltd.
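    The thresholds in this record come from receiver-operating characteristic analysis; a common way to pick such a cut-off is to maximise Youden's J (sensitivity + specificity - 1) along the ROC curve. The sketch below shows that procedure with scikit-learn on simulated AMH-like values; the data are fabricated, and the resulting cut-off will not reproduce the 0.91 ng/ml reported above.

        # Sketch: choose a biomarker cut-off by maximising Youden's J on the ROC curve.
        # Simulated data only; not the study's measurements.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(2)
        # 1 = low responder, 0 = normal/high responder (simulated prevalence ~20%)
        low_response = rng.random(500) < 0.20
        # Low responders tend to have lower AMH (ng/ml) in this toy model
        amh = np.where(low_response,
                       rng.lognormal(mean=-0.5, sigma=0.5, size=500),
                       rng.lognormal(mean=0.7, sigma=0.5, size=500))

        # ROC for "low AMH predicts low response": score = -amh, so higher score = more at risk
        fpr, tpr, thresholds = roc_curve(low_response, -amh)
        j = tpr - fpr                                  # Youden's J at each candidate threshold
        best = np.argmax(j)
        print("AUC:", roc_auc_score(low_response, -amh))
        print("optimal cut-off: AMH <=", -thresholds[best], "ng/ml",
              "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])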
Cost-effectiveness thresholds: pros and cons.

    PubMed

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this, in addition to uncertainty in the modelled cost-effectiveness ratios, can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations, e.g. budget impact and feasibility considerations, in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.

Testing for a Debt-Threshold Effect on Output Growth.

    PubMed

    Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki

    2017-12-01

    Using the Reinhart-Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post-war sample suggest that the debt threshold for economic growth may exist around a relatively small debt-to-GDP ratio of 30 per cent. Furthermore, countries with debt-to-GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median.
Physiological intensity profile, exercise load and performance predictors of a 65-km mountain ultra-marathon.

    PubMed

    Fornasiero, Alessandro; Savoldelli, Aldo; Fruet, Damiano; Boccia, Gennaro; Pellegrini, Barbara; Schena, Federico

    2018-06-01

    The aims of the study were to describe the physiological profile of a 65-km (4000-m cumulative elevation gain) running mountain ultra-marathon (MUM) and to identify predictors of MUM performance. Twenty-three amateur trail-runners performed anthropometric evaluations and an uphill graded exercise test (GXT) for VO2max, ventilatory thresholds (VTs), power outputs (PMax, PVTs) and heart rate response (HRmax, HR@VTs). Heart rate (HR) was monitored during the race and intensity was expressed as Zone I (<VT1), Zone II (VT1-VT2) and Zone III (>VT2) for exercise load calculation (training impulse, TRIMP). Mean race intensity was 77.1% ± 4.4% of HRmax, distributed as 85.7% ± 19.4% Zone I, 13.9% ± 18.6% Zone II and 0.4% ± 0.9% Zone III. Exercise load was 766 ± 110 TRIMP units. Race time (11.8 ± 1.6 h) was negatively correlated with VO2max (r = -0.66, P < 0.001) and PMax (r = -0.73, P < 0.001), making these variables determinant in predicting MUM performance, whereas exercise thresholds did not improve performance prediction. Laboratory variables explained only 59% of race time variance, underlining the multi-factorial character of MUM performance. Our results support the idea that VT1 represents a boundary of tolerable intensity in this kind of event, where exercise load is extremely high. This information can be helpful in identifying optimal pacing strategies to complete such extremely demanding MUMs.
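    Exercise load expressed in TRIMP units from three ventilatory-threshold zones is commonly computed with a zone-weighted sum (Lucia's TRIMP: minutes in Zone I, II and III weighted 1, 2 and 3). The snippet below shows that arithmetic on made-up numbers; it is an assumption that this particular weighting matches the calculation used in the study.

        # Sketch: zone-weighted training impulse (Lucia's TRIMP) from time spent in three HR zones.
        # Weights 1/2/3 for Zone I/II/III; the race duration and zone fractions below are made up.
        def trimp_lucia(minutes_in_zones):
            """minutes_in_zones: (zone_I, zone_II, zone_III) in minutes."""
            weights = (1, 2, 3)
            return sum(w * m for w, m in zip(weights, minutes_in_zones))

        race_minutes = 10.0 * 60                  # example duration (h -> min), made up
        fractions = (0.80, 0.18, 0.02)            # example time fractions in Zones I-III, made up
        minutes = tuple(f * race_minutes for f in fractions)
        print(f"TRIMP = {trimp_lucia(minutes):.0f} arbitrary units")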
Graphical assessment of incremental value of novel markers in prediction models: From statistical to decision analytical perspectives.

    PubMed

    Steyerberg, Ewout W; Vedder, Moniek M; Leening, Maarten J G; Postmus, Douwe; D'Agostino, Ralph B; Van Calster, Ben; Pencina, Michael J

    2015-07-01

    New markers may improve prediction of diagnostic and prognostic outcomes. We aimed to review options for graphical display and summary measures to assess the predictive value of markers over standard, readily available predictors. We illustrated various approaches using previously published data on 3264 participants from the Framingham Heart Study, where 183 developed coronary heart disease (10-year risk 5.6%). We considered performance measures for the incremental value of adding HDL cholesterol to a prediction model. An initial assessment may consider statistical significance (HR = 0.65, 95% confidence interval 0.53 to 0.80; likelihood ratio p < 0.001), and distributions of predicted risks (densities or box plots) with various summary measures. A range of decision thresholds is considered in predictiveness and receiver operating characteristic curves, where the area under the curve (AUC) increased from 0.762 to 0.774 by adding HDL. We can furthermore focus on reclassification of participants with and without an event in a reclassification graph, with the continuous net reclassification improvement (NRI) as a summary measure. When we focus on one particular decision threshold, the changes in sensitivity and specificity are central. We propose a net reclassification risk graph, which allows us to focus on the number of reclassified persons and their event rates. Summary measures include the binary AUC, the two-category NRI, and decision analytic variants such as the net benefit (NB). Various graphs and summary measures can be used to assess the incremental predictive value of a marker. Important insights for impact on decision making are provided by a simple graph for the net reclassification risk. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
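    Of the decision-analytic summary measures mentioned above, net benefit at a probability threshold p_t has a simple closed form: NB = TP/N - (FP/N) x p_t/(1 - p_t). The snippet below computes it for two hypothetical models at one threshold; the counts are invented for illustration and are not the Framingham results.

        # Sketch: net benefit of a prediction model at a single decision threshold p_t.
        # NB = TP/N - FP/N * p_t / (1 - p_t); invented counts, not Framingham data.
        def net_benefit(tp, fp, n, p_t):
            return tp / n - fp / n * (p_t / (1 - p_t))

        n = 3264                 # cohort size (illustrative)
        p_t = 0.10               # decide to treat when predicted 10-year risk >= 10%

        # Hypothetical classification counts at this threshold, with and without the new marker
        print("base model:    ", net_benefit(tp=100, fp=600, n=n, p_t=p_t))
        print("base + marker: ", net_benefit(tp=110, fp=560, n=n, p_t=p_t))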
Biomarker kinetics in the prediction of VAP diagnosis: results from the BioVAP study.

    PubMed

    Póvoa, Pedro; Martin-Loeches, Ignacio; Ramirez, Paula; Bos, Lieuwe D; Esperatti, Mariano; Silvestre, Joana; Gili, Gisela; Goma, Gema; Berlanga, Eugenio; Espasa, Mateu; Gonçalves, Elsa; Torres, Antoni; Artigas, Antonio

    2016-12-01

    Prediction of diagnosis of ventilator-associated pneumonia (VAP) remains difficult. Our aim was to assess the value of biomarker kinetics in VAP prediction. We performed a prospective, multicenter, observational study to evaluate the predictive accuracy of biomarker kinetics, namely C-reactive protein (CRP), procalcitonin (PCT) and the mid-region fragment of pro-adrenomedullin (MR-proADM), for VAP management in 211 patients receiving mechanical ventilation for >72 h. For the present analysis, we assessed all (N = 138) mechanically ventilated patients without an infection at admission. The kinetics of each variable, from day 1 to day 6 of mechanical ventilation, was assessed with each variable's slope (rate of biomarker change per day), highest level and maximum amplitude of variation (Δmax). A total of 35 patients (25.4%) developed a VAP and were compared with 70 non-infected controls (50.7%). We excluded 33 patients (23.9%) who developed a non-VAP nosocomial infection. Among the studied biomarkers, CRP and the CRP ratio showed the best performance in VAP prediction. The slope of CRP change over time (adjusted odds ratio [aOR] 1.624, 95% CI [1.206, 2.189], p = 0.001), the highest CRP ratio concentration (aOR 1.202, 95% CI [1.061, 1.363], p = 0.004) and Δmax CRP (aOR 1.139, 95% CI [1.039, 1.248], p = 0.006), during the first 6 days of mechanical ventilation, were all significantly associated with VAP development. Both PCT and MR-proADM showed poor predictive performance, as did temperature and white cell count. Our results suggest that in patients under mechanical ventilation, daily CRP monitoring was useful in VAP prediction. Trial registration NCT02078999.

The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

    In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
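    Retaining principal components up to a target fraction of explained variance is a one-liner with standard PCA tooling; the sketch below shows how the count of retained components grows as the threshold moves from 99% to 99.9%, using random smooth curves as stand-ins rather than real vGRF waveforms.

        # Sketch: number of principal components needed to reach a chosen explained-variance
        # threshold (e.g., 99% vs 99.9%). Random smooth curves stand in for vGRF waveforms.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        t = np.linspace(0, 1, 200)
        # 80 synthetic "trials": a few smooth basis shapes plus noise
        curves = sum(rng.normal(size=(80, 1)) * np.sin((k + 1) * np.pi * t) for k in range(8))
        curves += rng.normal(scale=0.05, size=(80, 200))

        pca = PCA().fit(curves)
        cumvar = np.cumsum(pca.explained_variance_ratio_)
        for threshold in (0.99, 0.999):
            n_components = int(np.searchsorted(cumvar, threshold) + 1)
            print(f"{threshold:.1%} of variance retained by {n_components} components")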
A robotic test of proprioception within the hemiparetic arm post-stroke.

    PubMed

    Simo, Lucia; Botzer, Lior; Ghez, Claude; Scheidt, Robert A

    2014-04-30

    Proprioception plays important roles in planning and control of limb posture and movement. The impact of proprioceptive deficits on motor function post-stroke has been difficult to elucidate due to limitations in current tests of arm proprioception. Common clinical tests only provide ordinal assessment of proprioceptive integrity (e.g., intact, impaired or absent). We introduce a standardized, quantitative method for evaluating proprioception within the arm on a continuous, ratio scale. We demonstrate the approach, which is based on the signal detection theory of sensory psychophysics, in two tasks used to characterize motor function after stroke. Hemiparetic stroke survivors and neurologically intact participants attempted to detect displacement- or force-perturbations robotically applied to their arm in a two-interval, two-alternative forced-choice test. A logistic psychometric function parameterized detection of limb perturbations. The shape of this function is determined by two parameters: one corresponds to a signal detection threshold and the other to variability of responses about that threshold. These two parameters define a space in which proprioceptive sensation post-stroke can be compared to that of neurologically intact people. We used an auditory tone discrimination task to control for potential comprehension, attention and memory deficits. All but one stroke survivor demonstrated competence in performing two-alternative discrimination in the auditory training test. For the remaining stroke survivors, those with clinically identified proprioceptive deficits in the hemiparetic arm or hand had higher detection thresholds and exhibited greater response variability than individuals without proprioceptive deficits. We then identified a normative parameter space determined by the threshold and response variability data collected from neurologically intact participants. By plotting displacement detection performance within this normative space, stroke survivors with and without intact proprioception could be discriminated on a continuous scale that was sensitive to small performance variations, e.g. practice effects across days. The proposed method uses robotic perturbations similar to those used in ongoing studies of motor function post-stroke. The approach is sensitive to small changes in the proprioceptive detection of hand motions. We expect this new robotic assessment will empower future studies to characterize how proprioceptive deficits compromise limb posture and movement control in stroke survivors.
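    The logistic psychometric function described in this record, with a threshold (location) parameter and a spread (response-variability) parameter, can be fitted directly to detection data by nonlinear least squares or maximum likelihood. The sketch below uses scipy's curve_fit on invented detection proportions; the perturbation magnitudes and hit rates are placeholders, not the study's measurements.

        # Sketch: fit a two-parameter logistic psychometric function P(detect | x)
        # to detection data. Threshold = 50% point; sigma = spread of responses.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(x, threshold, sigma):
            return 1.0 / (1.0 + np.exp(-(x - threshold) / sigma))

        # Invented data: perturbation size (mm) and proportion of "detected" responses
        x = np.array([1, 2, 3, 4, 6, 8, 10, 12], dtype=float)
        p_detect = np.array([0.05, 0.10, 0.25, 0.45, 0.70, 0.85, 0.95, 0.99])

        (threshold, sigma), _ = curve_fit(psychometric, x, p_detect, p0=[5.0, 2.0])
        print(f"detection threshold ~= {threshold:.2f} mm, response variability ~= {sigma:.2f} mm")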
Phytoplankton plasticity drives large variability in carbon fixation efficiency

    NASA Astrophysics Data System (ADS)

    Ayata, Sakina-Dorothée; Lévy, Marina; Aumont, Olivier; Resplandy, Laure; Tagliabue, Alessandro; Sciandra, Antoine; Bernard, Olivier

    2014-12-01

    Phytoplankton C:N stoichiometry is highly flexible due to physiological plasticity, which could lead to high variations in carbon fixation efficiency (carbon consumption relative to nitrogen). However, the magnitude, as well as the spatial and temporal scales of variability, remains poorly constrained. We used a high-resolution biogeochemical model resolving a wide range of spatial and temporal scales in order to quantify and better understand this variability. We find that the phytoplankton C:N ratio is highly variable at all spatial and temporal scales (5-12 molC/molN), from mesoscale to regional scale, and is mainly driven by nitrogen supply. Carbon fixation efficiency varies accordingly at all scales (±30%), with higher values under oligotrophic conditions and lower values under eutrophic conditions. Hence, phytoplankton plasticity may act as a buffer by attenuating carbon sequestration variability. Our results have implications for in situ estimations of C:N ratios and for future predictions under a high-CO2 world.

Fully convolutional neural network for removing background in noisy images of uranium bearing particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar

    A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background-eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. A comparison of the isotopic distributions of an independent data set segmented using the neural network, compared to a commercially available automated particle measurement (APM) program developed by CAMECA, highlighted the necessity for effective background removal to ensure that resulting particle identification is not only accurate, but preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.
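    A per-pixel binary classifier of this kind can be expressed as a small stack of convolutions ending in a 1x1 layer, trained with a binary cross-entropy loss against particle masks. The PyTorch sketch below is a deliberately minimal stand-in: the layer sizes, data and training loop are arbitrary illustrative choices, not the network architecture used in the record above.

        # Sketch: minimal fully convolutional network for per-pixel particle/background
        # classification. Architecture and data are illustrative, not the published model.
        import torch
        import torch.nn as nn

        fcn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),            # per-pixel logit: particle vs background
        )

        loss_fn = nn.BCEWithLogitsLoss()
        optimizer = torch.optim.Adam(fcn.parameters(), lr=1e-3)

        # One toy training step on random "images" and random binary masks
        images = torch.rand(4, 1, 64, 64)                 # batch of single-channel SIMS-like images
        masks = (torch.rand(4, 1, 64, 64) > 0.9).float()  # stand-in particle masks

        logits = fcn(images)
        loss = loss_fn(logits, masks)
        loss.backward()
        optimizer.step()

        particle_pixels = torch.sigmoid(logits) > 0.5     # keep predicted particle pixels only
        print("loss:", float(loss), "predicted particle fraction:", float(particle_pixels.float().mean()))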
Metabolic and anthropometric parameters contribute to ART-mediated CD4+ T cell recovery in HIV-1-infected individuals: an observational study.

    PubMed

    Azzoni, Livio; Foulkes, Andrea S; Firnhaber, Cynthia; Yin, Xiangfan; Crowther, Nigel J; Glencross, Deborah; Lawrie, Denise; Stevens, Wendy; Papasavvas, Emmanouil; Sanne, Ian; Montaner, Luis J

    2011-07-29

    The degree of immune reconstitution achieved in response to suppressive ART is associated with baseline individual characteristics, such as pre-treatment CD4 count, levels of viral replication, cellular activation, choice of treatment regimen and gender. However, the combined effect of these variables on long-term CD4 recovery remains elusive, and no single variable predicts treatment response. We sought to determine if adiposity and molecules associated with lipid metabolism may affect the response to ART and the degree of subsequent immune reconstitution, and to assess their ability to predict CD4 recovery. We studied a cohort of 69 (48 females and 21 males) HIV-infected, treatment-naïve South African subjects initiating antiretroviral treatment (d4T, 3TC and lopinavir/ritonavir). We collected information at baseline and six months after viral suppression, assessing anthropometric parameters, dual energy X-ray absorptiometry and magnetic resonance imaging scans, serum-based clinical laboratory tests and whole blood-based flow cytometry, and determined their role in predicting the increase in CD4 count in response to ART. We present evidence that baseline CD4+ T cell count, viral load, CD8+ T cell activation (CD95 expression) and metabolic and anthropometric parameters linked to adiposity (LDL/HDL cholesterol ratio and waist/hip ratio) significantly contribute to variability in the extent of CD4 reconstitution (ΔCD4) after six months of continuous ART. Our final model accounts for 44% of the variability in CD4+ T cell recovery in virally suppressed individuals, representing a workable predictive model of immune reconstitution.

Statistical prediction of dynamic distortion of inlet flow using minimum dynamic measurement. An application to the Melick statistical method and inlet flow dynamic distortion prediction without RMS measurements

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Chen, Y. S.

    1986-01-01

    The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for the dynamic probe selection was developed.
    Validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.

Elastic properties of rigid fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Chen, J.; Thorpe, M. F.; Davis, L. C.

    1995-05-01

    We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ the digital-image-based method and a finite element analysis to perform computer simulations which confirm our analytical predictions.

Intra-Shell boron isotope ratios in benthic foraminifera: Implications for paleo-pH reconstructions

    NASA Astrophysics Data System (ADS)

    Rollion-Bard, C.; Erez, J.

    2009-12-01

    The boron isotope composition of marine carbonates is considered to be a seawater pH proxy. Nevertheless, the use of δ11B has some limitations: 1) the knowledge of the fractionation factor (α4-3) between the two dissolved boron species (boric acid and borate ion), 2) the δ11B of seawater may have varied with time, and 3) the amplitude of the "vital effects" on this proxy. Using secondary ion mass spectrometry (SIMS), we looked at the internal variability in the boron isotope ratio of the shallow-water, symbiont-bearing foraminifer Amphistegina lobifera. Specimens were cultured at constant temperature (24 ± 0.1 °C) in seawater with pH ranging between 7.90 and 8.45. We performed 6 to 8 measurements of δ11B in each foraminifer. Intra-shell boron isotopes show large variability with an upper threshold value of pH ~ 9.
    The ranges of the skeletal calculated pH values in different cultured foraminifera show a strong correlation with the culture pH values and may thus serve as a proxy for pH in the past ocean.

Limited reliability of computed tomographic perfusion acute infarct volume measurements compared with diffusion-weighted imaging in anterior circulation stroke.

    PubMed

    Schaefer, Pamela W; Souza, Leticia; Kamalian, Shervin; Hirsch, Joshua A; Yoo, Albert J; Kamalian, Shahmir; Gonzalez, R Gilberto; Lev, Michael H

    2015-02-01

    Diffusion-weighted imaging (DWI) can reliably identify critically ischemic tissue shortly after stroke onset. We tested whether thresholded computed tomographic cerebral blood flow (CT-CBF) and CT-cerebral blood volume (CT-CBV) maps are sufficiently accurate to substitute for DWI for estimating the critically ischemic tissue volume. Ischemic volumes of 55 patients with acute anterior circulation stroke were assessed on DWI by visual segmentation and on CT-CBF and CT-CBV with segmentation using 15% and 30% thresholds, respectively. The contrast:noise ratios of the ischemic regions on the DWI and CT perfusion (CTP) images were measured. Correlation and Bland-Altman analyses were used to assess the reliability of CTP. Mean contrast:noise ratios for DWI, CT-CBF, and CT-CBV were 4.3, 0.9, and 0.4, respectively. CTP and DWI lesion volumes were highly correlated (R2 = 0.87 for CT-CBF; R2 = 0.83 for CT-CBV; P < 0.001). Bland-Altman analyses revealed little systemic bias (-2.6 mL) but high measurement variability (95% confidence interval, ±56.7 mL) between mean CT-CBF and DWI lesion volumes, and systemic bias (-26 mL) and high measurement variability (95% confidence interval, ±64.0 mL) between mean CT-CBV and DWI lesion volumes. A simulated treatment study demonstrated that using CTP-CBF instead of DWI for detecting a statistically significant effect would require at least twice as many patients. The poor contrast:noise ratios of CT-CBV and CT-CBF compared with those of DWI result in large measurement error, making it problematic to substitute CTP for DWI in selecting individual acute stroke patients for treatment. CTP could be used for treatment studies of patient groups, but the number of patients needed to identify a significant effect is much higher than the number needed if DWI is used. © 2014 American Heart Association, Inc.

Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions.
    Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

Stratified aspartate aminotransferase-to-platelet ratio index accurately predicts survival in hepatocellular carcinoma patients undergoing curative liver resection.

    PubMed

    Yang, Hao-Jie; Jiang, Jing-Hang; Yang, Yu-Ting; Guo, Zhe; Li, Ji-Jia; Liu, Xuan-Han; Lu, Fei; Zeng, Feng-Hua; Ye, Jin-Song; Zhang, Ke-Lan; Chen, Neng-Zhi; Xiang, Bang-De; Li, Le-Qun

    2017-03-01

    The aspartate aminotransferase-to-platelet ratio index has been reported to predict the prognosis of patients with hepatocellular carcinoma. This study examined the prognostic potential of the stratified aspartate aminotransferase-to-platelet ratio index for hepatocellular carcinoma patients undergoing curative liver resection. A total of 661 hepatocellular carcinoma patients were retrieved and the associations between aspartate aminotransferase-to-platelet ratio index and clinicopathological variables and survival (overall survival and disease-free survival) were analyzed. Higher aspartate aminotransferase-to-platelet ratio index quartiles were significantly associated with poorer overall survival (p = 0.002) and disease-free survival (p = 0.001). Multivariate analysis showed aspartate aminotransferase-to-platelet ratio index to be an independent risk factor for overall survival (p = 0.018) and disease-free survival (p = 0.01). Patients in the highest aspartate aminotransferase-to-platelet ratio index quartile were at 44% greater risk of death than patients in the first quartile (hazard ratio = 1.445, 95% confidence interval = 1.081-1.931, p = 0.013), as well as 49% greater risk of recurrence (hazard ratio = 1.49, 95% confidence interval = 1.112-1.998, p = 0.008). Subgroup analysis also showed aspartate aminotransferase-to-platelet ratio index to be an independent predictor of poor overall survival and disease-free survival in patients positive for hepatitis B surface antigen or with cirrhosis (both p < 0.05). Similar results were obtained when aspartate aminotransferase-to-platelet ratio index was analyzed as a dichotomous variable with cutoff values of 0.25 and 0.62. Elevated preoperative aspartate aminotransferase-to-platelet ratio index may be independently associated with poor overall survival and disease-free survival in hepatocellular carcinoma patients following curative resection.
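    For reference, the aspartate aminotransferase-to-platelet ratio index (APRI) discussed in this record is conventionally computed as follows; the worked example uses made-up laboratory values (the abstract itself does not restate the formula).

        \[
        \mathrm{APRI} \;=\; \frac{\mathrm{AST}\ (\mathrm{IU/L}) \,/\, \mathrm{AST}_{\mathrm{ULN}}\ (\mathrm{IU/L})}
                                 {\text{platelet count}\ (10^{9}/\mathrm{L})} \times 100
        \]
        % Example: AST = 80 IU/L with an upper limit of normal of 40 IU/L and platelets = 160 x 10^9/L
        % gives APRI = (80/40)/160 x 100 = 1.25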
An alternative method for quantifying coronary artery calcification: the multi-ethnic study of atherosclerosis (MESA).

    PubMed

    Liang, C Jason; Budoff, Matthew J; Kaufman, Joel D; Kronmal, Richard A; Brown, Elizabeth R

    2012-07-02

    Extent of atherosclerosis measured by amount of coronary artery calcium (CAC) in computed tomography (CT) has been traditionally assessed using thresholded scoring methods, such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information might exist below the threshold, which would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated threshold-free scoring method, the spatially weighted calcium score (SWCS), for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Chest CT scans were obtained from 6814 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.

The urine output definition of acute kidney injury is too liberal

    PubMed Central

    2013-01-01

    Introduction The urine output criterion of 0.5 ml/kg/hour for 6 hours for acute kidney injury (AKI) has not been prospectively validated.
    Urine output criteria for AKI (AKIUO) as predictors of in-hospital mortality or dialysis need were compared. Methods All admissions to a general ICU were prospectively screened for 12 months and hourly urine output analysed in collection intervals between 1 and 12 hours. Prediction of the composite of mortality or dialysis by urine output was analysed in increments of 0.1 ml/kg/hour from 0.1 to 1 ml/kg/hour and the optimal threshold for each collection interval determined. AKICr was defined as an increase in plasma creatinine ≥26.5 μmol/l within 48 hours or ≥50% from baseline. Results Of 725 admissions, 72% had either AKICr or AKIUO or both. AKIUO alone (33.7%) was more frequent than AKICr alone (11.0%) (P < 0.0001). A 6-hour urine output collection threshold of 0.3 ml/kg/hour was associated with a stepped increase in in-hospital mortality or dialysis (from 10% above the threshold to 30% below it). Hazard ratios for in-hospital mortality and 1-year mortality were 2.25 (1.40 to 3.61) and 2.15 (1.47 to 3.15), respectively, after adjustment for age, body weight, severity of illness, fluid balance, and vasopressor use. In contrast, after adjustment AKIUO was not associated with in-hospital mortality or 1-year mortality. The optimal urine output threshold was linearly related to duration of urine collection (r2 = 0.93). Conclusions A 6-hour urine output threshold of 0.3 ml/kg/hour was best associated with mortality and dialysis, and was independently predictive of both hospital mortality and 1-year mortality. This suggests that the current AKI urine output definition is too liberally defined. Shorter urine collection intervals may be used to define AKI using lower urine output thresholds. PMID:23787055

Spectrotemporal Modulation Sensitivity as a Predictor of Speech-Reception Performance in Noise With Hearing Aids

    PubMed Central

    Danielsson, Henrik; Hällgren, Mathias; Stenfelt, Stefan; Rönnberg, Jerker; Lunner, Thomas

    2016-01-01

    The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function—spectrotemporal modulation (STM) sensitivity—and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2-6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28).
Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners < 65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise. PMID:27815546</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29230973','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29230973"><span>Sensitivity, specificity and predictive probability values of serum agglutination test titres for the diagnosis of Salmonella Dublin culture-positive bovine abortion and stillbirth.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sánchez-Miguel, C; Crilly, J; Grant, J; Mee, J F</p> <p>2018-06-01</p> <p>The objective of this study was to determine the diagnostic value of maternal serology for the diagnosis of Salmonella Dublin bovine abortion and stillbirth. A retrospective, unmatched, case-control study was carried out using twenty years' data (1989-2009) from bovine foetal submissions to an Irish government veterinary laboratory. Cases (n = 214) were defined as submissions with an S. Dublin culture-positive foetus from an S. Dublin unvaccinated dam where results of maternal S. Dublin serology were available. Controls (n = 415) were defined as submissions where an alternative diagnosis other than S. Dublin was made in a foetus from an S. Dublin unvaccinated dam where the results of maternal S. Dublin serology were available. A logistic regression model was fitted to the data: the dichotomous dependent variable was the S. Dublin foetal culture result, and the independent variables were the maternal serum agglutination test (SAT) titre results. Salmonella serology correctly classified 87% of S. Dublin culture-positive foetuses at a predicted probability threshold of 0.44 (cut-off at which sensitivity and specificity are at a maximum, J = 0.67). The sensitivity of the SAT at the same threshold was 73.8% (95% CI: 67.4%-79.5%), and the specificity was 93.2% (95% CI: 90.3%-95.4%). The positive and negative predictive values were 84.9% (95% CI: 79.3%-88.6%) and 87.3% (95% CI: 83.5%-91.3%), respectively. This study illustrates that the use of predicted probability values, rather than the traditional arbitrary breakpoints of negative, inconclusive and positive, increases the diagnostic value of the maternal SAT. Veterinary laboratory diagnosticians and veterinary practitioners can recover information from test results that would previously have been reduced to these categories, particularly from results declared to be inconclusive.
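As a rough illustration of the predicted-probability thresholding described above, the sketch below fits a logistic regression of culture status on titre and picks the probability cutoff that maximises Youden's J (sensitivity + specificity - 1). The titre distributions and coefficients are invented; only the workflow is meant to mirror the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n_cases, n_controls = 214, 415
# Simulated log2 titres; cases assumed to have higher titres than controls.
titre = np.concatenate([rng.normal(7, 2, n_cases), rng.normal(3, 2, n_controls)])
culture_pos = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

model = LogisticRegression().fit(titre.reshape(-1, 1), culture_pos)
prob = model.predict_proba(titre.reshape(-1, 1))[:, 1]

fpr, tpr, thresholds = roc_curve(culture_pos, prob)
j = tpr - fpr                         # Youden's J at each candidate cutoff
best = np.argmax(j)
print(f"Best probability cutoff: {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}, J {j[best]:.2f})")
```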
© 2017 Blackwell Verlag GmbH.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2815647','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2815647"><span>Optimal Threshold for a Positive Hybrid Capture 2 Test for Detection of Human Papillomavirus: Data from the ARTISTIC Trial▿</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sargent, A.; Bailey, A.; Turner, A.; Almonte, M.; Gilham, C.; Baysson, H.; Peto, J.; Roberts, C.; Thomson, C.; Desai, M.; Mather, J.; Kitchener, H.</p> <p>2010-01-01</p> <p>We present data on the use of the Hybrid Capture 2 (HC2) test for the detection of high-risk human papillomavirus (HR HPV) with different thresholds for positivity within a primary screening setting and as a method of triage for low-grade cytology. In the ARTISTIC population-based trial, 18,386 women were screened by cytology and for HPV. Cervical intraepithelial neoplasia lesions of grade two and higher (CIN2+ lesions) were identified for 453 women within 30 months of an abnormal baseline sample. When a relative light unit/cutoff (RLU/Co) ratio of ≥1 was used as the threshold for considering an HC2 result positive, 15.6% of results were positive, and the proportion of CIN2+ lesions in this group was 14.7%. The relative sensitivity for CIN2+ lesion detection was 93.4%. When an RLU/Co ratio of ≥2 was used as the threshold, there was a 2.5% reduction in positivity, with an increase in the proportion of CIN2+ lesions detected. The relative sensitivity decreased slightly, to 90.3%. Among women with low-grade cytology, HPV prevalences were 43.7% and 40.3% at RLU/Co ratios of ≥1 and ≥2, respectively. The proportions of CIN2+ lesions detected were 17.3% and 18.0%, with relative sensitivities of 87.7% at an RLU/Co ratio of ≥1 and 84.2% at an RLU/Co ratio of ≥2. At an RLU/Co ratio of ≥1, 68.3% of HC2-positive results were confirmed by the Roche line blot assay, compared to 77.2% of those at an RLU/Co ratio of ≥2. Fewer HC2-positive results were confirmed for 35- to 64-year-olds (50.3% at an RLU/Co ratio of ≥1 and 63.2% at an RLU/Co ratio of >2) than for 20- to 34-year-olds (78.7% at an RLU/Co ratio of ≥1 and 83.7% at an RLU/Co ratio of >2). If the HC2 test is used for routine screening as an initial test or as a method of triage for low-grade cytology, we would suggest increasing the threshold for positivity from the RLU/Co ratio of ≥1, recommended by the manufacturer, to an RLU/Co ratio of ≥2, since this study has shown that a beneficial balance between relative sensitivity and the proportion of CIN2+ lesions detected is achieved at this threshold. 
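The trade-off described above (a higher RLU/Co cutoff lowers positivity and slightly lowers relative sensitivity while enriching positives for CIN2+) can be tabulated directly once per-sample RLU/Co values and CIN2+ outcomes are available. A minimal sketch, using simulated values rather than ARTISTIC data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 18386
cin2plus = rng.random(n) < 0.025        # assumed CIN2+ prevalence (illustrative)
# Simulated RLU/Co values, higher on average when CIN2+ is present (assumption).
rlu_co = np.where(cin2plus, rng.lognormal(2.0, 1.5, n), rng.lognormal(-1.5, 1.5, n))

for cutoff in (1.0, 2.0):
    positive = rlu_co >= cutoff
    positivity = positive.mean()
    cin2_among_pos = cin2plus[positive].mean()          # proportion of positives with CIN2+
    rel_sens = (positive & cin2plus).sum() / cin2plus.sum()
    print(f"RLU/Co >= {cutoff:g}: positivity {positivity:.1%}, "
          f"CIN2+ among positives {cin2_among_pos:.1%}, relative sensitivity {rel_sens:.1%}")
```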
PMID:20007387</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28704461','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28704461"><span>Assessing the performance of remotely-sensed flooding indicators and their potential contribution to early warning for leptospirosis in Cambodia.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ledien, Julia; Sorn, Sopheak; Hem, Sopheak; Huy, Rekol; Buchy, Philippe; Tarantola, Arnaud; Cappelle, Julien</p> <p>2017-01-01</p> <p>Remote sensing can contribute to early warning for diseases with environmental drivers, such as flooding for leptospirosis. In this study we assessed whether and which remotely-sensed flooding indicator could be used in Cambodia to study any disease for which flooding has already been identified as an important driver, using leptospirosis as a case study. The performance of six potential flooding indicators was assessed by ground truthing. The Modified Normalized Difference Water Index (MNDWI) was used to estimate the Risk Ratio (RR) of being infected by leptospirosis when exposed to floods it detected, in particular during the rainy season. Chi-square tests were also calculated. Another variable-the time elapsed since the first flooding of the year-was created using MNDWI values and was also included as an explanatory variable in a generalized linear model (GLM) and in a boosted regression tree model (BRT) of leptospirosis infections, along with other explanatory variables. Interestingly, MNDWI thresholds for both detecting water and predicting the risk of leptospirosis seroconversion were independently evaluated at -0.3. An MNDWI value greater than -0.3 was significantly related to leptospirosis infection (RR = 1.61 [1.10-1.52]; χ2 = 5.64, p-value = 0.02), especially during the rainy season (RR = 2.03 [1.25-3.28]; χ2 = 8.15, p-value = 0.004). Time since the first flooding of the year was a significant risk factor in our GLM model (p-value = 0.042). 
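Two of the quantities used in the leptospirosis analysis above are simple to compute: the MNDWI itself (its standard definition uses green and shortwave-infrared reflectance) and a risk ratio for infection given flood exposure. The sketch below uses invented reflectance values and counts, not the study's data.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (green - SWIR) / (green + SWIR)."""
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir)

pixels_green = np.array([0.12, 0.30, 0.08])     # hypothetical reflectance values
pixels_swir = np.array([0.25, 0.10, 0.20])
flooded = mndwi(pixels_green, pixels_swir) > -0.3   # threshold reported in the abstract
print("Flood mask:", flooded)

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio of infection for flood-exposed vs. unexposed groups."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical counts of seroconversions among flood-exposed vs. unexposed people.
print("RR:", round(risk_ratio(24, 150, 20, 200), 2))
```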
These results suggest that MNDWI may be useful as a risk indicator in an early warning remote sensing tool for flood-driven diseases like leptospirosis in South East Asia.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24534856','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24534856"><span>Myocardial injury after noncardiac surgery: a large, international, prospective cohort study establishing diagnostic criteria, characteristics, predictors, and 30-day outcomes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Botto, Fernando; Alonso-Coello, Pablo; Chan, Matthew T V; Villar, Juan Carlos; Xavier, Denis; Srinathan, Sadeesh; Guyatt, Gordon; Cruz, Patricia; Graham, Michelle; Wang, C Y; Berwanger, Otavio; Pearse, Rupert M; Biccard, Bruce M; Abraham, Valsa; Malaga, German; Hillis, Graham S; Rodseth, Reitze N; Cook, Deborah; Polanczyk, Carisi A; Szczeklik, Wojciech; Sessler, Daniel I; Sheth, Tej; Ackland, Gareth L; Leuwer, Martin; Garg, Amit X; Lemanach, Yannick; Pettit, Shirley; Heels-Ansdell, Diane; Luratibuse, Giovanna; Walsh, Michael; Sapsford, Robert; Schünemann, Holger J; Kurz, Andrea; Thomas, Sabu; Mrkobrada, Marko; Thabane, Lehana; Gerstein, Hertzel; Paniagua, Pilar; Nagele, Peter; Raina, Parminder; Yusuf, Salim; Devereaux, P J; Devereaux, P J; Sessler, Daniel I; Walsh, Michael; Guyatt, Gordon; McQueen, Matthew J; Bhandari, Mohit; Cook, Deborah; Bosch, Jackie; Buckley, Norman; Yusuf, Salim; Chow, Clara K; Hillis, Graham S; Halliwell, Richard; Li, Stephen; Lee, Vincent W; Mooney, John; Polanczyk, Carisi A; Furtado, Mariana V; Berwanger, Otavio; Suzumura, Erica; Santucci, Eliana; Leite, Katia; Santo, Jose Amalth do Espirirto; Jardim, Cesar A P; Cavalcanti, Alexandre Biasi; Guimaraes, Helio Penna; Jacka, Michael J; Graham, Michelle; McAlister, Finlay; McMurtry, Sean; Townsend, Derek; Pannu, Neesh; Bagshaw, Sean; Bessissow, Amal; Bhandari, Mohit; Duceppe, Emmanuelle; Eikelboom, John; Ganame, Javier; Hankinson, James; Hill, Stephen; Jolly, Sanjit; Lamy, Andre; Ling, Elizabeth; Magloire, Patrick; Pare, Guillaume; Reddy, Deven; Szalay, David; Tittley, Jacques; Weitz, Jeff; Whitlock, Richard; Darvish-Kazim, Saeed; Debeer, Justin; Kavsak, Peter; Kearon, Clive; Mizera, Richard; O'Donnell, Martin; McQueen, Matthew; Pinthus, Jehonathan; Ribas, Sebastian; Simunovic, Marko; Tandon, Vikas; Vanhelder, Tomas; Winemaker, Mitchell; Gerstein, Hertzel; McDonald, Sarah; O'Bryne, Paul; Patel, Ameen; Paul, James; Punthakee, Zubin; Raymer, Karen; Salehian, Omid; Spencer, Fred; Walter, Stephen; Worster, Andrew; Adili, Anthony; Clase, Catherine; Cook, Deborah; Crowther, Mark; Douketis, James; Gangji, Azim; Jackson, Paul; Lim, Wendy; Lovrics, Peter; Mazzadi, Sergio; Orovan, William; Rudkowski, Jill; Soth, Mark; Tiboni, Maria; Acedillo, Rey; Garg, Amit; Hildebrand, Ainslie; Lam, Ngan; Macneil, Danielle; Mrkobrada, Marko; Roshanov, Pavel S; Srinathan, Sadeesh K; Ramsey, Clare; John, Philip St; Thorlacius, Laurel; Siddiqui, Faisal S; Grocott, Hilary P; McKay, Andrew; Lee, Trevor W R; Amadeo, Ryan; Funk, Duane; McDonald, Heather; Zacharias, James; Villar, Juan Carlos; Cortés, Olga Lucía; Chaparro, Maria Stella; Vásquez, Skarlett; Castañeda, Alvaro; Ferreira, Silvia; Coriat, Pierre; Monneret, Denis; Goarin, Jean Pierre; Esteve, Cristina Ibanez; Royer, Catherine; Daas, Georges; Chan, Matthew T V; Choi, Gordon Y S; Gin, Tony; 
Lit, Lydia C W; Xavier, Denis; Sigamani, Alben; Faruqui, Atiya; Dhanpal, Radhika; Almeida, Smitha; Cherian, Joseph; Furruqh, Sultana; Abraham, Valsa; Afzal, Lalita; George, Preetha; Mala, Shaveta; Schünemann, Holger; Muti, Paola; Vizza, Enrico; Wang, C Y; Ong, G S Y; Mansor, Marzida; Tan, Alvin S B; Shariffuddin, Ina I; Vasanthan, V; Hashim, N H M; Undok, A Wahab; Ki, Ushananthini; Lai, Hou Yee; Ahmad, Wan Azman; Razack, Azad H A; Malaga, German; Valderrama-Victoria, Vanessa; Loza-Herrera, Javier D; De Los Angeles Lazo, Maria; Rotta-Rotta, Aida; Szczeklik, Wojciech; Sokolowska, Barbara; Musial, Jacek; Gorka, Jacek; Iwaszczuk, Pawel; Kozka, Mateusz; Chwala, Maciej; Raczek, Marcin; Mrowiecki, Tomasz; Kaczmarek, Bogusz; Biccard, Bruce; Cassimjee, Hussein; Gopalan, Dean; Kisten, Theroshnie; Mugabi, Aine; Naidoo, Prebashini; Naidoo, Rubeshan; Rodseth, Reitze; Skinner, David; Torborg, Alex; Paniagua, Pilar; Urrutia, Gerard; Maestre, Mari Luz; Santaló, Miquel; Gonzalez, Raúl; Font, Adrià; Martínez, Cecilia; Pelaez, Xavier; De Antonio, Marta; Villamor, Jose Marcial; García, Jesús Alvarez; Ferré, Maria José; Popova, Ekaterina; Alonso-Coello, Pablo; Garutti, Ignacio; Cruz, Patricia; Fernández, Carmen; Palencia, Maria; Díaz, Susana; Del Castillo, Teresa; Varela, Alberto; de Miguel, Angeles; Muñoz, Manuel; Piñeiro, Patricia; Cusati, Gabriel; Del Barrio, Maria; Membrillo, Maria José; Orozco, David; Reyes, Fidel; Sapsford, Robert J; Barth, Julian; Scott, Julian; Hall, Alistair; Howell, Simon; Lobley, Michaela; Woods, Janet; Howard, Susannah; Fletcher, Joanne; Dewhirst, Nikki; Williams, C; Rushton, A; Welters, I; Leuwer, M; Pearse, Rupert; Ackland, Gareth; Khan, Ahsun; Niebrzegowska, Edyta; Benton, Sally; Wragg, Andrew; Archbold, Andrew; Smith, Amanda; McAlees, Eleanor; Ramballi, Cheryl; Macdonald, Neil; Januszewska, Marta; Stephens, Robert; Reyes, Anna; Paredes, Laura Gallego; Sultan, Pervez; Cain, David; Whittle, John; Del Arroyo, Ana Gutierrez; Sessler, Daniel I; Kurz, Andrea; Sun, Zhuo; Finnegan, Patrick S; Egan, Cameron; Honar, Hooman; Shahinyan, Aram; Panjasawatwong, Krit; Fu, Alexander Y; Wang, Sihe; Reineks, Edmunds; Nagele, Peter; Blood, Jane; Kalin, Megan; Gibson, David; Wildes, Troy</p> <p>2014-03-01</p> <p>Myocardial injury after noncardiac surgery (MINS) was defined as prognostically relevant myocardial injury due to ischemia that occurs during or within 30 days after noncardiac surgery. The study's four objectives were to determine the diagnostic criteria, characteristics, predictors, and 30-day outcomes of MINS. In this international, prospective cohort study of 15,065 patients aged 45 yr or older who underwent in-patient noncardiac surgery, troponin T was measured during the first 3 postoperative days. Patients with a troponin T level of 0.04 ng/ml or greater (elevated "abnormal" laboratory threshold) were assessed for ischemic features (i.e., ischemic symptoms and electrocardiography findings). Patients adjudicated as having a nonischemic troponin elevation (e.g., sepsis) were excluded. To establish diagnostic criteria for MINS, the authors used Cox regression analyses in which the dependent variable was 30-day mortality (260 deaths) and independent variables included preoperative variables, perioperative complications, and potential MINS diagnostic criteria. An elevated troponin after noncardiac surgery, irrespective of the presence of an ischemic feature, independently predicted 30-day mortality. 
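A hedged sketch of the kind of survival analysis described above (not the investigators' code): 30-day mortality modelled with a Cox regression that includes an elevated-troponin indicator alongside a baseline covariate, using the lifelines package and a simulated data frame with placeholder column names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "troponin_elevated": rng.integers(0, 2, n),            # peak TnT above cutoff (0/1), simulated
    "age": rng.normal(65, 10, n),                          # baseline covariate, simulated
    "followup_days": rng.exponential(30, n).clip(0.5, 30), # follow-up capped at 30 days
    "died": rng.integers(0, 2, n),                         # 30-day mortality indicator, simulated
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="died")
print(np.exp(cph.params_))   # exponentiated coefficients = adjusted hazard ratios
```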
Therefore, the authors' diagnostic criterion for MINS was a peak troponin T level of 0.03 ng/ml or greater judged due to myocardial ischemia. MINS was an independent predictor of 30-day mortality (adjusted hazard ratio, 3.87; 95% CI, 2.96-5.08) and had the highest population-attributable risk (34.0%, 95% CI, 26.6-41.5) of the perioperative complications. Twelve hundred patients (8.0%) suffered MINS, and 58.2% of these patients would not have fulfilled the universal definition of myocardial infarction. Only 15.8% of patients with MINS experienced an ischemic symptom. Among adults undergoing noncardiac surgery, MINS is common and associated with substantial mortality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70000436','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70000436"><span>Linking runoff response to burn severity after a wildfire</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Moody, J.A.; Martin, D.A.; Haire, S.L.; Kinner, D.A.</p> <p>2008-01-01</p> <p>Extreme floods often follow wildfire in mountainous watersheds. However, a quantitative relation between the runoff response and burn severity at the watershed scale has not been established. Runoff response was measured as the runoff coefficient C, which is equal to the peak discharge per unit drainage area divided by the average maximum 30 min rainfall intensity during each rain storm. The magnitude of the burn severity was expressed as the change in the normalized burn ratio. A new burn severity variable, hydraulic functional connectivity, was developed and incorporates both the magnitude of the burn severity and the spatial sequence of the burn severity along hillslope flow paths. The runoff response and the burn severity were measured in seven subwatersheds (0.24 to 0.85 km2) in the upper part of Rendija Canyon burned by the 2000 Cerro Grande Fire near Los Alamos, New Mexico, USA. A rainfall-discharge relation was determined for four of the subwatersheds with nearly the same burn severity. The peak discharge per unit drainage area Qupeak was a linear function of the maximum 30 min rainfall intensity I30. This function predicted a rainfall intensity threshold of 8.5 mm h-1 below which no runoff was generated. The runoff coefficient C = Qupeak/I30 was a linear function of the mean hydraulic functional connectivity of the subwatersheds. Moreover, the variability of the mean hydraulic functional connectivity was related to the variability of the mean runoff coefficient, and this relation provides physical insight into why the runoff response from the same subwatershed can vary for different rainstorms with the same rainfall intensity. Published in 2007 by John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23892715','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23892715"><span>Probability-based nitrate contamination map of groundwater in Kinmen.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin</p> <p>2013-12-01</p> <p>Groundwater supplies over 50% of drinking water in Kinmen. 
Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) of NO3 (-)-N (10 mg/L). Residents drinking highly nitrate-polluted groundwater face a potential health risk. To formulate an effective water quality management plan and assure safe drinking water in Kinmen, the detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model. A probability-based nitrate-N contamination map of Kinmen is constructed. The LR model predicted the binary probability of groundwater nitrate-N concentrations exceeding the DWQS from simple measurement variables used as independent variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. The results reveal that three statistically significant explanatory variables, soil type, pH, and EC, were selected in the forward stepwise LR analysis. The overall rate of correct classification reached 92.7%. The map shows the highest probability of nitrate-N contamination in the central zone, indicating that groundwater in the central zone should not be used for drinking purposes. Furthermore, a handy EC-pH-probability curve of nitrate-N exceeding the threshold of the DWQS was developed. This curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. The study recommended that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells of the high nitrate-N contamination zones.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25785866','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25785866"><span>Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Holland, E Penelope; James, Alex; Ruscoe, Wendy A; Pech, Roger P; Byrom, Andrea E</p> <p>2015-01-01</p> <p>Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. 
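The threshold effect of ΔT noted above lends itself to a simple decision rule. The sketch below is a hypothetical illustration only: the logistic form, coefficients, and threshold value are assumptions for the example, not the published mast or mouse-dynamics model.

```python
import math

def outbreak_probability(delta_t, threshold=0.8, steepness=6.0):
    """Assumed logistic response of outbreak probability to delta T (deg C)."""
    return 1.0 / (1.0 + math.exp(-steepness * (delta_t - threshold)))

for dt in (-0.5, 0.0, 0.5, 1.0, 1.5):
    p = outbreak_probability(dt)
    flag = "consider intervention next year" if p > 0.5 else "monitor"
    print(f"delta T = {dt:+.1f} C -> outbreak probability {p:.2f} ({flag})")
```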
The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23229885','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23229885"><span>N0/N1, PNL, or LNR? The effect of lymph node number on accurate survival prediction in pancreatic ductal adenocarcinoma.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Valsangkar, Nakul P; Bush, Devon M; Michaelson, James S; Ferrone, Cristina R; Wargo, Jennifer A; Lillemoe, Keith D; Fernández-del Castillo, Carlos; Warshaw, Andrew L; Thayer, Sarah P</p> <p>2013-02-01</p> <p>We evaluated the prognostic accuracy of LN variables (N0/N1), numbers of positive lymph nodes (PLN), and lymph node ratio (LNR) in the context of the total number of examined lymph nodes (ELN). Patients from SEER and a single institution (MGH) were reviewed and survival analyses performed in subgroups based on numbers of ELN to calculate excess risk of death (hazard ratio, HR). In SEER and MGH, higher numbers of ELN improved the overall survival for N0 patients. The prognostic significance (N0/N1) and PLN were too variable as the importance of a single PLN depended on the total number of LN dissected. LNR consistently correlated with survival once a certain number of lymph nodes were dissected (≥13 in SEER and ≥17 in the MGH dataset). Better survival for N0 patients with increasing ELN likely represents improved staging. PLN have some predictive value but the ELN strongly influence their impact on survival, suggesting the need for a ratio-based classification. LNR strongly correlates with outcome provided that a certain number of lymph nodes is evaluated, suggesting that the prognostic accuracy of any LN variable depends on the total number of ELN.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28457950','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28457950"><span>Practical determination of aortic valve calcium volume score on contrast-enhanced computed tomography prior to transcatheter aortic valve replacement and impact on paravalvular regurgitation: Elucidating optimal threshold cutoffs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bettinger, Nicolas; Khalique, Omar K; Krepp, Joseph M; Hamid, Nadira B; Bae, David J; Pulerwitz, Todd C; Liao, Ming; Hahn, Rebecca T; Vahl, Torsten P; Nazif, Tamim M; George, Isaac; Leon, Martin B; Einstein, Andrew J; Kodali, Susheel K</p> <p></p> <p>The threshold for the optimal computed tomography (CT) number in Hounsfield Units (HU) to quantify aortic valvular calcium on contrast-enhanced scans has not been standardized. 
Our aim was to find the most accurate threshold to predict paravalvular regurgitation (PVR) after transcatheter aortic valve replacement (TAVR). 104 patients who underwent TAVR with the CoreValve prosthesis were studied retrospectively. Luminal attenuation (LA) in HU was measured at the level of the aortic annulus. Calcium volume score for the aortic valvular complex was measured using 6 threshold cutoffs (650 HU, 850 HU, LA × 1.25, LA × 1.5, LA+50, LA+100). Receiver-operating characteristic (ROC) analysis was performed to assess the predictive value for > mild PVR (n = 16). Multivariable analysis was performed to determine the accuracy to predict > mild PVR after adjustment for depth and perimeter oversizing. ROC analysis showed lower area under the curve (AUC) values for fixed threshold cutoffs (650 or 850 HU) compared to thresholds relative to LA. The LA+100 threshold had the highest AUC (0.81), and AUC was higher than all studied protocols, other than the LA x 1.25 and LA + 50 protocols, where the difference approached statistical significance (p = 0.05, and 0.068, respectively). Multivariable analysis showed calcium volume determined by the LAx1.25, LAx1.5, LA+50, and LA+ 100 HU protocols to independently predict PVR. Calcium volume scoring thresholds which are relative to LA are more predictive of PVR post-TAVR than those which use fixed cutoffs. A threshold of LA+100 HU had the highest predictive value. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16371633','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16371633"><span>Disciplinary action by medical boards and prior behavior in medical school.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Papadakis, Maxine A; Teherani, Arianne; Banach, Mary A; Knettler, Timothy R; Rattner, Susan L; Stern, David T; Veloski, J Jon; Hodgson, Carol S</p> <p>2005-12-22</p> <p>Evidence supporting professionalism as a critical measure of competence in medical education is limited. In this case-control study, we investigated the association of disciplinary action against practicing physicians with prior unprofessional behavior in medical school. We also examined the specific types of behavior that are most predictive of disciplinary action against practicing physicians with unprofessional behavior in medical school. The study included 235 graduates of three medical schools who were disciplined by one of 40 state medical boards between 1990 and 2003 (case physicians). The 469 control physicians were matched with the case physicians according to medical school and graduation year. Predictor variables from medical school included the presence or absence of narratives describing unprofessional behavior, grades, standardized-test scores, and demographic characteristics. Narratives were assigned an overall rating for unprofessional behavior. Those that met the threshold for unprofessional behavior were further classified among eight types of behavior and assigned a severity rating (moderate to severe). Disciplinary action by a medical board was strongly associated with prior unprofessional behavior in medical school (odds ratio, 3.0; 95 percent confidence interval, 1.9 to 4.8), for a population attributable risk of disciplinary action of 26 percent. 
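For readers unfamiliar with the two summary measures quoted above, the sketch below computes an odds ratio from a 2x2 case-control table and a Levin-type population attributable risk in which the odds ratio stands in for relative risk and exposure prevalence is estimated from controls. The counts are invented, not the study's data.

```python
def odds_ratio(a, b, c, d):
    """a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def population_attributable_risk(or_, p_exposed):
    """Levin-style PAR given an odds ratio and exposure prevalence (here, among controls)."""
    return p_exposed * (or_ - 1.0) / (p_exposed * (or_ - 1.0) + 1.0)

a, b = 90, 145          # hypothetical cases with / without unprofessional-behavior records
c, d = 70, 399          # hypothetical controls with / without such records
or_ = odds_ratio(a, b, c, d)
p_exp = c / (c + d)
print(f"OR = {or_:.1f}, PAR = {population_attributable_risk(or_, p_exp):.0%}")
```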
The types of unprofessional behavior most strongly linked with disciplinary action were severe irresponsibility (odds ratio, 8.5; 95 percent confidence interval, 1.8 to 40.1) and severely diminished capacity for self-improvement (odds ratio, 3.1; 95 percent confidence interval, 1.2 to 8.2). Disciplinary action by a medical board was also associated with low scores on the Medical College Admission Test and poor grades in the first two years of medical school (1 percent and 7 percent population attributable risk, respectively), but the association with these variables was less strong than that with unprofessional behavior. In this case-control study, disciplinary action among practicing physicians by medical boards was strongly associated with unprofessional behavior in medical school. Students with the strongest association were those who were described as irresponsible or as having diminished ability to improve their behavior. Professionalism should have a central role in medical academics and throughout one's medical career. Copyright 2005 Massachusetts Medical Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017WRR....53.6612L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017WRR....53.6612L"><span>Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.</p> <p>2017-08-01</p> <p>A high-resolution gridded daily precipitation data set was combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds which lead to landsliding in Switzerland. We colocated triggering rainfall to landslides, developed distributions of triggering and nontriggering rainfall event properties, and determined rainfall thresholds and intensity-duration ID curves and validated their performance. The best predictive performance was obtained by the intensity-duration ID threshold curve, followed by peak daily intensity Imax and mean event intensity Imean. Event duration by itself had very low predictive power. A single country-wide threshold of Imax = 28 mm/d was extended into space by regionalization based on surface erodibility and local climate (mean daily precipitation). It was found that wetter local climate and lower erodibility led to significantly higher rainfall thresholds required to trigger landslides. However, we showed that the improvement in model performance due to regionalization was marginal and much lower than what can be achieved by having a high-quality landslide database. Reference cases in which the landslide locations and timing were randomized and the landslide sample size was reduced showed the sensitivity of the Imax rainfall threshold model. Jack-knife and cross-validation experiments demonstrated that the model was robust. 
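A minimal sketch of how a fixed daily-intensity threshold such as Imax = 28 mm/d can be scored as a binary predictor of landslide triggering. The rainfall and landslide series are simulated, and the skill metrics shown (hit rate, false-alarm rate, true skill statistic) are standard choices rather than necessarily those used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
imax = rng.gamma(shape=2.0, scale=10.0, size=5000)           # simulated daily max intensity (mm/d)
landslide = rng.random(5000) < np.clip(imax / 200.0, 0, 1)   # synthetic triggering days

def threshold_skill(intensity, observed, threshold):
    """Score a fixed intensity threshold as a binary landslide predictor."""
    predicted = intensity >= threshold
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    correct_negatives = np.sum(~predicted & ~observed)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate, hit_rate - false_alarm_rate

hr, far, tss = threshold_skill(imax, landslide, threshold=28.0)
print(f"hit rate {hr:.2f}, false-alarm rate {far:.2f}, true skill statistic {tss:.2f}")
```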
The results reported here highlight the potential of using rainfall ID threshold curves and rainfall threshold values for predicting the occurrence of landslides on a country or regional scale with possible applications in landslide warning systems, even with daily data.</p> </li> </ol> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/bul/b2214/+','USGSPUBS'); return false;" href="https://pubs.usgs.gov/bul/b2214/+"><span>Elastic properties of overpressured and unconsolidated sediments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lee, Myung W.</p> <p>2003-01-01</p> <p>Differential pressure affects elastic velocities and Poisson's ratio of sediments in such a way that velocities increase as differential pressure increases. Overpressured zones in sediments can be detected by observing an increase in Poisson's ratio with a corresponding drop in elastic velocities. In highly overpressured sands, such as shallow water flow sands, the P- to S-wave velocity ratio (Vp/Vs) is very high, on the order of 10 or higher, due to the unconsolidated and uncemented nature of sediments. In order to predict elastic characteristics of highly overpressured sands, Biot-Gassmann theory by Lee (BGTL) is used with a variable exponent n that depends on differential pressure and the degree of consolidation/compaction. The exponent n decreases as differential pressure and the degree of consolidation increases, and, as n decreases, velocity increases and Vp/Vs decreases. 
The predicted velocity ratio by BGTL agrees well with the measured velocity ratio at low differential pressure for unconsolidated sediments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27064104','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27064104"><span>Identification of Subgroups of Women with Carpal Tunnel Syndrome with Central Sensitization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fernández-de-Las-Peñas, César; Fernández-Muñoz, Juan J; Navarro-Pardo, Esperanza; da-Silva-Pocinho, Ricardo F; Ambite-Quesada, Silvia; Pareja, Juan A</p> <p>2016-09-01</p> <p>Identification of subjects with different sensitization mechanisms can help to identify better therapeutic strategies for carpal tunnel syndrome (CTS). The aim of the current study was to identify subgroups of women with CTS with different levels of sensitization. A total of 223 women with CTS were recruited. Self-reported variables included pain intensity, function, disability, and depression. Pressure pain thresholds (PPT) were assessed bilaterally over median, ulnar, and radial nerves, C5-C6 joint, carpal tunnel, and tibialis anterior to assess widespread pressure pain hyperalgesia. Heat (HPT) and cold (CPT) pain thresholds were also bilaterally assessed over the carpal tunnel and the thenar eminence to determine thermal pain hyperalgesia. Pinch grip force between the thumb and the remaining fingers was measured to provide a motor assessment. Subgroups were determined according to the status on a previous clinical prediction rule: PPT over the affected C5-C6 joint < 137 kPa, HPT on affected carpal tunnel <39.6ºC, and general health >66 points. The ANOVA showed that women within group 1 (positive rule, n = 60) exhibited greater bilateral widespread pressure hyperalgesia (P < 0.001) and greater bilateral thermal hyperalgesia (P < 0.001) than those within group 2 (negative rule, n = 162). Women in group 1 also exhibited higher depression levels than those in group 2 (P = 0.023). No differences in self-reported variables were observed. This study showed that a clinical prediction rule originally developed for identifying women with CTS who are likely to respond favorably to manual physical therapy was able to identify women exhibiting higher widespread pressure hypersensitivity and thermal hyperalgesia. This subgroup of women with CTS exhibiting higher sensitization may need specific therapeutic programs. © 2016 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PMB....61.8736B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PMB....61.8736B"><span>Stability of radiomic features in CT perfusion maps</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bogowicz, M.; Riesterer, O.; Bundschuh, R. 
A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.</p> <p>2016-12-01</p> <p>This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models for local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomics robustness was investigated regarding the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation (ICC). To gain added value for our model radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 and 0.7 for the ICC and correlation, respectively. The image discretization method using fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones with 56-98% and 43-58% instability rates, respectively. The highest variability was observed for voxel size (instability rate  >97% for both patient cohorts). Without standardization of CTP calculation factors none of the studied radiomic parameters were stable. After standardization with respect to non-standardizable factors ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations. Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008ClDy...30..321K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008ClDy...30..321K"><span>The dynamics of learning about a climate threshold</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Keller, Klaus; McInerney, David</p> <p>2008-02-01</p> <p>Anthropogenic greenhouse gas emissions may trigger threshold responses of the climate system. One relevant example of such a potential threshold response is a shutdown of the North Atlantic meridional overturning circulation (MOC). Numerous studies have analyzed the problem of early MOC change detection (i.e., detection before the forcing has committed the system to a threshold response). Here we analyze the early MOC prediction problem. To this end, we virtually deploy an MOC observation system into a simple model that mimics potential future MOC responses and analyze the timing of confident detection and prediction. 
Our analysis suggests that a confident prediction of a potential threshold response can require century time scales, considerably longer than the time required for confident detection. The signal enabling early prediction of an approaching MOC threshold in our model study is associated with the rate at which the MOC intensity decreases for a given forcing. A faster MOC weakening implies a higher MOC sensitivity to forcing. An MOC sensitivity exceeding a critical level results in a threshold response. Determining whether an observed MOC trend in our model differs in a statistically significant way from an unforced scenario (the detection problem) imposes lower requirements on an observation system than determining whether the MOC will shut down in the future (the prediction problem). As a result, the virtual observation systems designed in our model for early detection of MOC changes might well fail at the task of early and confident prediction. Transferring this conclusion to the real world requires a considerably refined MOC model, as well as a more complete consideration of relevant observational constraints.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4127806','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4127806"><span>Patient-reported immunosuppression nonadherence 6 to 24 months after liver transplant: association with pretransplant psychosocial factors and perceptions of health status change</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rodrigue, James R.; Nelson, David R.; Hanto, Douglas W.; Reed, Alan I.; Curry, Michael P.</p> <p>2014-01-01</p> <p>Context Knowing the prevalence and risk factors of immunosuppression nonadherence after liver transplant may help guide intervention development. Objective To examine whether sociodemographic and psychosocial variables before liver transplant are predictive of nonadherence after liver transplant. Design Structured telephone interviews were used to collect self-report immunosuppression adherence and health status information. Medical record reviews were then completed to retrospectively examine the relationship between immunosuppression adherence and pretransplant variables, including sociodemographic and medical characteristics and the presence or absence of 6 hypothesized psychosocial risk factors. Setting and Participants A nonprobability sample of 236 adults 6 to 24 months after liver transplant at 2 centers completed structured telephone interviews. Main Outcome Measure Immunosuppressant medication nonadherence, categorized as missed-dose and altered-dose “adherent” or “nonadherent” during the past 6 months; immunosuppression medication holidays. Results Eighty-two patients (35%) were missed-dose nonadherent and 34 patients (14%) were altered-dose nonadherent. Seventy-one patients (30%) reported 1 or more 24-hour immunosuppression holidays in the past 6 months. Missed-dose nonadherence was predicted by male sex (odds ratio, 2.46; P = .01), longer time since liver transplant (odds ratio, 1.08; P = .01), pretransplant mood disorder (odds ratio, 2.52; P = .004), and pretransplant social support instability (odds ratio, 2.25; P = .03). 
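A hedged sketch of the kind of logistic model that yields adjusted odds ratios like those reported above for missed-dose nonadherence (not the study's code); the simulated data frame and column names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 236
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "months_since_transplant": rng.uniform(6, 24, n),
    "pre_tx_mood_disorder": rng.integers(0, 2, n),
    "pre_tx_support_instability": rng.integers(0, 2, n),
})
# Simulated outcome loosely tied to the predictors (illustration only).
linpred = (-1.5 + 0.9 * df["male"] + 0.05 * df["months_since_transplant"]
           + 0.9 * df["pre_tx_mood_disorder"] + 0.8 * df["pre_tx_support_instability"])
df["missed_dose_nonadherent"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

X = sm.add_constant(df.drop(columns="missed_dose_nonadherent"))
fit = sm.Logit(df["missed_dose_nonadherent"], X).fit(disp=0)
print(np.exp(fit.params).round(2))   # exponentiated coefficients = adjusted odds ratios
```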
Altered-dose nonadherence was predicted by pretransplant mood disorder (odds ratio, 2.15; P = .04) and pretransplant social support instability (odds ratio, 1.89; P = .03). Conclusion Rates of immunosuppressant nonadherence and drug holidays in the first 2 years after liver transplant are unacceptably high. Pretransplant mood disorder and social support instability increase the risk of nonadherence, and interventions should target these modifiable risk factors. PMID:24311395</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.842a2067D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.842a2067D"><span>Fatigue limit prediction of ferritic-pearlitic ductile cast iron considering stress ratio and notch size</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Deguchi, T.; Kim, H. J.; Ikeda, T.</p> <p>2017-05-01</p> <p>The mechanical behavior of ductile cast iron is governed by graphite particles and casting defects in the microstructures, which can significantly decrease the fatigue strength. In our previous study, the fatigue limit of ferritic-pearlitic ductile cast iron specimens with small defects (√area = 80-1500 μm) could successfully be predicted based on the √area parameter model by using √area as a geometrical parameter of the defect as well as the tensile strength as a material parameter. In addition, the fatigue limit for larger defects could be predicted based on the conventional fracture mechanics approach. In this study, rotating bending and tension-compression fatigue tests with ferritic-pearlitic ductile cast iron containing circumferential sharp notches as well as smooth specimens were performed to investigate quantitatively the effects of defects. The notch depths ranged from 10 to 2500 μm and the notch root radii were 5 and 50 μm. The stress ratios were R = -1 and 0.1. The microscopic observation of crack propagation near the fatigue limit revealed that the fatigue limit was determined by the threshold condition for propagation of a small crack emanating from graphite particles. The fatigue limit could be successfully predicted as a function of R using a method proposed in this study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22108518','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22108518"><span>Specific activity of cyclin-dependent kinase I is a new potential predictor of tumour recurrence in stage II colon cancer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zeestraten, E C M; Maak, M; Shibayama, M; Schuster, T; Nitsche, U; Matsushima, T; Nakayama, S; Gohda, K; Friess, H; van de Velde, C J H; Ishihara, H; Rosenberg, R; Kuppen, P J K; Janssen, K-P</p> <p>2012-01-03</p> <p>There are no established biomarkers to identify tumour recurrence in stage II colon cancer. As shown previously, the enzymatic activity of the cyclin-dependent kinases 1 and 2 (CDK1 and CDK2) predicts outcome in breast cancer. Therefore, we investigated whether CDK activity identifies tumour recurrence in colon cancer. 
In all, 254 patients with completely resected (R0) UICC stage II colon cancer were analysed retrospectively from two independent cohorts from Munich (Germany) and Leiden (Netherlands). None of the patients received adjuvant treatment. Development of distant metastasis was observed in 27 patients (median follow-up: 86 months). Protein expression and activity of CDKs were measured on fresh-frozen tumour samples. Specific activity (SA) of CDK1 (CDK1SA), but not CDK2, significantly predicted distant metastasis (concordance index=0.69, 95% confidence interval (CI): 0.55-0.79, P=0.036). Cutoff derivation by maximum log-rank statistics yielded a threshold of CDK1SA at 11 (SA units, P=0.029). Accordingly, 59% of patients were classified as high-risk (CDK1SA ≥11). Cox proportional hazard analysis revealed CDK1SA as an independent prognostic variable (hazard ratio=6.2, 95% CI: 1.44-26.9, P=0.012). Moreover, CDK1SA was significantly elevated in microsatellite-stable tumours. Specific activity of CDK1 is a promising biomarker for metastasis risk in stage II colon cancer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26412010','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26412010"><span>Feasibility of 30-day hospital readmission prediction modeling based on health information exchange data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Swain, Matthew J; Kharrazi, Hadi</p> <p>2015-12-01</p> <p>Unplanned 30-day hospital readmissions account for roughly $17 billion in annual Medicare spending. Many factors contribute to unplanned hospital readmissions and multiple models have been developed over the years to predict them. Most researchers have used insurance claims or administrative data to train and operationalize their Readmission Risk Prediction Models (RRPMs). Some RRPM developers have also used electronic health records data; however, using health information exchange data has been uncommon among such predictive models and can be beneficial in its ability to provide real-time alerts to providers at the point of care. We conducted a semi-systematic review of readmission predictive factors published prior to March 2013. Then, we extracted and merged all significant variables listed in those articles for RRPMs. Finally, we matched these variables with common HL7 messages transmitted by a sample of health information exchange organizations (HIO). The semi-systematic review resulted in identification of 32 articles and 297 predictive variables. The mapping of these variables with common HL7 segments resulted in an 89.2% total coverage, with the DG1 (diagnosis) segment having the highest coverage of 39.4%. The PID (patient identification) and OBX (observation results) segments covered 13.9% and 9.1% of the variables. Evaluating the same coverage in three sample HIOs showed data incompleteness. HIOs can utilize HL7 messages to develop unique RRPMs for their stakeholders; however, data completeness of exchanged messages should meet certain thresholds. If data quality standards are met by stakeholders, HIOs would be able to provide real-time RRPMs that not only predict intra-hospital readmissions but also inter-hospital cases. An RRPM derived using data exchanged through an HIO may prove to be a useful method to prevent unplanned hospital readmissions. 
In order for the RRPM derived from HIO data to be effective, hospitals must actively exchange clinical information through the HIO and develop actionable methods that integrate into the workflow of providers to ensure that patients at high-risk for readmission receive the care they need. Copyright © 2015. Published by Elsevier Ireland Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JETPL.103..752Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JETPL.103..752Z"><span>Surface ablation of aluminum and silicon by ultrashort laser pulses of variable width</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Makarov, S. V.; Kuchmizhak, A. A.; Vitrik, O. B.; Kulchin, Yu. N.</p> <p>2016-06-01</p> <p>Single-shot thresholds of surface ablation of aluminum and silicon via spallative ablation by infrared (IR) and visible ultrashort laser pulses of variable width τlas (0.2-12 ps) have been measured by optical microscopy. For increasing laser pulse width τlas < 3 ps, a drastic (threefold) drop of the ablation threshold of aluminum has been observed for visible pulses compared to an almost negligible threshold variation for IR pulses. In contrast, the ablation threshold in silicon increases threefold with increasing τlas for IR pulses, while the corresponding thresholds for visible pulses remained almost constant. In aluminum, such a width-dependent decrease in ablation thresholds has been related to strongly diminished temperature gradients for pulse widths exceeding the characteristic electron-phonon thermalization time. In silicon, the observed increase in ablation thresholds has been ascribed to two-photon IR excitation, while in the visible range linear absorption of the material results in almost constant thresholds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28129190','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28129190"><span>Identifying a Probabilistic Boolean Threshold Network From Samples.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya</p> <p>2018-04-01</p> <p>This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and triplets and -tuplets of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. 
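A small, self-contained illustration of the objects discussed above: a Boolean threshold function with unit coefficients (the output is 1 when enough inputs are 1), and a single probabilistic-Boolean-network node that randomly selects one of two such functions at each update. The wiring and probabilities are invented for the example.

```python
import random

def threshold_fn(inputs, threshold):
    """Boolean threshold function with unit coefficients."""
    return int(sum(inputs) >= threshold)

# Node x0 is updated from (x1, x2, x3) by one of two threshold functions,
# chosen with the given probabilities (a one-node PBN fragment).
candidate_thresholds = [1, 3]          # OR-like and AND-like behaviour
probabilities = [0.7, 0.3]

def update_x0(state):
    t = random.choices(candidate_thresholds, weights=probabilities, k=1)[0]
    return threshold_fn((state["x1"], state["x2"], state["x3"]), t)

random.seed(0)
state = {"x1": 1, "x2": 0, "x3": 1}
print([update_x0(state) for _ in range(10)])   # samples of the next value of x0
```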
It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP complete.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21282950','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21282950"><span>The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H</p> <p>2011-01-01</p> <p>To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70000156','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70000156"><span>Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Steuer, J.J.; Newton, T.J.; Zigler, S.J.</p> <p>2008-01-01</p> <p>Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. 
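The prevalence dependence noted in the MoCA/MMSE abstract above follows directly from Bayes' rule. The sketch below derives positive and negative predictive values from sensitivity, specificity, and prevalence; the 98%/52% figures and the 20% prevalence scenario are taken from that abstract, while the formulas are standard rather than specific to the paper.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and disease prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.98, specificity=0.52, prevalence=0.20)
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")   # MoCA cutoff of 26 at 20% prevalence
```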
Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River

USGS Publications Warehouse

Steuer, J.J.; Newton, T.J.; Zigler, S.J.

2008

Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. We then used classification and regression tree analysis to examine the relationships between hydraulic variables and unionid density. We found that boundary Reynolds number, Froude number, boundary shear stress, and grain size were the best predictors of density. Models with complex hydraulic variables were a substantial improvement over previously published discriminant models and correctly classified 65-88% of the observations for the total mussel fauna and six species. These data suggest that unionid beds may be constrained by threshold limits at both ends of the flow regime. Under low flow, mussels may require a minimum hydraulic variable (Re*, Fr) to transport nutrients, oxygen, and waste products. Under high flow, areas with relatively low boundary shear stress may provide a hydraulic refuge for mussels. Data on hydraulic preferences and identification of other conditions that constitute unionid habitat are needed to help restore and enhance habitats for unionids in rivers. © 2008 Springer Science+Business Media B.V.
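The boundary Reynolds number and Froude number named as the best predictors here are not defined in the abstract; the sketch below uses the standard open-channel definitions (Re* = u*·d/ν with shear velocity u* approximated as sqrt(g·h·S), and Fr = U/sqrt(g·h)) as assumptions, with made-up flow conditions, simply to show how such variables are computed from depth, slope, velocity, and grain size.

```python
import math

GRAVITY = 9.81      # m/s^2
NU_WATER = 1.0e-6   # kinematic viscosity of water at ~20 degrees C, m^2/s

def shear_velocity(depth_m, slope):
    """Uniform-flow approximation u* = sqrt(g * h * S)."""
    return math.sqrt(GRAVITY * depth_m * slope)

def boundary_reynolds(u_star, grain_size_m, nu=NU_WATER):
    """Boundary (grain) Reynolds number Re* = u* * d / nu."""
    return u_star * grain_size_m / nu

def froude(mean_velocity_m_s, depth_m):
    """Froude number Fr = U / sqrt(g * h)."""
    return mean_velocity_m_s / math.sqrt(GRAVITY * depth_m)

# Hypothetical low-flow conditions at a mussel bed.
u_star = shear_velocity(depth_m=0.6, slope=1e-4)
print(f"Re* = {boundary_reynolds(u_star, grain_size_m=0.002):.0f}, "
      f"Fr = {froude(0.3, 0.6):.2f}")
```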
ADAPTIVE THRESHOLD LOGIC.

DTIC Science & Technology

The design and construction of a 16-variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound ... and sizes as well as for the 16-input adaptive threshold logic gate. (Author)

Describing temporal variation in reticuloruminal pH using continuous monitoring data.

PubMed

Denwood, M J; Kleen, J L; Jensen, D B; Jonsson, N N

2018-01-01

Reticuloruminal pH has been linked to subclinical disease in dairy cattle, leading to considerable interest in identifying pH observations below a given threshold. The relatively recent availability of continuously monitored data from pH boluses gives new opportunities for characterizing the normal patterns of pH over time and distinguishing these from abnormal patterns using more sensitive and specific methods than simple thresholds. We fitted a series of statistical models to continuously monitored data from 93 animals on 13 farms to characterize normal variation within and between animals. We used a subset of the data to relate deviations from the normal pattern to the productivity of 24 dairy cows from a single herd. Our findings show substantial variation in pH characteristics between animals, although animals within the same farm tended to show more consistent patterns. There was strong evidence for a predictable diurnal variation in all animals, and up to 70% of the observed variation in pH could be explained using a simple statistical model. For the 24 animals with available production information, there was also a strong association between productivity (as measured by both milk yield and dry matter intake) and deviations from the expected diurnal pattern of pH 2 d before the productivity observation. In contrast, there was no association between productivity and the occurrence of observations below a threshold pH. We conclude that statistical models can be used to account for a substantial proportion of the observed variability in pH and that future work with continuously monitored pH data should focus on deviations from a predictable pattern rather than the frequency of observations below an arbitrary pH threshold. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

Validity and reliability of in-situ air conduction thresholds measured through hearing aids coupled to closed and open instant-fit tips.

PubMed

O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey

2010-12-01

Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.

Identifying community thresholds for lotic benthic diatoms in response to human disturbance.

PubMed

Tang, Tao; Tang, Ting; Tan, Lu; Gu, Yuan; Jiang, Wanxiang; Cai, Qinghua

2017-06-23

Although human disturbance indirectly influences lotic assemblages through modifying physical and chemical conditions, identifying thresholds of human disturbance would provide direct evidence for preventing anthropogenic degradation of biological conditions. In the present study, we used data obtained from tributaries of the Three Gorges Reservoir in China to detect effects of human disturbance on streams and to identify disturbance thresholds for benthic diatoms. Diatom species composition was significantly affected by three in-stream stressors including TP, TN and pH.
Diatoms were also influenced by watershed % farmland and natural environmental variables. Considering three in-stream stressors, TP was positively influenced by % farmland and % impervious surface area (ISA). In contrast, TN and pH were principally affected by natural environmental variables. Among measured natural environmental variables, average annual air temperature, average annual precipitation, and topsoil % CaCO3, % gravel, and total exchangeable bases had significant effects on study streams. When effects of natural variables were accounted for, substantial compositional changes in diatoms occurred when farmland or ISA land use exceeded 25% or 0.3%, respectively. Our study demonstrated the rationale for identifying thresholds of human disturbance for lotic assemblages and addressed the importance of accounting for effects of natural factors for accurate disturbance thresholds.

Multiratio fusion change detection with adaptive thresholding

NASA Astrophysics Data System (ADS)

Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

2017-04-01

A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single-ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets, with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single-ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.

Relationship between compliance and persistence with osteoporosis medications and fracture risk in primary health care in France: a retrospective case-control analysis.

PubMed

Cotté, François-Emery; Mercier, Florence; De Pouvourville, Gérard

2008-12-01

Nonadherence to treatment is an important determinant of long-term outcomes in women with osteoporosis.
This study was conducted to investigate the association between adherence and osteoporotic fracture risk and to identify optimal thresholds for good compliance and persistence. A secondary objective was to perform a preliminary evaluation of the cost consequences of adherence. This was a retrospective case-control analysis. Data were derived from the Thales prescription database, which contains information on >1.6 million patients in the primary health care setting in France. Cases were women aged ≥50 years who had an osteoporosis-related fracture in 2006. For each case, 5 matched controls were randomly selected. Both compliance and persistence aspects of treatment adherence were examined. Compliance was estimated based on the medication possession ratio (MPR). Persistence was calculated as the time from the initial filling of a prescription for osteoporosis medication until its discontinuation. The mean (SD) MPR was lower in cases compared with controls (58.8% [34.7%] vs 72.1% [28.8%], respectively; P < 0.001). Cases were more likely than controls to discontinue osteoporosis treatment (50.0% vs 25.3%; P < 0.001), yielding a significantly lower proportion of patients who were still persistent at 1 year (34.1% vs 40.9%; P < 0.001). MPR was the best predictor of fracture risk, with an area under the receiver operating characteristic curve that was higher than that for persistence (0.59 vs 0.55). The optimal MPR threshold for predicting fracture risk was ≥68.0%. Compared with less-compliant women, women who achieved this threshold had a 51% reduction in fracture risk. The difference in annual drug expenditure between women achieving this threshold and those who did not was approximately €300. The optimal threshold for persistence with therapy was at least 6 months. Attaining this threshold was associated with a 28% reduction in fracture risk compared with less-persistent women. In this study, better treatment adherence was associated with a greater reduction in fracture risk. Compliance appeared to predict fracture risk better than did persistence.
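The medication possession ratio used in this record is commonly computed as the total days' supply dispensed divided by the number of days in the observation period. The sketch below assumes that standard definition, caps the ratio at 1.0, and uses invented fill data; only the ≥68% compliance cut-off is taken from the record.

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """MPR = total days' supply dispensed / days in the observation period,
    capped at 1.0. `fills` is a list of (fill_date, days_supply) tuples."""
    period_days = (period_end - period_start).days
    supplied = sum(days for fill_date, days in fills
                   if period_start <= fill_date <= period_end)
    return min(supplied / period_days, 1.0)

# Hypothetical dispensing history over one calendar year.
fills = [(date(2006, 1, 10), 90), (date(2006, 4, 20), 90), (date(2006, 9, 1), 90)]
mpr = medication_possession_ratio(fills, date(2006, 1, 1), date(2006, 12, 31))
compliant = mpr >= 0.68   # threshold reported for fracture-risk prediction
print(f"MPR = {mpr:.2f}, compliant: {compliant}")
```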
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in the single competing talker condition). Model predictions with the binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922

Theoretical branching ratios for the 5I7 to 5I8 levels of Ho(3+) in the garnets A3B2C3O12 (A = Y,La,Lu,Gd; B = Al,Lu,Sc,Ga; C = Al,Ga)

NASA Technical Reports Server (NTRS)

Filer, Elizabeth D.; Morrison, Clyde A.; Turner, Gregory A.; Barnes, Norman P.

1991-01-01

Results are reported from an experimental study investigating triply ionized holmium in 10 garnets using the point-charge model to predict theoretical energy levels and temperature-dependent branching ratios for the 5I7 to 5I8 manifolds for temperatures between 50 and 400 K. Plots were made for the largest lines at 300 K. YScAG was plotted twice, once for each set of X-ray data available. Energy levels are predicted based on theoretical crystal-field parameters, and good agreement with experiment is found. It is suggested that the present set of theoretical crystal-field parameters provides good estimates of the energy levels for the other hosts for which there are no experimental optical data. X-ray and index-of-refraction data are used to evaluate the performance of 10 lasers via a quantum mechanical model to predict the position of the energy levels and the temperature-dependent branching ratios of the 5I7 to 5I8 levels of holmium.
The fractional population inversion required for threshold is also evaluated.

Detection of short-term changes in vegetation cover by use of LANDSAT imagery. [Arizona]

NASA Technical Reports Server (NTRS)

Turner, R. M. (Principal Investigator); Wiseman, F. M.

1975-01-01

The author has identified the following significant results. By using a constant band 6 to band 5 radiance ratio of 1.25, the changing pattern of areas of relatively dense vegetation cover was detected for the semiarid region in the vicinity of Tucson, Arizona. Electronically produced binary thematic masks were used to map areas with dense vegetation. The foliar cover threshold represented by the ratio was not accurately determined, but field measurements show that the threshold lies in the range of 10 to 25 percent foliage cover. Montane evergreen forests with constant dense cover were correctly shown to exceed the threshold on all dates. The summer-active grassland exceeded the threshold in the summer unless rainfall was insufficient. Desert areas exceeded the threshold during the spring of 1973 following heavy rains; the same areas during the rainless spring of 1974 did not exceed the threshold. Irrigated fields, parks, golf courses, and riparian communities were among the habitats most frequently surpassing the threshold.
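The fixed band 6 : band 5 radiance ratio of 1.25 described above maps directly onto a simple array threshold. The sketch below is a generic illustration of such a binary thematic mask using NumPy; the array names and toy values are invented, and no attempt is made to reproduce the original processing chain.

```python
import numpy as np

def vegetation_mask(band6, band5, ratio_threshold=1.25):
    """Binary thematic mask: True where the band 6 : band 5 radiance
    ratio meets or exceeds the threshold (relatively dense vegetation)."""
    band6 = np.asarray(band6, dtype=float)
    band5 = np.asarray(band5, dtype=float)
    ratio = np.divide(band6, band5, out=np.zeros_like(band6),
                      where=band5 > 0)   # avoid division by zero
    return ratio >= ratio_threshold

# Toy 2x2 scene: only the second pixel of the first row exceeds the threshold.
b6 = np.array([[10.0, 26.0], [12.0, 9.0]])
b5 = np.array([[10.0, 20.0], [11.0, 10.0]])
print(vegetation_mask(b6, b5))
```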
The Maternal Legacy: Female Identity Predicts Offspring Sex Ratio in the Loggerhead Sea Turtle.

PubMed

Reneker, Jaymie L; Kamel, Stephanie J

2016-07-01

In organisms with temperature-dependent sex determination, the incubation environment plays a key role in determining offspring sex ratios. Given that global temperatures have warmed approximately 0.6 °C in the last century, it is necessary to consider how organisms will adjust to climate change. To better understand the degree to which mothers influence the sex ratios of their offspring, we use 24 years of nesting data for individual female loggerhead sea turtles (Caretta caretta) observed on Bald Head Island, North Carolina. We find that maternal identity is the best predictor of nest sex ratio in univariate and multivariate predictive models. We find significant variability in estimated nest sex ratios among mothers, but a high degree of consistency within mothers, despite substantial spatial and temporal thermal variation. Our results suggest that individual differences in nesting preferences are the main driver behind divergences in nest sex ratios. As such, a female's ability to plastically adjust her nest sex ratios in response to environmental conditions is constrained, potentially limiting how individuals behaviorally mitigate the effects of environmental change. Given that many loggerhead populations already show female-biased offspring sex ratios, understanding maternal behavioral responses is critical for predicting the future of long-lived species vulnerable to extinction.
Estimating daily climatologies for climate indices derived from climate model data and observations

PubMed Central

Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

2015-01-01

Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192

Explanatory factors and predictors of fatigue in persons with rheumatoid arthritis: A longitudinal study.

PubMed

Feldthusen, Caroline; Grimby-Ekman, Anna; Forsblad-d'Elia, Helena; Jacobsson, Lennart; Mannerkorpi, Kaisa

2016-04-28

To investigate the impact of disease-related aspects on long-term variations in fatigue in persons with rheumatoid arthritis. Observational longitudinal study. Sixty-five persons with rheumatoid arthritis, age range 20-65 years, were invited to a clinical examination at 4 time-points during the 4 seasons. Outcome measures were: general fatigue rated on a visual analogue scale (0-100) and aspects of fatigue assessed by the Bristol Rheumatoid Arthritis Fatigue Multidimensional Questionnaire. Disease-related variables were: disease activity (erythrocyte sedimentation rate), pain threshold (pressure algometer), physical capacity (six-minute walk test), pain (visual analogue scale, 0-100), depressive mood (Hospital Anxiety and Depression scale, depression subscale), personal factors (age, sex, body mass index) and season. Multivariable regression analysis and linear mixed effects models were applied.
The strongest explanatory factors for all fatigue outcomes, when recorded at the same time-point as fatigue, were pain threshold and depressive mood. Self-reported pain was an explanatory factor for physical aspects of fatigue, and body mass index contributed to explaining the consequences of fatigue on everyday living. For predicting later fatigue, pain threshold and depressive mood were the strongest predictors. Pain threshold and depressive mood were the most important factors for fatigue in persons with rheumatoid arthritis.

Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

PubMed

Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

2015-03-01

The question as to whether regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42% of the maximum SUV (SUVmax 42%) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment 18F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42% and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local-scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone.
ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.

The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults

PubMed Central

Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.

2017-01-01

Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping on verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important to significantly predict vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329

Use of plethysmographic variability index derived from the Massimo(®) pulse oximeter to predict fluid or preload responsiveness: a systematic review and meta-analysis.

PubMed

Yin, J Y; Ho, K M

2012-07-01

This systematic review and meta-analysis assessed the accuracy of the plethysmographic variability index derived from the Massimo(®) pulse oximeter to predict preload responsiveness in peri-operative and critically ill patients. A total of 10 studies were retrieved from the literature, involving 328 patients who met the selection criteria.
Overall, the diagnostic odds ratio (16.0; 95% CI 5-48) and area under the summary receiver operating characteristic curve (0.87; 95% CI 0.78-0.95) for the plethysmographic variability index to predict fluid or preload responsiveness were very good, but significant heterogeneity existed. This could be explained by a lower accuracy of the plethysmographic variability index in spontaneously breathing or paediatric patients and in studies that used preload challenges other than colloid fluid. The results indicate specific directions for future studies. Anaesthesia © 2012 The Association of Anaesthetists of Great Britain and Ireland.

Optimization of RAS/BRAF Mutational Analysis Confirms Improvement in Patient Selection for Clinical Benefit to Anti-EGFR Treatment in Metastatic Colorectal Cancer.

PubMed

Santos, Cristina; Azuara, Daniel; Garcia-Carbonero, Rocio; Alfonso, Pilar Garcia; Carrato, Alfredo; Elez, Mª Elena; Gomez, Auxiliadora; Losa, Ferran; Montagut, Clara; Massuti, Bartomeu; Navarro, Valenti; Varela, Mar; Lopez-Doriga, Adriana; Moreno, Victor; Valladares, Manuel; Manzano, Jose Luis; Vieitez, Jose Maria; Aranda, Enrique; Sanjuan, Xavier; Tabernero, Josep; Capella, Gabriel; Salazar, Ramon

2017-09-01

In metastatic colorectal cancer (mCRC), recent studies have shown the importance of accurately quantifying low-abundance mutations of the RAS pathway because anti-EGFR therapy may depend on certain mutation thresholds. We aimed to evaluate the added predictive value of extended RAS panel testing using two commercial assays and a highly sensitive and quantitative digital PCR (dPCR). Tumor samples from 583 mCRC patients treated with anti-EGFR- (n = 255) or bevacizumab- (n = 328) based therapies from several clinical trials and retrospective series from the TTD/RTICC Spanish network were analyzed by cobas, therascreen, and dPCR. We evaluated concordance between techniques using the Cohen kappa index. Response rate, progression-free survival (PFS), and overall survival (OS) were correlated with mutational status and the mutant allele fraction (MAF). Concordance between techniques was high when analyzing RAS and BRAF (Cohen kappa index around 0.75). We observed an inverse correlation between MAF and response in the anti-EGFR cohort (P < 0.001). Likelihood ratio analysis showed that a fraction of 1% or higher of any mutated alleles offered the best predictive value. PFS and OS were significantly longer in RAS/BRAF wild-type patients, independently of the technique. However, the predictability of both PFS and OS was higher when we considered a threshold of 1% in the RAS scenario (HR = 1.53; 95% CI, 1.12-2.09 for PFS, and HR = 1.9; 95% CI, 1.33-2.72 for OS). Although the rate of mutations observed among techniques is different, RAS and BRAF mutational analysis improved prediction of response to anti-EGFR therapy. Additionally, dPCR with a threshold of 1% outperformed the other platforms. Mol Cancer Ther; 16(9); 1999-2007. ©2017 AACR.
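The 1% mutant-allele-fraction cut-off reported above translates into a very simple decision rule. The sketch below is illustrative only: the variant names, the dictionary layout, and the helper function are invented, and a real pipeline would also account for assay limits of detection and sequencing coverage.

```python
def ras_braf_status(mutant_allele_fractions, threshold=0.01):
    """Classify a tumour sample as mutant if any RAS/BRAF variant's
    mutant allele fraction (MAF) is at or above the threshold (here 1%)."""
    hits = {variant: maf for variant, maf in mutant_allele_fractions.items()
            if maf >= threshold}
    return ("mutant", hits) if hits else ("wild-type", {})

# Hypothetical dPCR readout for one sample.
sample = {"KRAS G12D": 0.004, "NRAS Q61K": 0.023, "BRAF V600E": 0.0}
print(ras_braf_status(sample))   # ('mutant', {'NRAS Q61K': 0.023})
```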
Restrictive allograft syndrome (RAS): a novel form of chronic lung allograft dysfunction.

PubMed

Sato, Masaaki; Waddell, Thomas K; Wagnetz, Ute; Roberts, Heidi C; Hwang, David M; Haroon, Ayesha; Wagnetz, Dirk; Chaparro, Cecilia; Singer, Lianne G; Hutcheon, Michael A; Keshavjee, Shaf

2011-07-01

Bronchiolitis obliterans syndrome (BOS), with small-airway pathology and obstructive pulmonary physiology, may not be the only form of chronic lung allograft dysfunction (CLAD) after lung transplantation. Characteristics of a form of CLAD consisting of restrictive functional changes involving peripheral lung pathology were investigated. Patients who received bilateral lung transplantation from 1996 to 2009 were retrospectively analyzed. Baseline pulmonary function was taken as the time of peak forced expiratory volume in 1 second (FEV1). CLAD was defined as an irreversible decline in FEV1 to <80% of baseline. The most accurate threshold to predict irreversible decline in total lung capacity, and thus restrictive functional change, was 90% of baseline. Restrictive allograft syndrome (RAS) was defined as CLAD meeting this threshold. BOS was defined as CLAD without RAS. To estimate the effect on survival, Cox proportional hazards models and Kaplan-Meier analyses were used. Among 468 patients, CLAD developed in 156; of those, 47 (30%) showed the RAS phenotype. Compared with the 109 BOS patients, RAS patients showed significant computed tomography findings of interstitial lung disease (p < 0.0001). The prevalence of RAS was approximately 25% to 35% of all CLAD over time. Patient survival with RAS was significantly worse than with BOS after CLAD onset (median survival, 541 vs 1,421 days; p = 0.0003). The RAS phenotype was the most significant risk factor for death among the variables examined after CLAD onset (hazard ratio, 1.60; confidence interval, 1.23-2.07). RAS is a novel form of CLAD that exhibits characteristics of peripheral lung fibrosis and significantly affects survival of lung transplant patients. Copyright © 2011 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

A drift-diffusion checkpoint model predicts a highly variable and growth-factor-sensitive portion of the cell cycle G1 phase.

PubMed

Jones, Zack W; Leander, Rachel; Quaranta, Vito; Harris, Leonard A; Tyson, Darren R

2018-01-01

Even among isogenic cells, the time to progress through the cell cycle, or the intermitotic time (IMT), is highly variable. This variability has been a topic of research for several decades, and numerous mathematical models have been proposed to explain it.
Previously, we developed a top-down, stochastic drift-diffusion+threshold (DDT) model of a cell cycle checkpoint and showed that it can accurately describe experimentally derived IMT distributions [Leander R, Allen EJ, Garbett SP, Tyson DR, Quaranta V. Derivation and experimental comparison of cell-division probability densities. J. Theor. Biol. 2014;358:129-135]. Here, we use the DDT modeling approach for both descriptive and predictive data analysis. We develop a custom numerical method for the reliable maximum likelihood estimation of model parameters in the absence of a priori knowledge about the number of detectable checkpoints. We employ this method to fit different variants of the DDT model (with one, two, and three checkpoints) to IMT data from multiple cell lines under different growth conditions and drug treatments. We find that a two-checkpoint model best describes the data, consistent with the notion that the cell cycle can be broadly separated into two steps: the commitment to divide and the process of cell division. The model predicts one part of the cell cycle to be highly variable and growth-factor sensitive, while the other is less variable and relatively refractory to growth-factor signaling. Using experimental data that separates IMT into G1 vs. S, G2, and M phases, we show that the model-predicted growth-factor-sensitive part of the cell cycle corresponds to a portion of G1, consistent with previous studies suggesting that the commitment step is the primary source of IMT variability. These results demonstrate that a simple stochastic model, with just a handful of parameters, can provide fundamental insights into the biological underpinnings of cell cycle progression.
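As a rough intuition for the drift-diffusion-plus-threshold picture and its two-step reading of the cell cycle, the Monte-Carlo sketch below sums the first-passage times of two independent drift-diffusion processes to fixed thresholds, one parameterized to be highly variable and one to be more regular. All drifts, noise levels, thresholds, and time units are invented for illustration; this is not the authors' fitted model or their maximum-likelihood procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(drift, sigma, threshold, dt=0.01, t_max=200.0):
    """First time a drift-diffusion process started at 0 crosses `threshold`
    (Euler-Maruyama simulation, capped at t_max)."""
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

def intermitotic_time():
    """IMT as the sum of two sequential checkpoint completion times:
    a variable, 'growth-factor-sensitive' step and a more regular one."""
    commit = first_passage_time(drift=0.08, sigma=0.5, threshold=1.0)  # variable
    divide = first_passage_time(drift=0.50, sigma=0.2, threshold=5.0)  # regular
    return commit + divide

imts = [intermitotic_time() for _ in range(200)]
print(f"mean IMT = {np.mean(imts):.1f} (arbitrary units), "
      f"CV = {np.std(imts) / np.mean(imts):.2f}")
```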
Drainage networks after wildfire

USGS Publications Warehouse

Kinner, D.A.; Moody, J.A.

2005

Predicting runoff and erosion from watersheds burned by wildfires requires an understanding of the three-dimensional structure of both hillslope and channel drainage networks. We investigate the small- and large-scale structures of drainage networks using field studies and computer analysis of a 30-m digital elevation model (DEM). Topologic variables were derived from a composite 30-m DEM, which included 14 order-6 watersheds within the Pikes Peak batholith. Both topologic and hydraulic variables were measured in the field in two smaller burned watersheds (3.7 and 7.0 hectares) located within one of the order-6 watersheds burned by the 1996 Buffalo Creek Fire in central Colorado. Horton ratios of topologic variables (stream number, drainage area, stream length, and stream slope) for small-scale and large-scale watersheds are shown to scale geometrically with stream order (i.e., to be scale invariant). However, the ratios derived for the large-scale drainage networks could not be used to predict the rill and gully drainage network structure. Hydraulic variables (width, depth, cross-sectional area, and bed roughness) for small-scale drainage networks were found to be scale invariant across 3 to 4 stream orders. The relation between hydraulic radius and cross-sectional area is similar for rills and gullies, suggesting that their geometry can be treated similarly in hydraulic modeling. Additionally, the rills and gullies have relatively small width-to-depth ratios, implying that sidewall friction may be important to the erosion and evolutionary process relative to main-stem channels.

Diagnostic Accuracy of APRI, AAR, FIB-4, FI, King, Lok, Forns, and FibroIndex Scores in Predicting the Presence of Esophageal Varices in Liver Cirrhosis

PubMed Central

Deng, Han; Qi, Xingshun; Guo, Xiaozhong

2015-01-01

Aspartate aminotransferase-to-platelet ratio (APRI), aspartate aminotransferase-to-alanine aminotransferase ratio (AAR), FIB-4, FI, King, Lok, Forns, and FibroIndex scores may be simple and convenient noninvasive diagnostic tests, because they are based on regular laboratory tests and demographic data. This study aimed to systematically evaluate their diagnostic accuracy for the prediction of varices in liver cirrhosis. All relevant papers were searched via PubMed, EMBASE, CNKI, and Wanfang databases. The area under the summary receiver operating characteristic curve (AUSROC), sensitivity, specificity, positive and negative likelihood ratios (PLR and NLR), and diagnostic odds ratio (DOR) were calculated. Overall, 12, 4, 5, 0, 0, 4, 3, and 1 papers were identified that explored the diagnostic accuracy of APRI, AAR, FIB-4, FI, King, Lok, Forns, and FibroIndex scores, respectively. The AUSROCs of APRI, AAR, FIB-4, Lok, and Forns scores for the prediction of varices were 0.6774, 0.7275, 0.7755, 0.7885, and 0.7517, respectively; and those for the prediction of large varices were 0.7278, 0.7448, 0.7095, 0.7264, and 0.6530, respectively. The diagnostic threshold effects of FIB-4 and Forns scores for the prediction of varices were statistically significant. The sensitivities/specificities/PLRs/NLRs/DORs of APRI, AAR, and Lok scores for the prediction of varices were 0.60/0.67/1.77/0.58/3.13, 0.64/0.63/1.97/0.54/4.18, and 0.74/0.68/2.34/0.40/5.76, respectively. The sensitivities/specificities/PLRs/NLRs/DORs of APRI, AAR, FIB-4, Lok, and Forns scores for the prediction of large varices were 0.65/0.66/2.15/0.47/4.97, 0.68/0.58/2.07/0.54/3.93, 0.62/0.64/2.02/0.56/3.57, 0.78/0.63/2.09/0.37/5.55, and 0.65/0.61/1.62/0.59/2.75, respectively. APRI, AAR, FIB-4, Lok, and Forns scores had low to moderate diagnostic accuracy in predicting the presence of varices in liver cirrhosis. PMID:26496312
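The likelihood ratios and diagnostic odds ratios listed above follow directly from sensitivity and specificity. The sketch below shows the standard formulas and, as a check, feeds in the pooled APRI values (0.60/0.67); the output is close to the reported 1.77/0.58/3.13, with small differences expected because the published figures were pooled from the underlying 2×2 data rather than from the rounded summary sensitivity and specificity.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the diagnostic odds ratio."""
    plr = sensitivity / (1 - specificity)   # PLR = sens / (1 - spec)
    nlr = (1 - sensitivity) / specificity   # NLR = (1 - sens) / spec
    return plr, nlr, plr / nlr              # DOR = PLR / NLR

# Pooled APRI performance for predicting varices (sens 0.60, spec 0.67).
plr, nlr, dor = likelihood_ratios(0.60, 0.67)
print(f"PLR = {plr:.2f}, NLR = {nlr:.2f}, DOR = {dor:.2f}")
```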
Constraints on Models for the Higgs Boson with Exotic Spin and Parity

DOE Office of Scientific and Technical Information (OSTI.GOV)

Johnson, Emily Hannah

The production of a Higgs boson in association with a vector boson at the Tevatron offers a unique opportunity to study models for the Higgs boson with exotic spin J and parity P assignments. At the Tevatron the VH system is produced near threshold, and different JP assignments of the Higgs boson can be distinguished by examining the behavior of the cross section near threshold. The relatively low backgrounds at the Tevatron compared to the LHC put us in a unique position to study the direct decay of the Higgs boson to fermions. If the Higgs sector is more complex than predicted, studying the spin and parity of the Higgs boson in all decay modes is important. In this thesis we examine the WH → ℓνbb̄ production and decay mode using 9.7 fb⁻¹ of data collected by the D0 experiment in an attempt to derive constraints on models containing exotic values for the spin and parity of the Higgs boson. In particular, we examine models for a Higgs boson with JP = 0⁻ and JP = 2⁺. We use a likelihood ratio to quantify the degree to which our data are incompatible with exotic JP predictions for a range of possible production rates. Assuming the production cross section times branching ratio of the signals in the models considered is equal to the standard model prediction, the WH → ℓνbb̄ mode alone is unable to reject either exotic model considered. We also discuss the combination of the ZH → ℓℓbb̄, WH → ℓνbb̄, and VH → ννbb̄ production modes at the D0 experiment and with the CDF experiment. When combining all three production modes at the D0 experiment we reject the JP = 0⁻ and JP = 2⁺ hypotheses at the 97.6% CL and at the 99.0% CL, respectively, when assuming the signal production cross section times branching ratio is equal to the standard model predicted value. When combining with the CDF experiment we reject the JP = 0⁻ and JP = 2⁺ hypotheses with significances of 5.0 standard deviations and 4.9 standard deviations, respectively.

Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease

PubMed Central

Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa

2013-01-01

Objectives Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes.
Methods The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to the VEMP results. Results In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). Conclusion VEMP parameters have an important clinical role in MD. In particular, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early-stage MD. PMID:23799160

Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

DTIC Science & Technology

2012-10-25

Matching the "real fuel combustion property targets" of hydrogen/carbon molar ratio (H/C), derived cetane number (DCN), threshold sooting index (TSI), and average mean molecular weight (MWave) ... diffusive soot extinction configurations.

CH3NO2 decomposition/isomerization mechanism and product branching ratios: An ab initio chemical kinetic study

NASA Astrophysics Data System (ADS)

Zhu, R. S.; Lin, M. C.

2009-08-01

The low-lying energy pathways for the decomposition/isomerization of nitromethane (NM) have been investigated using different molecular orbital methods. Our results show that in addition to the commonly known CH3 + NO2 products formed by direct C-N bond breaking and the trans-CH3ONO formed by nitro-nitrite isomerization, NM can also isomerize to cis-CH3ONO via a very loose transition state (TS) lying 59.2 kcal/mol above CH3NO2, or 0.6 kcal/mol below the CH3 + NO2 asymptote, as predicted at the UCCSD(T)/CBS level of theory. Kinetic results indicate that in the energy range of 59 ± 1 kcal/mol, production of CH3O + NO is dominant, whereas above the C-N bond-breaking threshold the formation of CH3 + NO2 sharply increases and becomes dominant.
The k(E) values predicted at different energies clearly indicate that CH3O + NO could be detected in an infrared multi-photon dissociation study, whereas in UV dissociation experiments with energies high above the C-N bond-breaking threshold the CH3 + NO2 products are generated predominantly.

Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

PubMed

Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

2017-06-01

The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test, 'Caspar's Castle', at four retinal locations 12.7° from fixation (N = 118). Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m2, duration 200 ms, background luminance 10 cd/m2). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size decreased with increasing age. Intrasubject and intersubject variability were inversely related to age. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
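The up/down staircase procedure mentioned above adjusts the stimulus after every response and estimates the threshold from the reversal points. The sketch below is a generic 1-up/1-down staircase over stimulus diameter with an invented simulated observer; the step size, starting value, and psychometric function are assumptions and do not reflect the actual test parameters beyond what the record states.

```python
import random

def staircase_threshold(respond, start=2.0, step=0.2, min_size=0.2,
                        max_size=4.0, reversals_needed=8):
    """Simple 1-up/1-down staircase over stimulus diameter (degrees):
    shrink the stimulus after each 'seen' response, enlarge it after each
    'missed' response, and average the sizes at the later reversals."""
    size, direction, reversals = start, None, []
    while len(reversals) < reversals_needed:
        step_dir = -1 if respond(size) else +1
        if direction is not None and step_dir != direction:
            reversals.append(size)          # response pattern reversed here
        direction = step_dir
        size = min(max(size + step_dir * step, min_size), max_size)
    return sum(reversals[2:]) / len(reversals[2:])   # skip the first reversals

# Illustrative observer whose detection probability falls off around 1 degree.
observer = lambda s: random.random() < min(1.0, max(0.0, (s - 0.4) / 1.2))
print(f"estimated threshold ~ {staircase_threshold(observer):.2f} deg")
```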
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

PubMed

Ellner, Stephen P; Holmes, Elizabeth E

2008-08-01

We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182), that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

The threshold signal:noise ratio in the perception of fragmented figures.

PubMed

Merkul'ev, A V; Pronin, S V; Semenov, L A; Foreman, N; Chikhman, V N; Shelepin, Yu E

2006-01-01

Perception thresholds were measured for fragmented outline figures (the Gollin test). A new approach to the question of the perception of incomplete images was developed, in which figure fragmentation consisted of masking with multiplicative texture-like noise; this interference was termed "invisible" masking. The first series of studies established that the "similarity" between the amplitude-frequency spectra of test figures and "invisible" masks, expressed as a linear correlation coefficient, had significant effects on the recognition thresholds of these figures. The second series of experiments showed that progressive formation of the figures was accompanied by increases in the correlation between their spatial-frequency characteristics and the corresponding characteristics of the incomplete figure, while the correlation with the "invisible" mask decreased. It is suggested that the ratio of the correlation coefficients, characterizing the "similarity" of the fragmented figure with the intact figure and with the "invisible" mask, corresponds to the signal:noise ratio.
  480. The threshold signal:noise ratio in the perception of fragmented figures.

    PubMed

    Merkul'ev, A V; Pronin, S V; Semenov, L A; Foreman, N; Chikhman, V N; Shelepin, Yu E

    2006-01-01

    Perception thresholds were measured for fragmented outline figures (the Gollin test), and a new approach to the perception of incomplete images was developed. In this approach, figure fragmentation consisted of masking with multiplicative texture-like noise; this interference was termed "invisible" masking. The first series of studies established that the "similarity" between the amplitude-frequency spectra of test figures and "invisible" masks, expressed as a linear correlation coefficient, had significant effects on the recognition thresholds of these figures. The second series of experiments showed that progressive completion of the figures was accompanied by an increase in the correlation between their spatial-frequency characteristics and those of the intact figure, while the correlation with the "invisible" mask decreased. It is suggested that the ratio of the correlation coefficients characterizing the "similarity" of the fragmented figure with the intact figure and with the "invisible" mask corresponds to the signal:noise ratio. For naive subjects unfamiliar with the test image alphabet, the psychophysical recognition threshold was reached at the level of fragmentation at which this ratio reached unity.
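    A minimal sketch of the correlation-ratio idea described above, assuming grayscale images held as NumPy arrays (the array names and the use of 2-D FFT amplitude spectra are illustrative assumptions, not the authors' code):

        import numpy as np

        def amplitude_spectrum(img):
            """2-D amplitude spectrum of an image."""
            return np.abs(np.fft.fftshift(np.fft.fft2(img)))

        def spectral_correlation(img_a, img_b):
            """Linear (Pearson) correlation between two amplitude spectra."""
            a = amplitude_spectrum(img_a).ravel()
            b = amplitude_spectrum(img_b).ravel()
            return np.corrcoef(a, b)[0, 1]

        def signal_to_noise_ratio(fragmented, intact, mask):
            """Similarity to the intact figure divided by similarity to the mask;
            the record above suggests recognition is reached when this ratio is about 1."""
            return spectral_correlation(fragmented, intact) / spectral_correlation(fragmented, mask)

        # Example with random stand-in images (128 x 128 pixels).
        rng = np.random.default_rng(0)
        intact = rng.random((128, 128))
        mask = rng.random((128, 128))
        fragmented = 0.5 * intact + 0.5 * mask
        print(round(signal_to_noise_ratio(fragmented, intact, mask), 3))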
  481. Cost–effectiveness thresholds: pros and cons

    PubMed Central

    Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-01-01

    Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts these thresholds have been used as decision rules for choosing which health interventions to fund and which not to fund. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity, and this, in addition to uncertainty in the modelled cost–effectiveness ratios, can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations, such as budget impact and feasibility, in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in (for example the involvement of civil society organizations and patient groups), and is transparent, consistent and fair. PMID:27994285
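    For orientation, a small Python sketch of the GDP-multiple decision rule that the record above cautions against using in isolation (the intervention, its cost and its effect size are invented for illustration):

        def icer(delta_cost, delta_daly_averted):
            """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
            return delta_cost / delta_daly_averted

        def gdp_based_label(icer_value, gdp_per_capita):
            """WHO-CHOICE-style labels: below 1x GDP per capita 'highly cost-effective',
            below 3x 'cost-effective', otherwise 'not cost-effective'. The record above
            argues such labels should not serve as stand-alone decision rules."""
            if icer_value < gdp_per_capita:
                return "highly cost-effective"
            if icer_value < 3 * gdp_per_capita:
                return "cost-effective"
            return "not cost-effective"

        # Invented example: an intervention costing an extra $12,000 and averting 5 DALYs,
        # evaluated in a country with a GDP per capita of $4,000.
        value = icer(12_000, 5)
        print(value, gdp_based_label(value, 4_000))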
  482. Rainfall thresholds for possible landslide occurrence in Italy

    NASA Astrophysics Data System (ADS)

    Peruccacci, Silvia; Brunetti, Maria Teresa; Gariano, Stefano Luigi; Melillo, Massimo; Rossi, Mauro; Guzzetti, Fausto

    2017-08-01

    The large physiographic variability and the abundance of landslide and rainfall data make Italy an ideal site to investigate variations in the rainfall conditions that can result in rainfall-induced landslides. We used landslide information obtained from multiple sources and rainfall data captured by 2228 rain gauges to build a catalogue of 2309 rainfall events with (mostly shallow) landslides in Italy between January 1996 and February 2014. For each rainfall event with landslides, we reconstructed the rainfall history that presumably caused the slope failure, and we determined the corresponding rainfall duration D (in hours) and cumulated event rainfall E (in mm). Adopting a power law threshold model, we determined cumulated event rainfall-rainfall duration (ED) thresholds, at 5% exceedance probability, and their uncertainty. We defined a new national threshold for Italy and 26 regional thresholds for environmental subdivisions based on topography, lithology, land use, land cover, climate, and meteorology, and we used the thresholds to study the variations of the rainfall conditions that can result in landslides in different environments in Italy. We found that the national and the environmental thresholds cover a small part of the possible ED domain. The finding supports the use of empirical rainfall thresholds for landslide forecasting in Italy, but poses an empirical limitation on the possibility of defining thresholds for small geographical areas. We observed differences between some of the thresholds. With increasing mean annual precipitation (MAP), the thresholds become higher and steeper, indicating that more rainfall is needed to trigger landslides where the MAP is high than where it is low; this suggests that the landscape adjusts to the regional meteorological conditions. We also observed that the thresholds are higher for stronger rocks, and that forested areas require more rainfall than agricultural areas to initiate landslides. Finally, we observed that a 20% exceedance probability national threshold was capable of predicting all the rainfall-induced landslides with casualties between 1996 and 2014, and we suggest that this threshold can be used to forecast fatal rainfall-induced landslides in Italy. We expect the method proposed in this work to define and compare thresholds to have an impact on the definition of new rainfall thresholds for possible landslide occurrence in Italy and elsewhere.

  483. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001), and the DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
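    A minimal sketch of how a DLMO could be computed from a salivary melatonin profile with the two thresholds compared in the record above (the sample times and concentrations are invented, and the linear interpolation of the crossing time is an assumption rather than the authors' exact procedure):

        import numpy as np

        def dlmo(times_h, melatonin_pg_ml, threshold):
            """Return the first time the profile crosses the threshold while rising,
            linearly interpolated between samples; None if never crossed."""
            t = np.asarray(times_h, dtype=float)
            m = np.asarray(melatonin_pg_ml, dtype=float)
            for i in range(1, len(m)):
                if m[i - 1] < threshold <= m[i]:
                    frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
                    return t[i - 1] + frac * (t[i] - t[i - 1])
            return None

        # Invented half-hourly profile, in hours after the start of sampling.
        times = np.arange(0, 6.5, 0.5)
        melatonin = np.array([0.5, 0.7, 0.6, 0.8, 0.9, 1.2, 2.0, 3.5, 6.0, 9.0, 12.0, 15.0, 18.0])

        fixed = dlmo(times, melatonin, threshold=3.0)   # fixed 3 pg/mL threshold
        k3_threshold = melatonin[:3].mean() + 2 * melatonin[:3].std(ddof=1)   # variable "3k" threshold
        k3 = dlmo(times, melatonin, threshold=k3_threshold)
        print(round(fixed, 2), round(k3, 2))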
  484. Waist circumference shows the highest predictive value for metabolic syndrome, and waist-to-hip ratio for its components, in Spanish adolescents.

    PubMed

    Perona, Javier S; Schmidt-RioValle, Jacqueline; Rueda-Medina, Blanca; Correa-Rodríguez, María; González-Jiménez, Emilio

    2017-09-01

    Both waist circumference (WC) and waist-to-hip ratio (WHR) have been proposed as predictors of metabolic syndrome (MetS) in adolescents, but no consensus has been reached to date. This study hypothesizes that WC provides a greater predictive value for MetS in Spanish adolescents than WHR. A cross-sectional study was performed on 1001 adolescents (13.2 ± 1.2 years) randomly recruited from schools in southeast Spain. Anthropometric measures were correlated with the components of MetS (triglycerides, glucose, blood pressure, and high-density lipoprotein cholesterol) as well as with inflammation markers (interleukin-6, tumor necrosis factor-alpha, C-reactive protein, and ceruloplasmin). Receiver operating characteristic curves were created to determine the predictive value of these variables for MetS. Boys had higher values of all anthropometric parameters than girls, but the prevalence of MetS was significantly higher in girls. WHR was the only parameter that correlated significantly with all biochemical and inflammatory variables in boys; in girls, WHR, body mass index, waist-to-height ratio, WC, and body fat percentage correlated only with plasma insulin levels, systolic and diastolic pressures, and ceruloplasmin. In both groups, all anthropometric measures were able to predict MetS (area under the curve > 0.94); in particular, WC predicted MetS with an area under the curve of 1.00. However, WHR predicted a higher number of the components of MetS. WHR was thus the anthropometric index with the highest predictive value for MetS components, whereas WC best predicted MetS itself in the population of adolescents studied. These findings justify the need to incorporate WHR and WC determinations into daily clinical practice to predict MetS. Copyright © 2017 Elsevier Inc. All rights reserved.
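    A small sketch of the ROC-based comparison described above, using scikit-learn on invented data (the toy dataset and variable names are assumptions made purely for illustration):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 200
        has_mets = rng.integers(0, 2, size=n)                  # 0/1 metabolic syndrome label (invented)
        waist_cm = 70 + 15 * has_mets + rng.normal(0, 5, n)    # WC tends to be larger with MetS
        whr = 0.80 + 0.08 * has_mets + rng.normal(0, 0.03, n)  # WHR likewise, with more overlap

        # Area under the ROC curve: probability that a random MetS case has a higher
        # value of the index than a random non-case.
        print("WC  AUC:", round(roc_auc_score(has_mets, waist_cm), 2))
        print("WHR AUC:", round(roc_auc_score(has_mets, whr), 2))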
  485. Uncertainty prediction for PUB

    NASA Astrophysics Data System (ADS)

    Mendiondo, E. M.; Tucci, C. M.; Clarke, R. T.; Castro, N. M.; Goldenfum, J. A.; Chevallier, P.

    2003-04-01

    The IAHS initiative on Prediction in Ungauged Basins (PUB) attempts to integrate monitoring needs and uncertainty prediction for river basins. This paper outlines alternative ways of predicting uncertainty which could be linked with new blueprints for PUB, thereby showing how equifinality-based models can be handled using practical gauging strategies such as the Nested Catchment Experiment (NCE). Uncertainty prediction is discussed using observations from the Potiribu Project, an NCE layout in representative basins of a subtropical biome of 300,000 km2 in South America. Uncertainty prediction is assessed at the microscale (1 m2 plots), at the hillslope (0.125 km2) and at the mesoscale (0.125-560 km2). At the microscale, uncertainty-based models are constrained by temporal variations of state variables, with changing likelihood surfaces of experiments using the Green-Ampt model. Two new blueprints emerged from this NCE for PUB: (1) the Scale Transferability Scheme (STS) at the hillslope scale and (2) the Integrating Process Hypothesis (IPH) at the mesoscale. The STS integrates multi-dimensional scaling with similarity thresholds, as a generalization of the Representative Elementary Area (REA), using spatial correlation from point (distributed) to area (lumped) processes; in this way, the STS addresses uncertainty bounds of model parameters within an upscaling process at the hillslope scale. The IPH approach, on the other hand, regionalizes synthetic hydrographs, thereby interpreting the uncertainty bounds of streamflow variables. Multiscale evidence from the Potiribu NCE layout shows novel pathways of uncertainty prediction under a PUB perspective in representative basins of world biomes.

  486. Quantitative effects of composting state variables on C/N ratio through GA-aided multivariate analysis.

    PubMed

    Sun, Wei; Huang, Guo H; Zeng, Guangming; Qin, Xiaosheng; Yu, Hui

    2011-03-01

    It is widely known that variation of the C/N ratio depends on many state variables during composting processes. This study attempted to develop a genetic-algorithm-aided stepwise cluster analysis (GASCA) method to describe the nonlinear relationships between selected state variables and the C/N ratio in food waste composting. Experimental data from six bench-scale composting reactors were used to demonstrate the applicability of GASCA. Within the GASCA framework, the GA searched for optimal sets of both the specified state variables and SCA's internal parameters; SCA established statistical nonlinear relationships between the state variables and the C/N ratio; and, to avoid unnecessary and time-consuming calculation, a proxy table was introduced that saved around 70% of the computational effort. The obtained GASCA cluster trees had smaller sizes and higher prediction accuracy than conventional SCA trees. Based on the optimal GASCA tree, the effects of the GA-selected state variables on the C/N ratio were ranked in descending order as: NH₄+-N concentration > moisture content > ash content > mean temperature > mesophilic bacteria biomass. Such a ranking implies that the variation of ammonium nitrogen concentration, the associated temperature and moisture conditions, the total loss of both organic matter and available mineral constituents, and the mesophilic bacterial activity were critical factors affecting the C/N ratio during the investigated food waste composting. This first application of GASCA to composting modelling indicates that more direct search algorithms could be coupled with SCA or other multivariate analysis methods to analyse complicated relationships during composting and many other environmental processes. Copyright © 2010 Elsevier B.V. All rights reserved.
  487. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Treesearch

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  488. Risk Prediction Models of Locoregional Failure After Radical Cystectomy for Urothelial Carcinoma: External Validation in a Cohort of Korean Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, Ja Hyeon; Kim, Myong; Jeong, Chang Wook

    2014-08-01

    Purpose: To evaluate the predictive accuracy and general applicability of the locoregional failure model in a different cohort of patients treated with radical cystectomy. Methods and Materials: A total of 398 patients were included in the analysis. Death and isolated distant metastasis were considered competing events, and patients without any events were censored at the time of last follow-up. The model included three variables (pT classification, the number of lymph nodes identified, and margin status) as follows: low risk (≤pT2), intermediate risk (≥pT3 with ≥10 nodes removed and negative margins), and high risk (≥pT3 with <10 nodes removed or positive margins). Results: The bootstrap-corrected concordance index of the model 5 years after radical cystectomy was 66.2%. When the risk stratification was applied to the validation cohort, the 5-year locoregional failure estimates were 8.3%, 21.2%, and 46.3% for the low-risk, intermediate-risk, and high-risk groups, respectively. The risk of locoregional failure differed significantly between the low-risk and intermediate-risk groups (subhazard ratio [SHR], 2.63; 95% confidence interval [CI], 1.35-5.11; P<.001) and between the low-risk and high-risk groups (SHR, 4.28; 95% CI, 2.17-8.45; P<.001). Although decision curves were appropriately affected by the incidence of the competing risk, decisions about the value of the models are not likely to be affected because the model remains of value over a wide range of threshold probabilities. Conclusions: The model is not completely accurate, but it demonstrates a modest level of discrimination, adequate calibration, and a meaningful net benefit gain for prediction of locoregional failure after radical cystectomy.
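    A minimal sketch of the three-level risk stratification described in the record above (the function signature and encodings are assumptions; the grouping rule itself follows the abstract):

        def locoregional_risk_group(pt_stage, nodes_removed, positive_margin):
            """Risk group for locoregional failure after radical cystectomy.
            pt_stage: integer pathologic T stage (2 means <=pT2, 3 or more means >=pT3)."""
            if pt_stage <= 2:
                return "low"                       # <= pT2
            if nodes_removed >= 10 and not positive_margin:
                return "intermediate"              # >= pT3, >= 10 nodes removed, negative margins
            return "high"                          # >= pT3 with < 10 nodes removed or positive margins

        # Reported 5-year locoregional failure estimates per group (from the abstract above).
        five_year_failure = {"low": 0.083, "intermediate": 0.212, "high": 0.463}
        group = locoregional_risk_group(pt_stage=3, nodes_removed=7, positive_margin=False)
        print(group, five_year_failure[group])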
  489. Diagnostic accuracy of the aspartate aminotransferase-to-platelet ratio index for the prediction of hepatitis B-related fibrosis: a leading meta-analysis

    PubMed Central

    2012-01-01

    Background: The aspartate aminotransferase-to-platelet ratio index (APRI), a tool with limited expense and widespread availability, is a promising noninvasive alternative to liver biopsy for detecting hepatic fibrosis. The objective of this study was to systematically review the performance of the APRI in predicting significant fibrosis and cirrhosis in hepatitis B-related fibrosis. Methods: Areas under summary receiver operating characteristic curves (AUROC), sensitivity and specificity were used to examine the accuracy of the APRI for the diagnosis of hepatitis B-related significant fibrosis and cirrhosis, and heterogeneity was explored using meta-regression. Results: Nine studies were included in this meta-analysis (n = 1,798). The prevalences of significant fibrosis and cirrhosis were 53.1% and 13.5%, respectively. The summary AUROCs of the APRI for significant fibrosis and cirrhosis were 0.79 and 0.75, respectively. For significant fibrosis, an APRI threshold of 0.5 was 84% sensitive and 41% specific; at a cutoff of 1.5, the summary sensitivity and specificity were 49% and 84%, respectively. For cirrhosis, an APRI threshold of 1.0-1.5 was 54% sensitive and 78% specific; at a cutoff of 2.0, the summary sensitivity and specificity were 28% and 87%, respectively. Meta-regression analysis indicated that the accuracy of the APRI for both significant fibrosis and cirrhosis was affected by the histological classification system, but not by the interval between biopsy and APRI measurement or by blind biopsy. Conclusion: Our meta-analysis suggests that the APRI shows limited value in identifying hepatitis B-related significant fibrosis and cirrhosis. PMID:22333407
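    The abstract does not restate the APRI formula; the commonly used definition is APRI = (AST / upper limit of normal AST) / platelet count (10^9/L) × 100. A small sketch applying that definition together with the cutoffs discussed above (the input values are invented):

        def apri(ast_iu_l, ast_upper_limit_normal_iu_l, platelets_10e9_l):
            """Aspartate aminotransferase-to-platelet ratio index (commonly used definition)."""
            return (ast_iu_l / ast_upper_limit_normal_iu_l) / platelets_10e9_l * 100

        # Invented patient values: AST 80 IU/L, upper limit of normal 40 IU/L, platelets 150 x 10^9/L.
        score = apri(80, 40, 150)
        print(round(score, 2))
        print("above the 1.5 cutoff for significant fibrosis" if score > 1.5
              else "below the 1.5 cutoff for significant fibrosis")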
  490. Repeatability of Quantitative Whole-Body 18F-FDG PET/CT Uptake Measures as Function of Uptake Interval and Lesion Selection in Non-Small Cell Lung Cancer Patients.

    PubMed

    Kramer, Gerbrand Maria; Frings, Virginie; Hoetjes, Nikie; Hoekstra, Otto S; Smit, Egbert F; de Langen, Adrianus Johannes; Boellaard, Ronald

    2016-09-01

    Change in 18F-FDG uptake may predict response to anticancer treatment. PERCIST suggests a threshold of 30% change in SUV to define partial response and progressive disease. The evidence underlying these thresholds consists of mixed stand-alone PET and PET/CT data with variable uptake intervals and no consensus on the number of lesions to be assessed. Additionally, there is increasing interest in alternative 18F-FDG uptake measures such as metabolically active tumor volume and total lesion glycolysis (TLG). The aim of this study was to comprehensively investigate the repeatability of various quantitative whole-body 18F-FDG metrics in non-small cell lung cancer (NSCLC) patients as a function of tracer uptake interval and lesion selection strategy. Eleven NSCLC patients, each with at least one intrathoracic lesion of 3 cm or greater, underwent double baseline whole-body 18F-FDG PET/CT scans at 60 and 90 min after injection within 3 d. All 18F-FDG-avid tumors were delineated with a 50% threshold of SUVpeak adapted for local background. SUVmax, SUVmean, SUVpeak, TLG, metabolically active tumor volume, and tumor-to-blood and tumor-to-liver ratios were evaluated, as well as the influence of lesion selection and two methods for correction of uptake time differences. The best repeatability was found using the SUV metrics of the averaged PERCIST target lesions (repeatability coefficients < 10%). The correlation between test and retest scans was strong for all uptake measures at either uptake interval (intraclass correlation coefficient > 0.97 and R2 > 0.98). There were no significant differences in repeatability between data obtained 60 and 90 min after injection. When only PERCIST-defined target lesions were included (n = 34), repeatability improved for all uptake values. Normalization to liver or blood uptake and glucose correction did not improve repeatability. However, after correction for uptake time, the correlation of SUV measures and TLG between the 60- and 90-min data improved significantly without affecting test-retest performance. This study suggests that a 15% change in SUVmean/SUVpeak at 60 min after injection can be used to assess response in advanced NSCLC patients if up to 5 PERCIST target lesions are assessed; lower thresholds could be used for averaged PERCIST target lesions (<10%). © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
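    A small sketch of the percentage-change computation behind such response thresholds (the ±30% PERCIST-style cutoff and the 15% alternative discussed above are passed in as parameters; the function itself is an illustrative assumption, and the SUV values are invented):

        def percent_change(suv_baseline, suv_followup):
            """Percentage change in SUV between baseline and follow-up scans."""
            return 100.0 * (suv_followup - suv_baseline) / suv_baseline

        def classify_response(change_pct, threshold_pct=30.0):
            """Classify a change in SUV against a symmetric threshold (e.g. 30% per PERCIST,
            or 15% as suggested in the record above for SUVmean/SUVpeak at 60 min)."""
            if change_pct <= -threshold_pct:
                return "partial metabolic response"
            if change_pct >= threshold_pct:
                return "progressive metabolic disease"
            return "stable metabolic disease"

        change = percent_change(suv_baseline=8.0, suv_followup=6.2)   # invented SUVpeak values
        print(round(change, 1), classify_response(change, threshold_pct=15.0))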
  491. Developmental trends in infant temporal processing speed.

    PubMed

    Saint, Sarah E; Hammond, Billy R; O'Brien, Kevin J; Frick, Janet E

    2017-09-01

    Processing speed, which can be measured behaviorally in various sensory domains, has been shown to be a strong marker of central nervous system health and functioning in adults. Visual temporal processing speed (measured via critical flicker fusion [CFF] thresholds) represents the maximum speed at which the visual system can detect changes. Previous studies of infant CFF development have been limited and inconsistent. The present study sought to characterize the development of CFF thresholds in the first year of life using a larger sample than previous studies and a repeated-measures design (in Experiment 2) to control for individual differences. Experiment 1 (n = 44 infants and n = 24 adults) used a cross-sectional design to examine age-related changes in CFF thresholds across infants during the first year of life; adult data were collected to give context to the infant CFF thresholds obtained under our specific stimulus conditions. Experiment 2 (N = 28) used a repeated-measures design to characterize the developmental trajectory of infant CFF thresholds between three and six months of age, based on the results of Experiment 1. Our results reveal a general increase in CFF from three to four and one-half months of age, with a high degree of variability within each age group. Infant CFF thresholds at 4.5 months of age were not significantly different from the adult average, though a regression analysis of the data from Experiment 2 predicted that infants would reach the adult average closer to 6 months of age. Developmental and clinical implications of these data are discussed. Published by Elsevier Ltd.

  492. Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.

    2015-12-01

    Rainfall-induced landslides are one of the most frequent hazards on sloping terrain, and they lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover, and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. The ASTER GDEM is used as the basis for topographic characterization of slope and for buffer analysis. Slope angle thresholds at the 50th, 75th, 95th, 98th, and 99th percentiles are tested locally, and each threshold is further analysed in relation to the other parameters in a logistic regression framework for the continental U.S. It is determined that thresholds below the 95th percentile underestimate slope angles. The best regression fit is achieved with the 99th-percentile slope angle; this model predicts the highest number of cases correctly, at 87.0% accuracy, and a one-unit rise in the 99th-percentile slope angle increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS, where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters such as precipitation, together with other proxies like soil moisture, will further improve accuracy.
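    A minimal sketch of the percentile-threshold plus logistic-regression idea described above, using scikit-learn on invented data (the feature construction is an assumption; the record's actual workflow uses ASTER GDEM buffers in a GIS):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)

        def slope_percentile_feature(slope_samples_deg, q=99):
            """Summarize the slope angles within a buffer by a high percentile (e.g. the 99th)."""
            return np.percentile(slope_samples_deg, q)

        # Invented training set: buffers around landslide points (label 1) and random points (label 0).
        n = 300
        labels = rng.integers(0, 2, size=n)
        features = np.array([
            slope_percentile_feature(rng.normal(20 + 10 * y, 5, size=50)) for y in labels
        ]).reshape(-1, 1)

        model = LogisticRegression().fit(features, labels)
        odds_ratio = np.exp(model.coef_[0][0])   # change in landslide odds per one-degree rise in the feature
        print(round(odds_ratio, 2), round(model.score(features, labels), 2))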
  493. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguy, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a nonlinear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
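    A minimal sketch of a discrete bi-exponential T2 decay fit with nonlinear least squares, using SciPy on synthetic data (the two-component model and starting values are illustrative assumptions; the record's method additionally uses a linear-prediction initialization and a Rician-aware filter):

        import numpy as np
        from scipy.optimize import curve_fit

        def biexponential(te_ms, a1, t2_1, a2, t2_2):
            """Two-component T2 decay: S(TE) = a1*exp(-TE/T2_1) + a2*exp(-TE/T2_2)."""
            return a1 * np.exp(-te_ms / t2_1) + a2 * np.exp(-te_ms / t2_2)

        # Synthetic multi-echo decay curve (32 echoes, 10 ms spacing) with added noise.
        rng = np.random.default_rng(3)
        te = np.arange(10, 330, 10, dtype=float)
        signal = biexponential(te, 0.3, 20.0, 0.7, 80.0) + rng.normal(0, 0.005, te.size)

        p0 = (0.5, 15.0, 0.5, 100.0)                  # rough starting values for the four parameters
        params, _ = curve_fit(biexponential, te, signal, p0=p0, maxfev=10_000)
        print(np.round(params, 1))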
  494. Effects of ice storm on forest ecosystem of southern China in 2008

    NASA Astrophysics Data System (ADS)

    Wang, Shaoqiang; Zhou, Lei; Ju, Weimin; Huang, Kun

    2014-05-01

    Evidence is mounting that an increase in extreme climate events has begun to occur worldwide during recent decades, affecting biosphere function and biodiversity. Whether an ecosystem returns to its original structure and function, and thus maintains its sustainability, depends closely on ecosystem resilience, so understanding the resilience and recovery capacity of ecosystems after extreme climate events is essential for predicting future ecosystem responses to climate change. Southern China, a region of overwhelming importance in the overall carbon cycle of forest ecosystems in China, suffered a destructive ice storm in 2008. In this study, we used the number of freezing days and a process-based model (the Boreal Ecosystem Productivity Simulator, BEPS) to characterize the spatial distribution of the ice storm region in southeastern China and to explore the impacts on the carbon cycle of the forest ecosystem over the past decade. The ecosystem variables net primary productivity (NPP), evapotranspiration (ET), and water use efficiency (WUE, the ratio of NPP to ET) from the BEPS model outputs were used to assess the resistance and resilience of the forest ecosystem in southern China. The pattern of ice storm-induced, widespread decline in forest productivity was closely related to the number of freezing days during the ice storm period. The NPP of forest areas that suffered heavy ice storm damage returned to normal status after five months of high temperature and ample moisture, indicating a high resilience of subtropical forest in China. Long-term changes in forest WUE remained stable, reflecting an inherent sensitivity of the ecosystem to extreme climate events. In addition, ground visits suggested that the recovery of forest productivity was attributable to rapid growth of the understory. Understanding the variability and recovery thresholds of ecosystems following extreme climate events will help us to better simulate and predict the variability of ecosystem structure and function under current and future climate change.

  495. Model for predicting the injury severity score.

    PubMed

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine a formula that predicts the injury severity score from parameters obtained in the emergency department at arrival, we reviewed the medical records of trauma patients transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out with the injury severity score as the dependent variable and the other parameters as candidate objective variables; the stepwise method was used to select the objective variables, IBM SPSS Statistics 20 was used for the statistical analysis, and statistical significance was set at P < 0.05. A total of 122 patients were included in this study. The formula for predicting the injury severity score (ISS) was: ISS = 13.252 − 0.078 × (mean blood pressure) + 0.12 × (fibrin degradation products). The P-value of this formula from analysis of variance was <0.001, and the multiple correlation coefficient (R) was 0.739 (R2 = 0.546); the multiple correlation coefficient adjusted for the degrees of freedom was 0.538, and the Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was thus developed from ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be estimated easily in the emergency department.
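    A direct transcription of the regression formula reported in the record above; the units (mean blood pressure in mmHg, fibrin degradation products in µg/mL) and the example inputs are assumptions for illustration:

        def predicted_iss(mean_blood_pressure, fibrin_degradation_products):
            """ISS estimate from the multiple linear regression reported in the abstract above."""
            return 13.252 - 0.078 * mean_blood_pressure + 0.12 * fibrin_degradation_products

        # Invented example: mean blood pressure 70 (mmHg assumed), FDP 120 (ug/mL assumed).
        print(round(predicted_iss(70, 120), 1))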
  496. Investigative clinical study on prostate cancer part IV: exploring functional relationships of total testosterone predicting free testosterone and total prostate-specific antigen in operated prostate cancer patients.

    PubMed

    Porcaro, Antonio B; Petrozziello, Aldo; Migliorini, Filippo; Lacola, Vincenzo; Romano, Mario; Sava, Teodoro; Ghimenton, Claudio; Caruso, Beatrice; Zecchini Antoniolli, Stefano; Rubilotta, Emanuele; Monaco, Carmelo; Comunale, Luigi

    2011-01-01

    The aim was to explore, in operated prostate cancer patients, functional relationships of total testosterone (tt) predicting free testosterone (ft) and total PSA. A total of 128 operated prostate cancer patients were simultaneously investigated for tt, ft and PSA before surgery; patients were not receiving 5α-reductase inhibitors, LH-releasing hormone analogues or testosterone replacement treatment. Scatter plots of ft and PSA versus tt were computed in order to assess the functional relationships between the variables, and linear regression analysis of tt predicting ft and PSA was computed. tt was a significant predictor of the response variable ft, and different subsets of the patient population were assessed according to the ft to tt ratio. PSA was related to tt according to a nonlinear law: tt was a significant predictor of PSA according to an inversely nonlinear law, and different significant clusters of the patient population were assessed according to the different constants of proportionality computed from the experimental data. In our prostate cancer population, ft was significantly predicted by tt according to a linear law, and the ft/tt ratio was a significant parameter for assessing the different clusters. Also, tt was a significant variable predicting PSA by a nonlinear law, and different clusters of the patient population were assessed by the different constants of proportionality. As a theory, we explain the nonlinear relation of tt in predicting PSA as follows: (a) the number of androgen-independent prostate cancer cells increases as tumor volume and PSA serum levels rise; (b) these androgen-independent cells increasingly produce a substance which inhibits serum LH; and (c) as a result, lower levels of serum tt are detected. Copyright © 2011 S. Karger AG, Basel.
  497. Multivariate analyses of tinnitus complaint and change in tinnitus complaint: a masker study.

    PubMed

    Jakes, S; Stephens, S D

    1987-11-01

    Multivariate statistical techniques were used to re-analyse the data from the recent DHSS multi-centre masker study. These analyses were undertaken to three ends: first, to clarify and attempt to replicate the previously found factor structure of complaints about tinnitus; secondly, to identify common factors in the change or improvement measures before and after masker treatment; and thirdly, to identify predictors of any such outcome factors. Two complaint factors were identified: 'distress' and 'intrusiveness'. A series of analyses was conducted on change measures using different numbers of subjects and variables. When only semantic differential scales were used, the change factors were very similar to the complaint factors noted above; when variables measuring other aspects of improvement were included, several other factors were identified, including 'tinnitus helped', 'masking effects', 'residual inhibition' and 'matched loudness'. Twenty-five conceptually distinct predictors of outcome were identified, and these predictor variables were quite different for different outcome factors. For example, high-frequency hearing loss was a predictor of tinnitus being helped by the masker, and a low-frequency match together with a low masking threshold predicted therapeutic success on residual inhibition. A decrease in matched loudness was predicted by louder tinnitus initially.
  498. A Random Forest Approach to Predict the Spatial Distribution ...

    EPA Pesticide Factsheets

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to the broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and with direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC), a transport and fate proxy, was a strong predictor of TCS contamination, causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were also strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated wi
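    A minimal sketch of the Random Forest workflow described above, using scikit-learn on invented data (the predictor names mirror those mentioned in the record, but the dataset and the quantile discretization details are assumptions):

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(4)
        n = 400
        data = pd.DataFrame({
            "toc": rng.uniform(0.1, 5.0, n),           # total organic carbon (transport/fate proxy)
            "cso_discharge": rng.uniform(0, 1.0, n),   # combined sewer overflow discharge (entry proxy)
            "sand_fraction": rng.uniform(0, 1.0, n),   # sediment texture proxy
        })
        tcs = (2.0 * data["toc"] + 1.5 * data["cso_discharge"]
               - 1.0 * data["sand_fraction"] + rng.normal(0, 0.5, n))

        model = RandomForestRegressor(n_estimators=300, random_state=0).fit(data, tcs)
        predictions = model.predict(data)

        # Discretize continuous predictions into low / medium / high using tercile thresholds.
        low_cut, high_cut = np.quantile(predictions, [1 / 3, 2 / 3])
        classes = np.digitize(predictions, [low_cut, high_cut])   # 0 = low, 1 = medium, 2 = high

        # Permutation importance, analogous to the MSE increase reported for TOC in the record.
        importance = permutation_importance(model, data, tcs, n_repeats=10, random_state=0)
        print(dict(zip(data.columns, np.round(importance.importances_mean, 3))))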
  499. Towards developing drought impact functions to advance drought monitoring and early warning

    NASA Astrophysics Data System (ADS)

    Bachmair, Sophie; Stahl, Kerstin; Hannaford, Jamie; Svoboda, Mark

    2015-04-01

    In natural hazard analysis, damage functions (also referred to as vulnerability or susceptibility functions) relate hazard intensity to the negative effects of the hazard event, often expressed as a damage ratio or monetary loss. While damage functions for floods and seismic hazards have gained considerable attention, there is little knowledge on how drought intensity translates into ecological and socioeconomic impacts. One reason for this is the multifaceted nature of drought, which affects different domains of the hydrological cycle and different sectors of human activity (meteorological, agricultural, hydrological and socioeconomic drought are commonly distinguished), leading to a wide range of drought impacts. Moreover, drought impacts are often non-structural and hard to quantify or monetize (e.g. impaired navigability of streams, bans on domestic water use, increased mortality of aquatic species). Knowledge of the relationship between drought intensity and drought impacts, i.e. the negative environmental, economic or social effects experienced under drought conditions, is nevertheless vital for identifying critical thresholds of drought impact occurrence. Such information may help to improve drought monitoring and early warning (M&EW), one goal of the international DrIVER project (Drought Impacts: Vulnerability thresholds in monitoring and Early-warning Research). The aim of this study is to test the feasibility of designing "drought impact functions" for case study areas in Europe (Germany and the UK) and the United States in order to derive thresholds meaningful for drought impact occurrence; to account for the multidimensionality of drought impacts, we use the broader term "drought impact function" rather than "damage function". The first steps towards developing empirical drought impact functions are (1) to identify meaningful indicators characterizing hazard intensity (e.g. indicators expressing a precipitation or streamflow deficit), (2) to identify suitable variables representing impacts, damage, or loss due to drought, and (3) to test different statistical models linking drought intensity with drought impact information to derive meaningful thresholds. While the focus regarding drought impact variables lies on text-based impact reports from the European Drought Impact report Inventory (EDII) and the US Drought Impact Reporter (DIR), the information gain from exploiting other variables, such as agricultural yield statistics and remotely sensed vegetation indices, is also explored. First results reveal interesting insights into the complex relationship between drought indicators and impacts and highlight differences among drought impact variables and geographies. Although a single intensity threshold evoking specific drought impacts cannot be identified, developing drought impact functions helps to elucidate how drought conditions relate to ecological or socioeconomic impacts. Such knowledge may provide guidance for inferring meaningful triggers for drought M&EW and could have potential for a wide range of drought management applications (for example, building drought scenarios for testing the resilience of drought plans or water supply systems).

  500. Prediction of County-Level Corn Yields Using an Energy-Crop Growth Index.

    NASA Astrophysics Data System (ADS)

    Andresen, Jeffrey A.; Dale, Robert F.; Fletcher, Jerald J.; Preckel, Paul V.

    1989-01-01

    Weather conditions significantly affect corn yields. While weather remains the major uncontrolled variable in crop production, an understanding of the influence of weather on yields can aid in early and accurate assessment of the impact of weather and climate on crop yields, and allow for timely agricultural extension advisories that help reduce farm management costs and improve marketing decisions. Based on data for four representative counties in Indiana from 1960 to 1984 (excluding 1970 because of the disastrous southern corn leaf blight), a model was developed to estimate corn (Zea mays L.) yields as a function of several composite soil-crop-weather variables and a technology-trend marker, applied nitrogen fertilizer (N). The model was tested by predicting corn yields for 15 other counties. A daily energy-crop growth (ECG) variable in which different weights were used for the three crop-weather variables that make up the daily ECG (solar radiation intercepted by the canopy, a temperature function, and the ratio of actual to potential evapotranspiration) performed better than when the ECG components were weighted equally. The summation of the weighted daily ECG over a relatively short period (36 days spanning silking) was found to provide the best index for predicting county average corn yield. Numerical estimation results indicate that the ratio of actual to potential evapotranspiration (ET/PET) is much more important than the other two ECG factors in estimating county average corn yield in Indiana.
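    A heavily hedged sketch of the index idea in the corn-yield record above: the functional form below (a weighted combination of the three daily components, summed over a 36-day window spanning silking and then used as a regressor alongside applied N) is one plausible reading of the abstract, not the authors' published specification, and all weights and data are invented:

        import numpy as np

        def daily_ecg(intercepted_radiation, temperature_function, et_over_pet, weights=(0.2, 0.3, 0.5)):
            """Weighted daily energy-crop growth (ECG) value; equal weights would be (1/3, 1/3, 1/3)."""
            w1, w2, w3 = weights
            return w1 * intercepted_radiation + w2 * temperature_function + w3 * et_over_pet

        rng = np.random.default_rng(5)
        n_days = 36                                   # window spanning silking
        radiation = rng.uniform(0.4, 1.0, n_days)     # normalized intercepted solar radiation
        temp_fn = rng.uniform(0.5, 1.0, n_days)       # normalized temperature response
        et_pet = rng.uniform(0.3, 1.0, n_days)        # ratio of actual to potential evapotranspiration

        seasonal_ecg = daily_ecg(radiation, temp_fn, et_pet).sum()
        applied_n = 150                               # kg/ha, technology-trend marker (invented)

        # Illustrative linear yield model on the two regressors (coefficients invented).
        yield_estimate = 2.0 + 0.25 * seasonal_ecg + 0.02 * applied_n
        print(round(seasonal_ecg, 1), round(yield_estimate, 1))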