Angeli, Timothy R; O'Grady, Gregory; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; Du, Peng; Pullan, Andrew J; Bissett, Ian P
2013-01-01
Background/Aims Small intestine motility is governed by electrical slow wave activity, and abnormal slow wave events have been associated with intestinal dysmotility. High-resolution (HR) techniques are necessary to analyze slow wave propagation, but progress has been limited by the few available electrode options and by laborious manual analysis. This study presents novel methods for in vivo HR mapping of small intestine slow wave activity. Methods Recordings were obtained along the porcine small intestine using flexible printed circuit board arrays (256 electrodes; 4 mm spacing). Filtering options were compared, and analysis was automated through adaptations of the falling-edge variable-threshold (FEVT) algorithm and graphical visualization tools. Results A Savitzky-Golay filter was chosen with polynomial-order 9 and window size 1.7 seconds, which maintained 94% of slow wave amplitude and 57% of gradient, and achieved a noise correction ratio of 0.083. Optimized FEVT parameters achieved 87% sensitivity and 90% positive-predictive value. Automated activation mapping and animation successfully revealed slow wave propagation patterns, and frequency, velocity, and amplitude were calculated and compared at 5 locations along the intestine (16.4 ± 0.3 cpm, 13.4 ± 1.7 mm/sec, and 43 ± 6 µV, respectively, in the proximal jejunum). Conclusions The methods developed and validated here will greatly assist small intestine HR mapping, and will enable experimental and translational work to evaluate small intestine motility in health and disease. PMID:23667749
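To illustrate the filtering step described above, the following Python sketch applies a Savitzky-Golay filter with the reported polynomial order (9) and window (1.7 seconds) to a synthetic channel. The 30 Hz sampling rate, the synthetic slow-wave signal, and the noise level are assumptions made for the example; they are not parameters reported in the abstract.

```python
import numpy as np
from scipy.signal import savgol_filter

# Assumed sampling rate for the example; the abstract specifies only a
# 1.7 s window and polynomial order 9, not the acquisition rate.
FS_HZ = 30
WINDOW_S = 1.7
POLY_ORDER = 9

# Window length in samples must be odd and larger than the polynomial order.
window_len = int(round(WINDOW_S * FS_HZ))
if window_len % 2 == 0:
    window_len += 1  # 51 samples at 30 Hz

# Synthetic stand-in for one extracellular channel: ~16 cpm slow wave plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS_HZ)                          # 60 s of data
slow_wave = 40e-6 * np.sin(2 * np.pi * (16.4 / 60) * t)  # amplitude in volts (~40 µV)
noisy = slow_wave + rng.normal(0, 15e-6, t.size)

filtered = savgol_filter(noisy, window_length=window_len, polyorder=POLY_ORDER)
```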
Variable-Threshold Threshold Elements
A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied, variable-threshold threshold elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the
Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo
Fontaine, Bertrand; Peña, José Luis; Brette, Romain
2014-01-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
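A minimal numerical sketch of the kind of adaptive-threshold mechanism described above: the spike threshold relaxes toward a value set by the recent membrane potential with a short time constant, so slow voltage fluctuations are tracked (and therefore filtered out) while brief, coincident depolarizations can still cross threshold. The equation form, parameters, and toy input below are illustrative assumptions, not the model fitted in the paper.

```python
import numpy as np

dt = 0.1            # ms
tau_theta = 5.0     # ms, assumed fast adaptation time constant
theta_0 = -50.0     # mV, baseline threshold (assumed)
alpha = 0.5         # assumed coupling of threshold to membrane potential

t = np.arange(0, 200, dt)
# Toy membrane potential: slow drift plus one brief, fast depolarization
v = -65 + 5 * np.sin(2 * np.pi * t / 100.0)
v[(t > 120) & (t < 122)] += 20  # fast coincident input

theta = np.empty_like(v)
theta[0] = theta_0
spikes = []
for i in range(1, t.size):
    # Threshold relaxes toward a value that tracks the membrane potential.
    theta_inf = theta_0 + alpha * (v[i - 1] + 65.0)
    theta[i] = theta[i - 1] + dt / tau_theta * (theta_inf - theta[i - 1])
    if v[i] >= theta[i]:
        spikes.append(t[i])

if spikes:
    print(f"{len(spikes)} threshold crossings; first at {spikes[0]:.1f} ms")
else:
    print("no threshold crossings (slow fluctuations are filtered out)")
```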
Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C
2002-08-01
To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
Müller, Dirk; Pulm, Jannis; Gandjour, Afschin
2012-01-01
To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Pavlaković, G; Züchner, K; Zapf, A; Bachmann, C G; Graf, B M; Crozier, T A; Pavlaković, H
2009-08-01
Various factors can influence thermal perception threshold measurements and contribute significantly to unwanted variability of the tests. To minimize this variability, testing should be performed under strictly controlled conditions. Identifying the factors that increase the variability and eliminating their influence should increase reliability and reproducibility. Currently available thermotesting devices use a water-cooling system that generates a continuous noise of approximately 60 dB. In order to analyze whether this noise could influence the thermal threshold measurements we compared the thresholds obtained with a silent thermotesting device to those obtained with a commercially available device. The subjects were tested with one randomly chosen device on 1 day and with the other device 7 days later. At each session, heat, heat pain, cold, and cold pain thresholds were determined with three measurements. Bland-Altman analysis was used to assess agreement in measurements obtained with different devices and it was shown that the intersubject variability of the thresholds obtained with the two devices was comparable for all four thresholds tested. In contrast, the intrasubject variability of the thresholds for heat, heat pain, and cold pain detection was significantly lower with the silent device. Our results show that thermal sensory thresholds measured with the two devices are comparable. However, our data suggest that, for studies with repeated measurements on the same subjects, a silent thermotesting device may allow detection of smaller differences in the treatment effects and/or may permit the use of a smaller number of tested subjects. Muscle Nerve 40: 257-263, 2009.
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies, a factor may have no influence, or a positive effect, on the outcome variable within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether a threshold effect exists in the relationship between a factor (x) and the outcome variable (y) can first be checked by smooth curve fitting, to see whether the relationship is piecewise linear, and then analyzed with a segmented regression model, a likelihood ratio test (LRT), and bootstrap resampling. Empower Stats software, developed by X & Y Solutions Inc (USA), includes a threshold effect analysis module. The user can either supply a threshold value, so that the data are segmented at that given threshold, or let the software determine the optimal threshold automatically and calculate its confidence interval.
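A minimal, non-proprietary sketch of the threshold-effect analysis described above (it does not reproduce Empower Stats): a two-segment linear model is fitted over a grid of candidate thresholds, the threshold minimizing the residual sum of squares is retained, and a likelihood-ratio-style comparison against the single-line model indicates whether a threshold effect is supported. Wrapping this procedure in a bootstrap would yield a confidence interval for the threshold.

```python
import numpy as np

def fit_segmented(x, y, threshold):
    """Two-segment (hinge) linear fit; returns residual sum of squares and coefficients."""
    hinge = np.clip(x - threshold, 0, None)
    X = np.column_stack([np.ones_like(x), x, hinge])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid), beta

def threshold_analysis(x, y, n_grid=100):
    """Grid-search the change point and compare against a single straight line."""
    grid = np.quantile(x, np.linspace(0.05, 0.95, n_grid))
    fits = [fit_segmented(x, y, c) for c in grid]
    best = int(np.argmin([f[0] for f in fits]))
    rss_seg, beta = fits[best]

    X_lin = np.column_stack([np.ones_like(x), x])
    beta_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
    resid_lin = y - X_lin @ beta_lin
    rss_lin = float(resid_lin @ resid_lin)

    lr = x.size * np.log(rss_lin / rss_seg)   # likelihood-ratio-style statistic
    return grid[best], beta, lr

# Illustrative data with a true change point at x = 5
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = 1.0 + 1.5 * np.clip(x - 5.0, 0, None) + rng.normal(0, 0.5, x.size)
cp, beta, lr = threshold_analysis(x, y)
print(f"estimated threshold {cp:.2f}, LR statistic {lr:.1f}")
```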
Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc
2016-04-01
In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared as a powerful method at a first exploratory step, to detect ecological thresholds at large spatial scale. The thresholds that were identified here must be reinforced by the separate analysis of other aquatic communities and may be used then to set protective environmental standards after consideration of natural variability among lakes.
Frank, T
2001-04-01
The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of +/-10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift > or = 20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within +/-10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.
Bierer, Julie Arenberg
2007-03-01
The efficacy of cochlear implants is limited by spatial and temporal interactions among channels. This study explores the spatially restricted tripolar electrode configuration and compares it to bipolar and monopolar stimulation. Measures of threshold and channel interaction were obtained from nine subjects implanted with the Clarion HiFocus-I electrode array. Stimuli were biphasic pulses delivered at 1020 pulses/s. Threshold increased from monopolar to bipolar to tripolar stimulation and was most variable across channels with the tripolar configuration. Channel interaction, quantified by the shift in threshold between single- and two-channel stimulation, occurred for all three configurations but was largest for the monopolar and simultaneous conditions. The threshold shifts with simultaneous tripolar stimulation were slightly smaller than with bipolar and were not as strongly affected by the timing of the two channel stimulation as was monopolar. The subjects' performances on clinical speech tests were correlated with channel-to-channel variability in tripolar threshold, such that greater variability was related to poorer performance. The data suggest that tripolar channels with high thresholds may reveal cochlear regions of low neuron survival or poor electrode placement.
Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics
Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong
2016-01-01
The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified into parameter thresholds and state thresholds. Voltage thresholds, which belong to the state-threshold category, are determined by the ‘general separatrix’ in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of the separatrix was assumed to be a function of both states and stimuli, and the previously assumed equation for the evolution of the threshold in time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability. PMID:27546614
Variability of argon laser-induced sensory and pain thresholds on human oral mucosa and skin.
Svensson, P.; Bjerring, P.; Arendt-Nielsen, L.; Kaaber, S.
1991-01-01
The variability of laser-induced pain perception on human oral mucosa and hairy skin was investigated in order to establish a new method for evaluation of pain in the orofacial region. A high-energy argon laser was used for experimental pain stimulation, and sensory and pain thresholds were determined. The intra-individual coefficients of variation for oral thresholds were comparable to cutaneous thresholds. However, inter-individual variation was smaller for oral thresholds, which could be due to larger variation in cutaneous optical properties. The short-term and 24-hr changes in thresholds on both surfaces were less than 9%. The results indicate that habituation to laser thresholds may account for part of the intra-individual variation observed. However, the subjective ratings of the intensity of the laser stimuli were constant. Thus, oral thresholds may, like cutaneous thresholds, be used for assessment and quantification of analgesic efficacies and to investigate various pain conditions. PMID:1814248
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Surface ablation of aluminum and silicon by ultrashort laser pulses of variable width
NASA Astrophysics Data System (ADS)
Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Makarov, S. V.; Kuchmizhak, A. A.; Vitrik, O. B.; Kulchin, Yu. N.
2016-06-01
Single-shot thresholds of surface ablation of aluminum and silicon via spallative ablation by infrared (IR) and visible ultrashort laser pulses of variable width τlas (0.2-12 ps) have been measured by optical microscopy. For increasing laser pulse width τlas < 3 ps, a drastic (threefold) drop of the ablation threshold of aluminum has been observed for visible pulses compared to an almost negligible threshold variation for IR pulses. In contrast, the ablation threshold in silicon increases threefold with increasing τlas for IR pulses, while the corresponding thresholds for visible pulses remained almost constant. In aluminum, such a width-dependent decrease in ablation thresholds has been related to strongly diminished temperature gradients for pulse widths exceeding the characteristic electron-phonon thermalization time. In silicon, the observed increase in ablation thresholds has been ascribed to two-photon IR excitation, while in the visible range linear absorption of the material results in almost constant thresholds.
Identifying a Probabilistic Boolean Threshold Network From Samples.
Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya
2018-04-01
This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and of triplets and larger tuples of such functions, necessitates a deepening of the theoretical analyses. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP complete.
The design and construction of a 16-variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound...and sizes as well as for the 16-input adaptive threshold logic gate. (Author)
O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey
2010-12-01
Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.
Identifying community thresholds for lotic benthic diatoms in response to human disturbance.
Tang, Tao; Tang, Ting; Tan, Lu; Gu, Yuan; Jiang, Wanxiang; Cai, Qinghua
2017-06-23
Although human disturbance indirectly influences lotic assemblages through modifying physical and chemical conditions, identifying thresholds of human disturbance would provide direct evidence for preventing anthropogenic degradation of biological conditions. In the present study, we used data obtained from tributaries of the Three Gorges Reservoir in China to detect effects of human disturbance on streams and to identify disturbance thresholds for benthic diatoms. Diatom species composition was significantly affected by three in-stream stressors including TP, TN and pH. Diatoms were also influenced by watershed % farmland and natural environmental variables. Considering three in-stream stressors, TP was positively influenced by % farmland and % impervious surface area (ISA). In contrast, TN and pH were principally affected by natural environmental variables. Among measured natural environmental variables, average annual air temperature, average annual precipitation, and topsoil % CaCO3, % gravel, and total exchangeable bases had significant effects on study streams. When effects of natural variables were accounted for, substantial compositional changes in diatoms occurred when farmland or ISA land use exceeded 25% or 0.3%, respectively. Our study demonstrated the rationale for identifying thresholds of human disturbance for lotic assemblages and addressed the importance of accounting for effects of natural factors for accurate disturbance thresholds.
Discharge variability and bedrock river incision on the Hawaiian island of Kaua'i
NASA Astrophysics Data System (ADS)
Huppert, K.; Deal, E.; Perron, J. T.; Ferrier, K.; Braun, J.
2017-12-01
Bedrock river incision occurs during floods that generate sufficient shear stress to strip riverbeds of sediment cover and erode underlying bedrock. Thresholds for incision can prevent erosion at low flows and slow down erosion at higher flows that do generate excess shear stress. Because discharge distributions typically display power-law tails, with non-negligible frequencies of floods much greater than the mean, models incorporating stochastic discharge and incision thresholds predict that discharge variability can sometimes have greater effects on long-term incision rates than mean discharge. This occurs when the commonly observed inverse scalings between mean discharge and discharge variability are weak or when incision thresholds are high. Because the effects of thresholds and discharge variability have only been documented in a few locations, their influence on long-term river incision rates remains uncertain. The Hawaiian island of Kaua'i provides an ideal natural laboratory to evaluate the effects of discharge variability and thresholds on bedrock river incision because it has one of Earth's steepest spatial gradients in mean annual rainfall and it also experiences dramatic spatial variations in rainfall and discharge variability, spanning a wide range of the conditions reported on Earth. Kaua'i otherwise has minimal variations in lithology, vertical motion, and other factors that can influence erosion. River incision rates averaged over 1.5 - 4.5 Myr timescales can be estimated along the lengths of Kauaian channels from the depths of river canyons and lava flow ages. We characterize rainfall and discharge variability on Kaua'i using records from an extensive network of rain and stream gauges spanning the past century. We use these characterizations to model long-term bedrock river incision along Kauaian channels with a threshold-dependent incision law, modulated by site-specific discharge-channel width scalings. Our comparisons between modeled and observed erosion rates suggest that variations in river incision rates on Kaua'i are dominated by variations in mean rainfall and discharge, rather than by differences in storminess across the island. We explore the implications of this result for the threshold dependence of river incision across Earth's varied climates.
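The effect of discharge variability described above can be illustrated with a small Monte Carlo sketch: discharges are drawn from a distribution with a fixed mean but adjustable variability, shear stress scales with discharge, and incision occurs only above a critical shear stress. The gamma discharge model, exponents, and critical stress below are illustrative assumptions, not the calibrated incision law used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_incision(q_mean, cv, tau_c=1.0, k=1.0, a=1.5, b=0.6, n=200_000):
    """Long-term mean incision rate for a threshold law E = k*(tau - tau_c)^a,
    with tau proportional to Q^b, under gamma-distributed discharge of given mean
    and coefficient of variation (all constants are illustrative)."""
    shape = 1.0 / cv**2
    scale = q_mean / shape
    q = rng.gamma(shape, scale, size=n)
    tau = q**b
    excess = np.clip(tau - tau_c, 0.0, None)
    return k * np.mean(excess**a)

# Same mean discharge, different variability: more variable discharge spends more
# time above the incision threshold and yields a higher long-term incision rate.
for cv in (0.5, 1.0, 2.0):
    print(f"CV = {cv:.1f}: mean incision rate ~ {mean_incision(q_mean=1.0, cv=cv):.3f}")
```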
Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.
Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq
2017-06-01
The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years old) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The Game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
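The up/down staircase procedure mentioned above can be sketched as follows; the 1-up/2-down rule, step size, number of reversals, and the simulated observer are illustrative assumptions rather than the parameters of the Caspar's Castle test.

```python
import numpy as np

rng = np.random.default_rng(2)

def staircase(true_threshold, start=2.0, step=0.2, n_reversals=8):
    """1-up/2-down staircase on stimulus diameter (deg); converges near the
    ~71%-seen point of the simulated observer."""
    level, direction = start, -1
    consecutive_seen, reversals = 0, []
    while len(reversals) < n_reversals:
        # Simulated observer: probability of seeing grows with stimulus size.
        p_seen = 1.0 / (1.0 + np.exp(-(level - true_threshold) / 0.15))
        seen = rng.random() < p_seen
        if seen:
            consecutive_seen += 1
            if consecutive_seen == 2:          # two seen in a row -> make it harder
                consecutive_seen = 0
                if direction == +1:
                    reversals.append(level)    # direction changed: record a reversal
                direction = -1
                level -= step
        else:                                  # missed -> make it easier
            consecutive_seen = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
        level = max(level, 0.05)
    return np.mean(reversals[-6:])             # average of the last reversals

print(f"estimated threshold diameter ~ {staircase(true_threshold=1.0):.2f} deg")
```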
Calculating the dim light melatonin onset: the impact of threshold and sampling rate.
Molina, Thomas A; Burgess, Helen J
2011-10-01
The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
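The two threshold rules compared above can be written down directly. The sketch below takes salivary melatonin concentrations sampled at known clock times, forms the fixed 3 pg/mL threshold and the variable '3k' threshold (mean plus two standard deviations of the first three low daytime samples), and linearly interpolates the clock time at which the profile first crosses each threshold. The example profile is synthetic.

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold):
    """Linearly interpolated clock time at which melatonin first rises above threshold."""
    m = np.asarray(melatonin_pg_ml, dtype=float)
    t = np.asarray(times_h, dtype=float)
    above = np.nonzero(m >= threshold)[0]
    if above.size == 0 or above[0] == 0:
        return None                             # threshold never crossed on the rise
    i = above[0]
    frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Synthetic half-hourly profile from 18:00 to 24:00 (illustrative values only)
times = np.arange(18.0, 24.01, 0.5)
mel = np.array([0.8, 1.0, 0.9, 1.1, 1.3, 2.0, 3.5, 5.5, 8.0, 11.0, 14.0, 16.0, 18.0])

fixed = 3.0                                     # fixed 3 pg/mL threshold
k3 = mel[:3].mean() + 2 * mel[:3].std(ddof=1)   # variable "3k" threshold

print(f"DLMO (3 pg/mL): {dlmo(times, mel, fixed):.2f} h")
print(f"DLMO (3k = {k3:.2f} pg/mL): {dlmo(times, mel, k3):.2f} h")
```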
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
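The second key finding above is easy to reproduce: even when risk rises smoothly with the measured variable, scanning all candidate cut-points and keeping the "most significant" one always produces a threshold, and it tends to sit near the sample mean. The sketch below is a toy version of such a simulation (logistic event model, chi-squared test at each cut-point) under assumed parameters; it is not the authors' code.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

def apparent_threshold(n=1500, mean=20.0, sd=5.0):
    """Cut-point with the smallest chi-squared p-value when event risk rises
    linearly with the variable (i.e. there is no true step anywhere)."""
    x = rng.normal(mean, sd, n)
    p_event = np.clip(0.1 + 0.02 * (x - mean), 0.01, 0.99)   # linear risk, no step
    event = rng.random(n) < p_event
    best_p, best_cut = 1.0, None
    for cut in np.quantile(x, np.linspace(0.1, 0.9, 81)):
        lo, hi = event[x < cut], event[x >= cut]
        table = [[lo.sum(), lo.size - lo.sum()],
                 [hi.sum(), hi.size - hi.sum()]]
        p = chi2_contingency(table)[1]
        if p < best_p:
            best_p, best_cut = p, cut
    return best_cut, best_p

cuts = [apparent_threshold()[0] for _ in range(20)]
print(f"apparently-optimal thresholds cluster near the sample mean of 20: "
      f"{np.mean(cuts):.1f} +/- {np.std(cuts):.1f}")
```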
Dealing with Unknown Variables in Policy/Program Evaluation.
ERIC Educational Resources Information Center
Nagel, Stuart S.
1983-01-01
Threshold analysis (TA) is introduced as an evaluation model. TA converts unknown variables into questions as to whether a given benefit, cost, or success probability is more or less than a threshold, above which the proposed project would be profitable, and below which it would be unprofitable. (Author/PN)
Yu, Yuguo; Shu, Yousheng; McCormick, David A.
2008-01-01
Neocortical action potential responses in vivo are characterized by considerable threshold variability, and thus timing and rate variability, even under seemingly identical conditions. This finding suggests that cortical ensembles are required for accurate sensorimotor integration and processing. Intracellularly, trial-to-trial variability results not only from variation in synaptic activities, but also in the transformation of these into patterns of action potentials. Through simultaneous axonal and somatic recordings and computational simulations, we demonstrate that the initiation of action potentials in the axon initial segment followed by backpropagation of these spikes throughout the neuron results in a distortion of the relationship between the timing of synaptic and action potential events. In addition, this backpropagation also results in an unusually high rate of rise of membrane potential at the foot of the action potential. The distortion of the relationship between the amplitude time course of synaptic inputs and action potential output caused by spike back-propagation results in the appearance of high spike threshold variability at the level of the soma. At the point of spike initiation, the axon initial segment, threshold variability is considerably less. Our results indicate that spike generation in cortical neurons is largely as expected by Hodgkin-Huxley theory and is more precise than previously thought. PMID:18632930
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated to the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
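A simplified, non-hierarchical sketch of the threshold-response idea described above: a logistic occurrence model whose land-use effect switches on at an estimated change point, fitted by profiling the change point over a grid. The hierarchical Bayesian machinery, spatial structure, and brook trout data of the actual HBTM are not reproduced; the data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta)) - y @ eta

def fit_changepoint_logistic(urban, y, grid=None):
    """Profile a change point c: occurrence ~ logit^-1(b0 + b1 * (urban - c)_+)."""
    if grid is None:
        grid = np.linspace(0.01, 0.6, 60)
    best = (np.inf, None, None)
    for c in grid:
        X = np.column_stack([np.ones_like(urban), np.clip(urban - c, 0, None)])
        res = minimize(neg_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
        if res.fun < best[0]:
            best = (res.fun, c, res.x)
    return best[1], best[2]

# Synthetic data: occurrence probability drops sharply once urban cover exceeds 0.15
urban = rng.uniform(0, 0.6, 500)
p = 1 / (1 + np.exp(-(2.0 - 12.0 * np.clip(urban - 0.15, 0, None))))
y = (rng.random(500) < p).astype(float)

c_hat, beta_hat = fit_changepoint_logistic(urban, y)
print(f"estimated change point: {c_hat:.3f}; slope beyond it: {beta_hat[1]:.2f}")
```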
Impact of rainfall spatial variability on Flash Flood Forecasting
NASA Astrophysics Data System (ADS)
Douinot, Audrey; Roux, Hélène; Garambois, Pierre-André; Larnier, Kevin
2014-05-01
According to the United States National Hazard Statistics database, flooding and flash flooding have caused the largest number of deaths of any weather-related phenomenon over the last 30 years (Flash Flood Guidance Improvement Team, 2003). Like the storms that cause them, flash floods are very variable and non-linear phenomena in time and space, with the result that understanding and anticipating flash flood genesis is far from straightforward. In the U.S., the Flash Flood Guidance (FFG) estimates the average number of inches of rainfall for given durations required to produce flash flooding in the indicated county. In Europe, flash floods often occur on small catchments (approximately 100 km2) and it has been shown that the spatial variability of rainfall has a great impact on the catchment response (Le Lay and Saulnier, 2007). Therefore, in this study, based on the Flash Flood Guidance method, rainfall spatial variability information is introduced into the threshold estimation. As for FFG, the threshold is the number of millimeters of rainfall required to produce a discharge higher than the discharge corresponding to the first level (yellow) warning of the French flood warning service (SCHAPI: Service Central d'Hydrométéorologie et d'Appui à la Prévision des Inondations). The indexes δ1 and δ2 of Zoccatelli et al. (2010), based on the spatial moments of catchment rainfall, are used to characterize the rainfall spatial distribution. Rainfall spatial variability impacts on warning threshold and on hydrological processes are then studied. The spatially distributed hydrological model MARINE (Roux et al., 2011), dedicated to flash flood prediction, is forced with synthetic rainfall patterns of different spatial distributions. This allows the determination of a warning threshold diagram: knowing the spatial distribution of the rainfall forecast and therefore the 2 indexes δ1 and δ2, the threshold value is read on the diagram. A warning threshold diagram is built for each studied catchment. The proposed methodology is applied to three Mediterranean catchments often subject to flash floods. The new forecasting method as well as the Flash Flood Guidance method (uniform rainfall threshold) are tested on 25 flash flood events that occurred on those catchments. Results show a significant impact of rainfall spatial variability. Indeed, it appears that the uniform rainfall threshold (FFG threshold) always overestimates the observed rainfall threshold. The difference between the FFG threshold and the proposed threshold ranges from 8% to 30%. The proposed methodology allows the calculation of a threshold more representative of the observed one. However, results strongly depend on the related event duration and on the catchment properties. For instance, the impact of the rainfall spatial variability seems to be correlated with the catchment size. According to these results, it seems to be interesting to introduce information on the catchment properties in the threshold calculation. Flash Flood Guidance Improvement Team, 2003. River Forecast Center (RFC) Development Management Team. Final Report. Office of Hydrologic Development (OHD), Silver Spring, Maryland. Le Lay, M. and Saulnier, G.-M., 2007. Exploring the signature of climate and landscape spatial variabilities in flash flood events: Case of the 8-9 September 2002 Cévennes-Vivarais catastrophic event. Geophysical Research Letters, 34(L13401), doi:10.1029/2007GL029746. Roux, H., Labat, D., Garambois, P.-A., Maubourguet, M.-M., Chorda, J.
and Dartus, D., 2011. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments. Nat. Hazards Earth Syst. Sci. (NHESS), 11(9), 2567-2582. Zoccatelli, D., Borga, M., Zanon, F., Antonescu, B. and Stancalie, G., 2010. Which rainfall spatial information for flash flood response modelling? A numerical investigation based on data from the Carpathian range, Romania. Journal of Hydrology, 394(1-2), 148-161.
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
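A bare-bones numerical illustration of the RD-ITT estimate described above: within a bandwidth around the threshold, fit separate linear regressions of the outcome on the centered assignment variable on each side, and take the difference of the fitted values at the threshold. The bandwidth, the data-generating process, and the local-linear choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def rd_itt(assign, outcome, cutoff, bandwidth):
    """Local-linear regression discontinuity ITT estimate at the cutoff."""
    z = assign - cutoff
    use = np.abs(z) <= bandwidth
    z, y, above = z[use], outcome[use], (z[use] >= 0)

    def intercept(mask):
        # Intercept of the side-specific fit = predicted outcome at the cutoff.
        X = np.column_stack([np.ones(mask.sum()), z[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]

    return intercept(above) - intercept(~above)

# Toy example: exposure assigned when a biomarker crosses 140, true ITT effect = -4
biomarker = rng.normal(140, 15, 5000)
treated = biomarker >= 140
outcome = 0.05 * biomarker - 4.0 * treated + rng.normal(0, 2, 5000)

print(f"RD-ITT estimate: {rd_itt(biomarker, outcome, cutoff=140, bandwidth=10):.2f}"
      " (true effect -4)")
```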
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
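A minimal sketch of a two-regime threshold autoregressive fit with an external transition variable, in the spirit of the models compared above: the threshold on the transition variable is chosen by grid search, and a separate AR(1) model is fitted in each regime. The model order, synthetic series, and least-squares fitting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_ar1(y_t, y_lag):
    """Least-squares AR(1) fit; returns coefficients and residual sum of squares."""
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    resid = y_t - X @ beta
    return beta, float(resid @ resid)

def fit_tar(y, transition, n_grid=50):
    """Two-regime TAR(1) with an external transition variable; threshold by grid search."""
    y_t, y_lag, s = y[1:], y[:-1], transition[1:]
    best = (np.inf, None, None, None)
    for thr in np.quantile(s, np.linspace(0.15, 0.85, n_grid)):
        low, high = s <= thr, s > thr
        if low.sum() < 10 or high.sum() < 10:
            continue                           # require enough data in each regime
        b_lo, rss_lo = fit_ar1(y_t[low], y_lag[low])
        b_hi, rss_hi = fit_ar1(y_t[high], y_lag[high])
        if rss_lo + rss_hi < best[0]:
            best = (rss_lo + rss_hi, thr, b_lo, b_hi)
    return best[1:]

# Synthetic example: AR(1) dynamics switch when the transition variable exceeds 0
n = 600
s = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(1, n):
    phi = 0.2 if s[t] <= 0 else 0.8
    y[t] = phi * y[t - 1] + rng.normal(0, 0.5)

thr, beta_low, beta_high = fit_tar(y, s)
print(f"threshold ~ {thr:.2f}; AR coefficients: {beta_low[1]:.2f} (low regime), "
      f"{beta_high[1]:.2f} (high regime)")
```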
Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G
2012-09-01
This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded on the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-01-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1–2 deg s−1. After unilateral vestibular injury patients’ direction–discrimination thresholds worsen to ∼20 deg s−1, and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s−1). While thresholds showed slight improvement by week 3 (25 deg s−1), they never recovered to control values – a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues. PMID:24366259
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
A factorization approach to next-to-leading-power threshold logarithms
NASA Astrophysics Data System (ADS)
Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.
2015-06-01
Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide an NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.
Introducing hydrological information in rainfall intensity-duration thresholds
NASA Astrophysics Data System (ADS)
Greco, Roberto; Bogaard, Thom
2016-04-01
Regional landslide hazard assessment is mainly based on empirically derived precipitation-intensity-duration (PID) thresholds. Generally, two features of rainfall events are plotted to discriminate between observed occurrence and absence of occurrence of mass movements, and a separation line is then drawn in logarithmic space. Although successfully applied in many case studies, such PID thresholds suffer from many false positives as well as limited physical process insight. One of the main limitations is indeed that they do not include any information about the hydrological processes occurring along the slopes, so that the triggering is only related to rainfall characteristics. In order to introduce such hydrological information into the definition of rainfall thresholds for shallow landslide triggering assessment, this study proposes the use of non-dimensional rainfall characteristics. In particular, rain storm depth, intensity and duration are divided by a characteristic infiltration depth, a characteristic infiltration rate and a characteristic duration, respectively. These latter variables depend on the hydraulic properties and on the moisture state of the soil cover at the beginning of the precipitation. The proposed variables are applied to the case of a slope covered with shallow pyroclastic deposits in Cervinara (southern Italy), for which experimental data of hourly rainfall and soil suction were available. Rainfall thresholds defined with the proposed non-dimensional variables perform significantly better than those defined with dimensional variables, either in the intensity-duration plane or in the depth-duration plane.
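A rough numeric illustration of the rescaling idea above, written in Python: storm depth, duration and intensity are divided by characteristic infiltration values before being compared with a threshold curve in the non-dimensional plane. The characteristic values, the example storm and the power-law threshold are all placeholder assumptions, not quantities calibrated for the Cervinara site.

```python
# Hypothetical illustration of non-dimensional rainfall thresholds.
# All characteristic values and the threshold curve below are placeholders.

def nondimensionalize(rain_depth_mm, duration_h, intensity_mm_h,
                      char_depth_mm=100.0, char_duration_h=24.0, char_rate_mm_h=5.0):
    """Scale storm depth, duration and intensity by a characteristic
    infiltration depth, duration and rate (assumed values)."""
    return (rain_depth_mm / char_depth_mm,
            duration_h / char_duration_h,
            intensity_mm_h / char_rate_mm_h)

# Example: a 60 mm storm lasting 12 h (mean intensity 5 mm/h)
depth_star, duration_star, intensity_star = nondimensionalize(60.0, 12.0, 5.0)

# A power-law threshold I* = a * D*^b in the non-dimensional plane (a, b assumed)
a, b = 1.0, -0.5
exceeds_threshold = intensity_star > a * duration_star ** b
print(depth_star, duration_star, intensity_star, exceeds_threshold)
```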
Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M
2009-11-01
The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O(2).kg body mass(-1).min(-1). The technical error of measurement was 8.1% (circa 0.9 ml.kg(-1).min(-1); 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
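For readers unfamiliar with the technical error of measurement, the short Python sketch below computes a pooled multi-reader TEM and expresses it as a percentage of the grand mean. The readings are invented, and the estimator shown is one common definition of the TEM, not necessarily the exact estimator used in the study.

```python
import numpy as np

# Sketch of inter-reader variability statistics, assuming a matrix of
# anaerobic-threshold readings (rows = tests, columns = readers).
# The values below are made up for illustration.
readings = np.array([
    [10.2, 10.8, 10.5],
    [12.1, 11.6, 12.4],
    [ 9.4,  9.9,  9.1],
])  # ml O2 per kg per min

n_tests, n_readers = readings.shape
grand_mean = readings.mean()

# Technical error of measurement: pooled within-test SD across readers
within_test_var = ((readings - readings.mean(axis=1, keepdims=True)) ** 2).sum()
tem = np.sqrt(within_test_var / (n_tests * (n_readers - 1)))
tem_percent = 100 * tem / grand_mean

print(f"grand mean = {grand_mean:.1f}, TEM = {tem:.2f} ({tem_percent:.1f}%)")
```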
Using Multiple Metrics to Analyze Trends and Sensitivity of Climate Variability in New York City
NASA Astrophysics Data System (ADS)
Huang, J.; Towey, K.; Booth, J. F.; Baez, S. D.
2017-12-01
As the overall temperature of Earth continues to warm, changes in the Earth's climate are being observed through extreme weather events, such as heavy precipitation events and heat waves. This study examines the daily precipitation and temperature record of the greater New York City region during the 1979-2014 period. Daily station observations from three greater New York City airports, John F. Kennedy (JFK), LaGuardia (LGA) and Newark (EWR), are used. Multiple statistical metrics are applied to analyze trends and variability in temperature and precipitation in the region. The temperature climatology reveals a distinct seasonal cycle, while the precipitation climatology exhibits greater annual variability. Two types of thresholds are used to examine the variability of extreme events: an extreme threshold and a daily anomaly threshold. The extreme threshold indicates how the strength of the overall maximum is changing, whereas the daily anomaly threshold indicates whether the strength of the daily maximum is changing over time. We observed an increase in the frequency of anomalous daily precipitation events over the last 36 years, with the greatest frequency occurring in 2011. The most extreme precipitation events occur during the months of late summer through early fall, with approximately four extreme events expected per year during the summer and fall. For temperature, the greatest frequency and variation in temperature anomalies occur during winter and spring. Temperature variance is also analyzed to determine if there is greater day-to-day temperature variability today than in the past.
Motor Unit Interpulse Intervals During High Force Contractions.
Stock, Matt S; Thompson, Brennan J
2016-01-01
We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.
Donner, Simon D
2011-07-01
Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1°C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results find that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1°C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability like the equatorial Pacific could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
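The Python sketch below, on synthetic SST values, contrasts a fixed 1°C bleaching threshold with a threshold scaled by the local interannual variability of maximum SST, in the spirit of the first variability-based method; the scaling factor k is an assumption, not the value derived in the study.

```python
import numpy as np

# Sketch of a variability-based bleaching threshold, assuming a time series of
# annual maximum monthly SST for one reef pixel (values are synthetic, degC).
annual_max_sst = np.array([29.1, 29.4, 28.9, 29.8, 29.2, 29.6, 29.0, 29.5])

clim_max = annual_max_sst.mean()          # climatological maximum
sigma_max = annual_max_sst.std(ddof=1)    # interannual SD of the annual maximum

# Fixed-threshold rule: bleaching predicted when SST exceeds clim_max + 1 degC
fixed_threshold = clim_max + 1.0

# Variability-based rule: threshold scaled by the local interannual SD
# (k is an assumed scaling factor)
k = 2.0
variability_threshold = clim_max + k * sigma_max

print(fixed_threshold, variability_threshold)
```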
A longitudinal study on the ammonia threshold in junior cyclists
Yuan, Y; Chan, K
2004-01-01
Objectives: To identify the effect of a one year non-specific training programme on the ammonia threshold of a group of junior cyclists and to correlate ammonia threshold with other common physiological variables. Methods: The cyclists performed tests at three time points (T1, T2, T3) during the year. Follow up tests were conducted every six months after the original test. Ammonia threshold was obtained from a graded exercise with four minute steps. Results: The relatively non-specific one year training programme was effective in inducing an increase in peak VO2 (60.6 (5.9), 65.9 (7.4), and 64.6 (6.5) ml/min/kg at T1, T2, and T3 respectively) and endurance time (18.3 (4.5), 20.1 (5.2), and 27.0 (6.1) minutes at T1, T2, and T3 respectively), but was not effective for the sprint related variables. Ammonia threshold, together with lactate threshold and ventilatory threshold, was not significantly different at the three test times. Only endurance time correlated significantly with ammonia threshold (r = 0.915, p = 0.001). Conclusions: The findings suggest that a relatively non-specific one year training programme does not modify the ammonia threshold of junior cyclists. The significant correlation between ammonia threshold and endurance time further confirms that ammonia threshold is a measure of the ability to sustain exercise at submaximal intensities. PMID:15039242
Massof, Robert W
2014-10-01
A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Symstad, Amy J.; Jonas, Jayne L.; Edited by Guntenspergen, Glenn R.
2014-01-01
Natural range of variation (NRV) may be used to establish decision thresholds or action assessment points when ecological thresholds are either unknown or do not exist for attributes of interest in a managed ecosystem. The process for estimating NRV involves identifying spatial and temporal scales that adequately capture the heterogeneity of the ecosystem; compiling data for the attributes of interest via study of historic records, analysis and interpretation of proxy records, modeling, space-for-time substitutions, or analysis of long-term monitoring data; and quantifying the NRV from those data. At least 19 National Park Service (NPS) units in North America’s Great Plains are monitoring plant species richness and evenness as indicators of vegetation integrity in native grasslands, but little information on natural, temporal variability of these indicators is available. In this case study, we use six long-term vegetation monitoring datasets to quantify the temporal variability of these attributes in reference conditions for a variety of Great Plains grassland types, and then illustrate the implications of using different NRVs based on these quantities for setting management decision thresholds. Temporal variability of richness (as measured by the coefficient of variation, CV) is fairly consistent across the wide variety of conditions occurring in Colorado shortgrass prairie to Minnesota tallgrass sand savanna (CV 0.20–0.45) and generally less than that of production at the same sites. Temporal variability of evenness spans a greater range of CV than richness, and it is greater than that of production in some sites but less in other sites. This natural temporal variability may mask undesirable changes in Great Plains grasslands vegetation. Consequently, we suggest that managers consider using a relatively narrow NRV (interquartile range of all richness or evenness values observed in reference conditions) for designating a surveillance threshold, at which greater attention to the situation would be paid, and a broader NRV for designating management thresholds, at which action would be instigated.
Modeled summer background concentration nutrients and ...
We used regression models to predict background concentration of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators with land use and other stressor variables, we determined the concentration of the indicators when the land use and stressor variables were all set to zero (the y-intercept). Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentration from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one variable representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 ug l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 ug l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than on the other rivers.
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable that follows an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate accordingly to reflect their attitude on the health of the general economy.
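As a simplified illustration of a two-regime Taylor rule, the Python sketch below splits synthetic quarterly data at a fixed unemployment threshold and fits each regime by least squares. The fixed threshold stands in for the latent autoregressive threshold estimated in the paper, and all series and coefficients are invented.

```python
import numpy as np

# Minimal two-regime Taylor rule sketch with synthetic quarterly data.
# A fixed unemployment threshold is used in place of the paper's latent AR threshold.
rng = np.random.default_rng(0)
T = 200
inflation_gap = rng.normal(0, 1, T)
output_gap = rng.normal(0, 1, T)
unemployment = rng.normal(6, 1.5, T)
policy_rate = 2 + 1.5 * inflation_gap + 0.5 * output_gap + rng.normal(0, 0.3, T)

threshold = 6.5                          # assumed reference unemployment rate
recession = unemployment > threshold     # regime indicator

def fit_taylor_rule(mask):
    """Least-squares Taylor rule within one regime."""
    X = np.column_stack([np.ones(mask.sum()), inflation_gap[mask], output_gap[mask]])
    beta, *_ = np.linalg.lstsq(X, policy_rate[mask], rcond=None)
    return beta  # [intercept, response to inflation gap, response to output gap]

print("expansion regime:", fit_taylor_rule(~recession))
print("recession regime:", fit_taylor_rule(recession))
```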
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
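A minimal Python sketch of the grid-search idea: a bent-line (continuous threshold) model fitted by asymmetric (expectile) least squares at each candidate threshold, keeping the threshold with the lowest loss. The data are synthetic, and the routine omits the inference machinery (CUSUM test, asymptotics) described in the abstract.

```python
import numpy as np

# Continuous threshold expectile regression, simplified single-covariate sketch:
# y = b0 + b1*x + b2*(x - t)_+ fitted by iteratively reweighted least squares.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 6.0, 0) + rng.normal(0, 0.5, 300)

def fit_expectile_bentline(x, y, t, tau=0.5, n_iter=50):
    """Fit the bent-line model at candidate threshold t for expectile level tau."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0)])
    w = np.full_like(y, 0.5)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1 - tau)   # asymmetric squared-error weights
    loss = np.sum(w * resid ** 2)
    return beta, loss

grid = np.linspace(1, 9, 81)
losses = [fit_expectile_bentline(x, y, t, tau=0.7)[1] for t in grid]
t_hat = grid[int(np.argmin(losses))]
print("estimated threshold:", t_hat)
```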
Suppression of threshold voltage variability in MOSFETs by adjustment of ion implantation parameters
NASA Astrophysics Data System (ADS)
Park, Jae Hyun; Chang, Tae-sig; Kim, Minsuk; Woo, Sola; Kim, Sangsig
2018-01-01
In this study, we investigate threshold voltage (VTH) variability of metal-oxide-semiconductor field-effect transistors induced by random dopant fluctuation (RDF). Our simulation work demonstrates not only the influence of ion implantation parameters such as dose, tilt angle, energy, and rotation angle on the RDF-induced VTH variability, but also a way to reduce this variability. By adjusting the ion implantation parameters, the 3σ (VTH) is reduced from 43.8 mV to 28.9 mV. This 34% reduction is significant, considering that our technique is very cost effective and facilitates easy fabrication, increasing availability.
Rebuilding DEMATEL threshold value: an example of a food and beverage information system.
Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin
2016-01-01
This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among them, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships of variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
Control of growth of juvenile leaves of Eucalyptus globulus: effects of leaf age.
Metcalfe, J C; Davies, W J; Pereira, J S
1991-12-01
Biophysical variables influencing the expansion of plant cells (yield threshold, cell wall extensibility and turgor) were measured in individual Eucalyptus globulus leaves from the time of emergence until cessation of growth. Leaf water relations variables and growth rates were determined as relative humidity was changed on an hourly basis. Yield threshold and cell wall extensibility were estimated from plots of leaf growth rate versus turgor. Cell wall extensibility was also measured by the Instron technique, and yield threshold was determined experimentally both by stress relaxation in a psychrometer chamber and by incubation in a range of polyethylene glycol solutions. Once emerging leaves reached approximately 5 cm(2) in size, increases in leaf area were rapid throughout the expansive phase and varied little between light and dark periods. Both leaf growth rate and turgor were sensitive to changes in humidity, and in the longer term, both yield threshold and cell wall extensibility changed as the leaf aged. Rapidly expanding leaves had a very low yield threshold and high cell wall extensibility, whereas mature leaves had low cell wall extensibility. Yield threshold increased with leaf age.
Mitochondrial threshold effects.
Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry
2003-01-01
The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494
Barreiro, Jesús; Castro-Feijoo, Lidia; Colón, Cristóbal; Cabanas, Paloma; Heredia, Claudia; Castaño, Luis Antonio; Gómez-Lado, Carmen; Couce, M.Luz; Pombo, Manuel
2011-01-01
We report a case of congenital hypothyroidism (CH) with neurological and respiratory alterations due to a heterozygotic c.374-1G>A mutation of TITF1/NKX2-1. The hypothyroidism was detected using a neonatal screening protocol in which the thyroid stimulating hormone (TSH) threshold is re-set each day on the basis of within-day variability and between-day variation. In this case, the threshold on the day of the initial analysis was 8.2 mIU/L, and the measured TSH level in heel-prick blood was 8.3 mIU/L. Conflict of interest: None declared. PMID:22155464
Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse
2017-01-01
We present a logistic regression approach for forecasting the probability of future groundwater levels declining or maintaining below specific groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps was compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
NASA Astrophysics Data System (ADS)
Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.
2018-01-01
Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causing factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a constant glacier area results in a drastic decrease of the number of droughts. The drought characteristics between future and historical periods are more similar when the transient threshold is used, for both glacier area conceptualisations. With the transient threshold, factors causing future droughts can be analysed. This study revealed the different effects of methodological choices on future streamflow drought projections and it highlights how the options can be used to analyse different aspects of future droughts: the transient threshold for analysing future drought processes, the historical threshold to assess changes between periods, the constant glacier area to analyse the effect of short-term climate variability on droughts and the dynamic glacier area to model more realistic future discharges under climate change.
Study of communications data compression methods
NASA Technical Reports Server (NTRS)
Jones, H. W.
1978-01-01
A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
Elizabeth A. Freeman; Gretchen G. Moisen
2008-01-01
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence - absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-noise ratio (SNR). Therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also yields a higher peak SNR.
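The sketch below shows generic wavelet threshold denoising of a synthetic ultrasonic trace using PyWavelets with a standard universal soft threshold; the paper's two-variable threshold function and its correlation-based parameter search are not reproduced here.

```python
import numpy as np
import pywt

# Generic wavelet threshold denoising of a synthetic A-scan (stand-in example).
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
signal = np.exp(-((t - 0.5) / 0.01) ** 2)          # synthetic defect echo
noisy = signal + 0.2 * rng.normal(size=t.size)     # additive noise

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

print("peak before/after denoising:", noisy.max(), denoised.max())
```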
A ruggedness evaluation of procedures for damage threshold testing optical materials
NASA Technical Reports Server (NTRS)
Hooker, Matthew W.; Thomas, Milfred E.; Wise, Stephanie A.; Tappan, Nina D.
1995-01-01
A ruggedness evaluation of approaches to damage threshold testing was performed to determine the influence of three procedural variables on damage threshold data. The differences between the number of test sites evaluated at an applied fluence level (1 site versus 10 sites), the number of laser pulses at each test site (1 pulse versus 200 pulses), and the beam diameter (0.35 mm versus 0.70 mm) were all found to significantly influence the damage threshold data over a 99-percent confidence interval.
Precipitation phase partitioning variability across the Northern Hemisphere
NASA Astrophysics Data System (ADS)
Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.
2017-12-01
Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
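A minimal Python sketch of the binary rain/snow classification idea: a logistic regression on air temperature and relative humidity, from which the temperature giving a 50% snow probability is solved at several humidity levels. The observations and the generating rule are synthetic, chosen only to be loosely consistent with the humidity effect described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic storm observations: air temperature (degC) and relative humidity (%).
rng = np.random.default_rng(3)
n = 5000
temp = rng.uniform(-5, 8, n)
rh = rng.uniform(40, 100, n)
# Generating rule (illustrative): snow more likely when cold and, at a given
# temperature, when the air is drier.
p_snow = 1 / (1 + np.exp(1.5 * (temp - 1.0) + 0.02 * (rh - 90)))
is_snow = rng.random(n) < p_snow

model = LogisticRegression().fit(np.column_stack([temp, rh]), is_snow)

# 50% rain-snow threshold at a given humidity: temperature where P(snow) = 0.5
b0 = model.intercept_[0]
b_temp, b_rh = model.coef_[0]
for humidity in (60, 80, 100):
    t50 = -(b0 + b_rh * humidity) / b_temp
    print(f"RH={humidity}%: 50% rain-snow threshold ~ {t50:.1f} degC")
```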
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization in extraction, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective depending on the target of the investigation for the response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
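A schematic Python sketch of the GLUE-style screening described above: Latin hypercube samples over assumed feasible ranges of three extraction variables, a placeholder response function, and retention of the parameter sets meeting a threshold criterion on the response. Variable names, ranges and the response surface are illustrative assumptions, not the fitted models from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sampling over assumed feasible ranges of extraction variables.
rng = np.random.default_rng(4)
sampler = qmc.LatinHypercube(d=3, seed=4)
unit_samples = sampler.random(n=10_000)

# Independent variables: extraction time (min), temperature (degC), solvent ratio
lower = np.array([10.0, 30.0, 5.0])
upper = np.array([60.0, 70.0, 30.0])
X = qmc.scale(unit_samples, lower, upper)

def placeholder_yield(X):
    """Stand-in response surface (not a model from the paper)."""
    time, temp, ratio = X.T
    return (0.02 * time + 0.03 * temp + 0.01 * ratio
            - 0.0003 * (temp - 55) ** 2 + rng.normal(0, 0.05, len(X)))

y = placeholder_yield(X)
behavioural = X[y >= np.quantile(y, 0.95)]   # threshold criterion on the response
print("retained parameter ranges (min / max per variable):")
print(behavioural.min(axis=0), behavioural.max(axis=0))
```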
Octave-Band Thresholds for Modeled Reverberant Fields
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
Auditory thresholds for 10 subjects were obtained for speech stimuli in reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95% percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95% percentile APE). SVP and AP were included in the final model: Estimated 95% percentile APE = 37.82 · SVP + 48.60 · AP - 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95% percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95% percentile APE of a particular nodule can be predicted. • Estimated 95% percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
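A small worked example (Python) of the individualized growth threshold, using the coefficients reported in the abstract; the nodule's SVP, AP and volumes below are hypothetical.

```python
# Individualized nodule-growth threshold from the regression reported in the abstract.

def estimated_95pct_ape(svp, ap):
    """Estimated 95% percentile absolute percentage error (%), per the abstract."""
    return 37.82 * svp + 48.60 * ap - 10.87

svp, ap = 0.60, 0.10          # surface voxel proportion, attachment proportion (hypothetical)
threshold_pct = estimated_95pct_ape(svp, ap)   # individualized growth threshold (%)

v_baseline, v_followup = 250.0, 330.0          # nodule volumes, mm^3 (hypothetical)
growth_pct = 100 * (v_followup - v_baseline) / v_baseline

print(f"individualized threshold: {threshold_pct:.1f}%")
print(f"measured volume change: {growth_pct:.1f}% -> growth: {growth_pct > threshold_pct}")
```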
An integrative perspective of the anaerobic threshold.
Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo
2017-12-14
The concept of anaerobic threshold (AT) was introduced during the nineteen sixties. Since then, several methods to identify the anaerobic threshold (AT) have been studied and suggested as novel 'thresholds' based upon the variable used for its detection (i.e. lactate threshold, ventilatory threshold, glucose threshold). These different techniques have brought some confusion about how we should name this parameter, for instance, anaerobic threshold or the physiological measure used (i.e. lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect AT, as well as the body of literature formed in the past decades, could provide a more cohesive understanding over the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine AT. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Deal, Eric; Braun, Jean
2017-04-01
Climatic forcing undoubtedly plays an important role in shaping the Earth's surface. However, precisely how climate affects erosion rates, landscape morphology and the sedimentary record is highly debated. Recently there has been a focus on the influence of short-term variability in rainfall and river discharge on the relationship between climate and erosion rates. Here, we present a simple probabilistic argument, backed by modelling, that demonstrates that the way the Earth's surface responds to short-term climatic forcing variability is primarily determined by the existence and magnitude of erosional thresholds. We find that it is the ratio between the threshold magnitude and the mean magnitude of climatic forcing that determines whether variability matters or not and in which way. This is a fundamental result that applies regardless of the nature of the erosional process. This means, for example, that we can understand the role that discharge variability plays in determining fluvial erosion efficiency despite doubts about the processes involved in fluvial erosion. We can use this finding to reproduce the main conclusions of previous studies on the role of discharge variability in determining long-term fluvial erosion efficiency. Many aspects of the landscape known to influence discharge variability are affected by human activity, such as land use and river damming. Another important control on discharge variability, rainfall intensity, is also expected to increase with warmer temperatures. Among many other implications, our findings help provide a general framework to understand and predict the response of the Earth's surface to changes in the mean and variability of rainfall and river discharge associated with anthropogenic activity. In addition, the process-independent nature of our findings suggests that previous work on river discharge variability and erosion thresholds can be applied to other erosional systems.
Bayesian change-point analyses in ecology
Brian Beckage; Lawrence Joseph; Patrick Belisle; David B. Wolfson; William J. Platt
2007-01-01
Ecological and biological processes can change from one state to another once a threshold has been crossed in space or time. Threshold responses to incremental changes in underlying variables can characterize diverse processes from climate change to the desertification of arid lands from overgrazing.
Optoelectronic Integrated Circuits For Neural Networks
NASA Technical Reports Server (NTRS)
Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.
1990-01-01
Many threshold devices placed on a single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.
Receiver Operating Characteristic Curve Analysis of Beach Water Quality Indicator Variables
Morrison, Ann Michelle; Coughlin, Kelly; Shine, James P.; Coull, Brent A.; Rex, Andrea C.
2003-01-01
Receiver operating characteristic (ROC) curve analysis is a simple and effective means to compare the accuracies of indicator variables of bacterial beach water quality. The indicator variables examined in this study were previous day's Enterococcus density and antecedent rainfall at 24, 48, and 96 h. Daily Enterococcus densities and 15-min rainfall values were collected during a 5-year (1996 to 2000) study of four Boston Harbor beaches. The indicator variables were assessed for their ability to correctly classify water as suitable or unsuitable for swimming at a maximum threshold Enterococcus density of 104 CFU/100 ml. Sensitivity and specificity values were determined for each unique previous day's Enterococcus density and antecedent rainfall volume and used to construct ROC curves. The area under the ROC curve was used to compare the accuracies of the indicator variables. Twenty-four-hour antecedent rainfall classified elevated Enterococcus densities more accurately than previous day's Enterococcus density (P = 0.079). An empirically derived threshold for 48-h antecedent rainfall, corresponding to a sensitivity of 0.75, was determined from the 1996 to 2000 data and evaluated to ascertain if the threshold would produce a 0.75 sensitivity with independent water quality data collected in 2001 from the same beaches. PMID:14602593
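The Python sketch below illustrates the ROC comparison with synthetic data: each candidate indicator is scored by its area under the ROC curve for predicting exceedance of the 104 CFU/100 ml limit, and a rainfall threshold is read off at roughly 75% sensitivity. The data-generating assumptions are invented and not taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic daily records: 48-h antecedent rainfall, previous-day Enterococcus
# density, and whether today's density exceeds 104 CFU/100 ml.
rng = np.random.default_rng(5)
n = 600
rain_48h = rng.gamma(shape=0.7, scale=8.0, size=n)             # mm
prev_day_entero = rng.lognormal(mean=2.5, sigma=1.2, size=n)   # CFU/100 ml
p_exceed = 1 / (1 + np.exp(-(0.15 * rain_48h - 2.5)))          # illustrative rule
exceeds = rng.random(n) < p_exceed

for name, indicator in [("48-h rainfall", rain_48h),
                        ("previous-day Enterococcus", prev_day_entero)]:
    auc = roc_auc_score(exceeds, indicator)
    fpr, tpr, thresholds = roc_curve(exceeds, indicator)
    # Indicator value giving sensitivity closest to 0.75
    thr = thresholds[np.argmin(np.abs(tpr - 0.75))]
    print(f"{name}: AUC = {auc:.2f}, threshold at ~75% sensitivity = {thr:.1f}")
```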
Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect
NASA Technical Reports Server (NTRS)
Strand, L. D.; Cohen, Norman S.
1990-01-01
Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.
Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.
2013-01-01
Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479
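The Python sketch below shows the kind of rolling-window statistics (variance and lag-1 autocorrelation) commonly used as indicators of changing resilience in high-frequency sensor series; the time series is synthetic, not the lake data from the experiment.

```python
import numpy as np

# Rolling-window resilience indicators on a synthetic series whose memory
# slowly increases (a crude stand-in for an ecosystem approaching a threshold).
rng = np.random.default_rng(6)
n = 2000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    phi = 0.3 + 0.6 * t / n              # slowly increasing autocorrelation
    x[t] = phi * x[t - 1] + rng.normal(0, 1)

window = 200
var, ac1 = [], []
for start in range(0, n - window):
    seg = x[start:start + window]
    var.append(seg.var())
    ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])

print("variance early vs late:", var[0], var[-1])
print("lag-1 autocorrelation early vs late:", ac1[0], ac1[-1])
```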
DuBois, P Mason; Shea, Tanner K; Claunch, Natalie M; Taylor, Emily N
2017-08-01
Thermal tolerance is an important variable in predictive models of the effects of global climate change on species distributions, yet the physiological mechanisms responsible for reduced performance at high temperatures in air-breathing vertebrates are not clear. We conducted an experiment to examine how oxygen affects three variables exhibited by ectotherms as they heat (gaping threshold, panting threshold, and loss of righting response, the latter indicating the critical thermal maximum) in two lizard species along an elevational (and therefore environmental oxygen partial pressure) gradient. Oxygen partial pressure did not impact these variables in either species. We also exposed lizards at each elevation to severely hypoxic gas to evaluate their responses to hypoxia. Severely low oxygen partial pressure treatments significantly reduced the gaping threshold, panting threshold, and critical thermal maximum. Further, under these extreme hypoxic conditions, these variables were strongly and positively related to the partial pressure of oxygen. At an elevation where both species overlapped, the thermal tolerance of the high-elevation species was less affected by hypoxia than that of the low-elevation species, suggesting that the high-elevation species may be adapted to lower oxygen partial pressures. In the high-elevation species, female lizards had higher thermal tolerance than males. Our data suggest that oxygen impacts the thermal tolerance of lizards, but only under severely hypoxic conditions, possibly as a result of hypoxia-induced anapyrexia. Copyright © 2017. Published by Elsevier Ltd.
Dartnall, Tamara J; Rogasch, Nigel C; Nordstrom, Michael A; Semmler, John G
2009-07-01
The purpose of this study was to determine the effect of eccentric muscle damage on recruitment threshold force and repetitive discharge properties of low-threshold motor units. Ten subjects performed four tasks involving isometric contraction of elbow flexors while electromyographic (EMG) data were recorded from human biceps brachii and brachialis muscles. Tasks were 1) maximum voluntary contraction (MVC); 2) constant-force contraction at various submaximal targets; 3) motor unit recruitment threshold task; and 4) minimum motor unit discharge rate task. These tasks were performed on three separate days before, immediately after, and 24 h after eccentric exercise of elbow flexor muscles. MVC force declined (42%) immediately after exercise and remained depressed (29%) 24 h later, indicative of muscle damage. Mean motor unit recruitment threshold for biceps brachii was 8.4 ± 4.2% MVC (n = 34) before eccentric exercise, and was reduced by 41% (5.0 ± 3.0% MVC, n = 34) immediately after and by 39% (5.2 ± 2.5% MVC, n = 34) 24 h after exercise. No significant changes in motor unit recruitment threshold were observed in the brachialis muscle. However, for the minimum tonic discharge rate task, motor units in both muscles discharged 11% faster (10.8 ± 2.0 vs. 9.7 ± 1.7 Hz) immediately after (n = 29) exercise compared with before (n = 32). The minimum discharge rate variability was greater in brachialis muscle immediately after exercise (13.8 ± 3.1%) compared with before (11.9 ± 3.1%) and 24 h after exercise (11.7 ± 2.4%). No significant changes in minimum discharge rate variability were observed in the biceps brachii motor units after exercise. These results indicate that muscle damage from eccentric exercise alters motor unit recruitment thresholds for at least 24 h, but the effect is not the same in the different elbow flexor muscles.
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode worn around the chest is developed to measure real-time ECG while minimizing the inconvenience of wearing. The ECG signal is detected using a potential-measurement instrument system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communication unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, particularly through R-peak detection. R-peak detection generally uses a fixed threshold value, which leads to peak-detection errors when the baseline shifts due to motion artifacts or when the signal amplitude changes. A preprocessing stage comprising differentiation and the Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method, which is more accurate and efficient than the fixed-threshold method, is used to detect the R-peak. R-peak detection is evaluated on MIT-BIH databases and on long-term real-time ECG recordings to assess the performance of the method.
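As a rough illustration of the variable-threshold idea described in this abstract, the following Python sketch differentiates the signal, takes its envelope with the Hilbert transform, and detects R-peaks with a threshold that tracks a running estimate of recent peak amplitude. It is not the authors' implementation; the synthetic signal, the adaptation factor, and the refractory period are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, frac=0.6, refractory=0.25):
    """Sketch of variable-threshold R-peak detection (illustrative only).

    ecg        : 1-D ECG samples
    fs         : sampling rate (Hz)
    frac       : fraction of the running peak estimate used as the threshold
    refractory : minimum spacing between detected peaks (s)
    """
    deriv = np.gradient(ecg)                  # emphasize steep QRS slopes
    env = np.abs(hilbert(deriv))              # envelope of the differentiated signal
    peaks = []
    running_peak = np.max(env[: int(2 * fs)]) # seed from the first two seconds
    last = -np.inf
    for i in range(1, len(env) - 1):
        thr = frac * running_peak             # threshold tracks recent peak size
        if env[i] > thr and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if (i - last) / fs > refractory:
                peaks.append(i)
                last = i
                # update running estimate so the threshold follows amplitude changes
                running_peak = 0.875 * running_peak + 0.125 * env[i]
    return np.array(peaks)

# Toy usage with a synthetic "ECG": a 1 Hz train of sharp pulses plus drift and noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.2 * np.sin(2 * np.pi * 0.3 * t)               # baseline wander
ecg += np.where((t % 1.0) < 0.02, 1.0, 0.0)           # crude R-like spikes
ecg += 0.02 * np.random.randn(t.size)
print(detect_r_peaks(ecg, fs))
```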
Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H
2016-10-01
The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings, being evaluated for SAH, after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease where the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the test threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included probability of aneurysmal versus nonaneurysmal SAH after negative head CT, probability of long-term morbidity from initial missed SAH, and probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT to be approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
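The testing-threshold concept in this abstract can be illustrated with a minimal sketch: sweep the pretest probability and find where the expected quality-adjusted life-years of the LP and no-LP strategies balance. All payoff values, the LP sensitivity, and the LP harm below are hypothetical placeholders, not figures from the study.

```python
import numpy as np

# Hypothetical QALY payoffs; none of these values come from the study.
Q_WELL       = 20.0   # no SAH, no harm
Q_MISSED_SAH = 8.0    # SAH missed because no LP was done
Q_TREATED    = 18.0   # SAH caught by LP and treated
Q_LP_HARM    = 0.05   # small average QALY loss from the LP and downstream testing
SENS_LP      = 0.95   # assumed LP sensitivity after negative CT

def expected_qaly(p, do_lp):
    """Expected QALYs at pretest probability p for the LP / no-LP strategies."""
    if do_lp:
        return (p * (SENS_LP * Q_TREATED + (1 - SENS_LP) * Q_MISSED_SAH)
                + (1 - p) * Q_WELL) - Q_LP_HARM
    return p * Q_MISSED_SAH + (1 - p) * Q_WELL

# The testing threshold is the pretest probability where the two strategies balance.
p_grid = np.linspace(0.0, 0.2, 20001)
diff = np.array([expected_qaly(p, True) - expected_qaly(p, False) for p in p_grid])
threshold = p_grid[np.argmin(np.abs(diff))]
print(f"testing threshold ≈ {threshold:.3%}")
```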
Marshall, Ethan A; Miller, Holly A; Bouffard, Jeff A
2017-11-01
According to recent statistics, as many as one in five female college students are victims of sexual assault during their college career. To combat what has been called the "Campus Rape Crisis," researchers have attempted to understand what variables are associated with sexually coercive behaviors in college males. Although investigators have found support for the relationship between pornography consumption and sexually coercive behavior, researchers typically operationalize pornography use in terms of frequency of use. Furthermore, frequency of use has been assessed vaguely and inconsistently. The current study offered a more concrete assessment of frequency of use and an additional variable not yet included for pornography use: number of modalities. Beyond examining the relationship between pornography use and sexual coercion likelihood, the current study was the first to use pornography variables in a threshold analysis to test whether there is a cut point that is predictive of sexual coercion likelihood. Analyses were conducted with a sample of 463 college males. Results indicated that both pornography use variables were significantly related to a higher likelihood of sexually coercive behaviors. When both frequency of use and number of modalities were included in the model, modalities were significant and frequency was not. In addition, significant thresholds for both pornography variables that predicted sexual coercion likelihood were identified. These results imply that factors other than frequency of use, such as number of modalities, may be more important for the prediction of sexual coercive behaviors. Furthermore, threshold analyses revealed the most significant increase in risk occurred between one modality and two, indicating that it is not pornography use in general that is related to sexual coercion likelihood, but rather, specific aspects of pornography use.
Massot, Corentin; Chacron, Maurice J.
2011-01-01
Understanding how sensory neurons transmit information about relevant stimuli remains a major goal in neuroscience. Of particular relevance are the roles of neural variability and spike timing in neural coding. Peripheral vestibular afferents display differential variability that is correlated with the importance of spike timing; regular afferents display little variability and use a timing code to transmit information about sensory input. Irregular afferents, conversely, display greater variability and instead use a rate code. We studied how central neurons within the vestibular nuclei integrate information from both afferent classes by recording from a group of neurons termed vestibular only (VO) that are known to make contributions to vestibulospinal reflexes and project to higher-order centers. We found that, although individual central neurons had sensitivities that were greater than or equal to those of individual afferents, they transmitted less information. In addition, their velocity detection thresholds were significantly greater than those of individual afferents. This is because VO neurons display greater variability, which is detrimental to information transmission and signal detection. Combining activities from multiple VO neurons increased information transmission. However, the information rates were still much lower than those of equivalent afferent populations. Furthermore, combining responses from multiple VO neurons led to lower velocity detection threshold values approaching those measured from behavior (∼2.5 vs. 0.5–1°/s). Our results suggest that the detailed time course of vestibular stimuli encoded by afferents is not transmitted by VO neurons. Instead, they suggest that higher vestibular pathways must integrate information from central vestibular neuron populations to give rise to behaviorally observed detection thresholds. PMID:21307329
Revisiting gender, race, and ear differences in peripheral auditory function
NASA Astrophysics Data System (ADS)
Boothalingam, Sriram; Klyn, Niall A. M.; Stiepan, Samantha M.; Wilson, Uzma S.; Lee, Jungwha; Siegel, Jonathan H.; Dhar, Sumitrajit
2018-05-01
Various measures of auditory function are reported to be superior in females as compared to males, in African American compared to Caucasian individuals, and in right compared to left ears. We re-examined the influence of these subject variables on hearing thresholds and otoacoustic emissions (OAEs) in a sample of 887 human participants between 10 and 68 years of age. Even though the variables of interest here have been examined before, previous attempts have largely been limited to frequencies up to 8 kHz. We used state-of-the-art signal delivery and recording techniques that compensated for individual differences in ear canal acoustics, allowing us to measure hearing thresholds and OAEs up to 20 kHz. The use of these modern calibration and recording techniques provided the motivation for re-examining these commonly studied variables. While controlling for age, noise exposure history, and general health history, we attempted to isolate the effects of gender, race, and ear (left versus right) on hearing thresholds and OAEs. Our results challenge the notion of a right ear advantage and question the existence of significant gender and race differences in both hearing thresholds and OAE levels. These results suggest that ear canal anatomy and acoustics should be important considerations when evaluating the influence of gender, race, and ear on peripheral auditory function.
Wall, Michael; Woodward, Kimberly R; Doyle, Carrie K; Artes, Paul H
2009-02-01
Standard automated perimetry (SAP) shows a marked increase in variability in damaged areas of the visual field. This study was conducted to test the hypothesis that larger stimuli are associated with more uniform variability, by investigating the retest variability of four perimetry tests: standard automated perimetry size III (SAP III) with the SITA standard strategy; SAP size V (SAP V) with the full-threshold strategy; Matrix (FDT II); and Motion perimetry. One eye each of 120 patients with glaucoma was examined on the same day with these four perimetric tests and retested 1 to 8 weeks later. The decibel scales were adjusted to make the tests' scales numerically similar. Retest variability was examined by establishing the distributions of retest threshold estimates for each threshold level observed at the first test. The 5th and 95th percentiles of the retest distribution were used as point-wise limits of retest variability. Regression analyses were performed to quantify the relationship between visual field sensitivity and variability. With SAP III, retest variability increased substantially with reducing sensitivity. Corresponding increases with SAP V, Matrix, and Motion perimetry were considerably smaller or absent. With SAP III, sensitivity explained 22% of the retest variability (r(2)), whereas corresponding figures for SAP V, Matrix, and Motion perimetry were 12%, 2%, and 2%, respectively. Variability of Matrix and Motion perimetry does not increase as substantially as that of SAP III in damaged areas of the visual field. Increased sampling with the larger stimuli of these techniques is the likely explanation for this finding. These properties may make these stimuli excellent candidates for early detection of visual field progression.
NASA Astrophysics Data System (ADS)
Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.
2012-03-01
Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions, and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates, as well as potential adaptation models, in future coral bleaching projections.
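A minimal sketch of the two Degree-Heating-Month threshold definitions discussed above (a fixed 1°C excess versus 2σ of the annual SST maxima) is given below. The synthetic SST series, the 1961-1990 climatology window, and the centered 3-month accumulation window are assumptions for illustration, not the study's data or exact DHM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly SST (°C) for 1961-2100: seasonal cycle + warming trend + noise.
years = np.arange(1961, 2101)
months = np.arange(len(years) * 12)
sst = (27.0 + 1.5 * np.cos(2 * np.pi * months / 12)
       + 0.02 * (months / 12) + 0.3 * rng.standard_normal(months.size))

# Climatology from the 1961-1990 annual maxima.
annual_max = sst[: 30 * 12].reshape(30, 12).max(axis=1)
mmm = annual_max.mean()                 # climatological maximum
sigma = annual_max.std(ddof=1)

def dhm(series, baseline, excess_threshold, window=3):
    """Degree-Heating-Months sketch: accumulate exceedance over the baseline
    (°C-months) in a centered rolling window, counting only months whose
    exceedance is above the chosen stress threshold."""
    hotspot = np.clip(series - baseline, 0, None)
    hotspot[series - baseline < excess_threshold] = 0.0
    return np.convolve(hotspot, np.ones(window), mode="same")

dhm_fixed = dhm(sst, mmm, 1.0)          # conventional 1 °C threshold
dhm_var = dhm(sst, mmm, 2.0 * sigma)    # variability-based 2-sigma threshold
print("months with DHM > 2 (fixed, variability-based):",
      (dhm_fixed > 2).sum(), (dhm_var > 2).sum())
```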
Temporal resolution in children.
Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D
1989-06-01
The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
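An adaptive forced-choice threshold estimate of the kind described can be sketched with a simple transformed staircase. The snippet below is not the study's exact paradigm: it simulates a listener with an assumed psychometric function and runs a 2-down/1-up track for 20 trials, estimating the gap threshold from the final reversals.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(gap_ms, true_threshold_ms=5.0, slope=1.5):
    """Probability of a correct 2AFC response as a function of gap duration."""
    p = 0.5 + 0.5 / (1 + np.exp(-(gap_ms - true_threshold_ms) / slope))
    return rng.random() < p

def run_staircase(n_trials=20, start_gap=20.0, step=2.0):
    """2-down / 1-up adaptive track; converges near the 70.7%-correct point."""
    gap, correct_streak, reversals, last_direction = start_gap, 0, [], None
    for _ in range(n_trials):
        if simulated_listener(gap):
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row -> make it harder
                if last_direction == "up":
                    reversals.append(gap)
                gap = max(gap - step, 0.5)
                correct_streak, last_direction = 0, "down"
        else:                                   # one error -> make it easier
            if last_direction == "down":
                reversals.append(gap)
            gap += step
            correct_streak, last_direction = 0, "up"
    estimate = np.mean(reversals[-4:]) if len(reversals) >= 4 else gap
    return estimate, reversals

est, revs = run_staircase()
print(f"gap threshold estimate ≈ {est:.1f} ms from {len(revs)} reversals")
```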
Guenole, Nigel; Brown, Anna
2014-01-01
We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911
Terauchi, Rie; Arai, Korenori; Tanaka, Masahiro; Kawazoe, Takayoshi; Baba, Shunsuke
2015-01-01
Implant treatment is believed to cause minimal invasion of the remaining teeth. However, few studies have examined teeth adjacent to an implant region. Therefore, this study investigated the effect of the occlusal contact size of implants on the periodontal mechanosensitive threshold of adjacent premolars. A cross-sectional study design was adopted. The setting was the Department of Oral Implantology, Osaka Dental University, where patients underwent implant treatment in the mandibular free-end edentulous area. The study population comprised 87 patients (109 teeth) who underwent follow-up observation for at least 3 years following implant superstructure placement. The variables considered were age, sex, duration following superstructure placement, presence or absence of dental pulp, occlusal contact area, and periodontal mechanosensitive threshold. The occlusal contact area was measured using Blue Silicone® and Bite Eye BE-I®. Periodontal mechanosensitive thresholds were measured using von Frey hairs. On the basis of the periodontal mechanosensitive threshold, subjects were divided into two groups: normal (≤5 g) and high (≥5.1 g). For statistical analysis, we compared the sensation thresholds of the two groups using the Chi-square test for categorical data and the Mann-Whitney U test for continuous data. For variables in which a significant difference was noted, we calculated the odds ratio (95% confidence interval) and the effective dose. There were 93 teeth in the normal group and 16 teeth in the high group based on the periodontal mechanosensitive threshold. Comparison of the two groups indicated no significant differences associated with age, sex, duration following superstructure placement, or presence or absence of dental pulp. A significant difference was noted with regard to occlusal contact area, with several high-group subjects belonging to the small contact group (odds ratio: 4.75 [1.42-15.87]; effective dose: 0.29). The results of this study suggest an association between implant occlusal contact area and the periodontal mechanosensitive threshold of adjacent premolars. Smaller occlusal contact resulted in an increased threshold. It appears that prosthodontic treatment should aim not only to improve occlusal function but also to maintain oromandibular function with regard to the preservation of the remaining teeth.
Multivariate Analyses of Balance Test Performance, Vestibular Thresholds, and Age
Karmali, Faisal; Bermúdez Rey, María Carolina; Clark, Torin K.; Wang, Wei; Merfeld, Daniel M.
2017-01-01
We previously published vestibular perceptual thresholds and performance in the Modified Romberg Test of Standing Balance in 105 healthy humans ranging from ages 18 to 80 (1). Self-motion thresholds in the dark included roll tilt about an earth-horizontal axis at 0.2 and 1 Hz, yaw rotation about an earth-vertical axis at 1 Hz, y-translation (interaural/lateral) at 1 Hz, and z-translation (vertical) at 1 Hz. In this study, we focus on multiple variable analyses not reported in the earlier study. Specifically, we investigate correlations (1) among the five thresholds measured and (2) between thresholds, age, and the chance of failing condition 4 of the balance test, which increases vestibular reliance by having subjects stand on foam with eyes closed. We found moderate correlations (0.30–0.51) between vestibular thresholds for different motions, both before and after using our published aging regression to remove age effects. We found that lower or higher thresholds across all threshold measures are an individual trait that account for about 60% of the variation in the population. This can be further distributed into two components with about 20% of the variation explained by aging and 40% of variation explained by a single principal component that includes similar contributions from all threshold measures. When only roll tilt 0.2 Hz thresholds and age were analyzed together, we found that the chance of failing condition 4 depends significantly on both (p = 0.006 and p = 0.013, respectively). An analysis incorporating more variables found that the chance of failing condition 4 depended significantly only on roll tilt 0.2 Hz thresholds (p = 0.046) and not age (p = 0.10), sex nor any of the other four threshold measures, suggesting that some of the age effect might be captured by the fact that vestibular thresholds increase with age. For example, at 60 years of age, the chance of failing is roughly 5% for the lowest roll tilt thresholds in our population, but this increases to 80% for the highest roll tilt thresholds. These findings demonstrate the importance of roll tilt vestibular cues for balance, even in individuals reporting no vestibular symptoms and with no evidence of vestibular dysfunction. PMID:29167656
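The principal-component reasoning above (a shared component plus an age effect accounting for much of the threshold variation) can be sketched as follows. The data are synthetic: an invented trait factor, age effect, and noise level stand in for the five measured thresholds, and the age regression and PCA steps mirror the described analysis only in outline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log-thresholds for 105 subjects x 5 motions: a shared "trait" factor
# plus an age effect plus independent noise (all values invented).
n = 105
age = rng.uniform(18, 80, n)
trait = rng.standard_normal(n)
X = np.column_stack([
    0.5 * trait + 0.01 * age + 0.5 * rng.standard_normal(n) for _ in range(5)
])

# Remove the age effect from each measure by linear regression (as with an aging
# regression), then run PCA on the residuals.
resid = np.empty_like(X)
for j in range(5):
    beta = np.polyfit(age, X[:, j], 1)
    resid[:, j] = X[:, j] - np.polyval(beta, age)

Z = (resid - resid.mean(0)) / resid.std(0, ddof=1)
eigvals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
print("variance explained by PC1: %.0f%%" % (100 * eigvals[0] / eigvals.sum()))
```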
Dose-response relationships for the onset of avoidance of sonar by free-ranging killer whales.
Miller, Patrick J O; Antunes, Ricardo N; Wensveen, Paul J; Samarra, Filipa I P; Alves, Ana Catarina; Tyack, Peter L; Kvadsheim, Petter H; Kleivane, Lars; Lam, Frans-Peter A; Ainslie, Michael A; Thomas, Len
2014-02-01
Eight experimentally controlled exposures to 1-2 kHz or 6-7 kHz sonar signals were conducted with four killer whale groups. The source level and proximity of the source were increased during each exposure in order to reveal response thresholds. Detailed inspection of movements during each exposure session revealed sustained changes in speed and travel direction judged to be avoidance responses during six of eight sessions. Following methods developed for Phase-I clinical trials in human medicine, response thresholds ranging from 94 to 164 dB re 1 μPa received sound pressure level (SPL) were fitted to Bayesian dose-response functions. Thresholds did not consistently differ by sonar frequency or whether a group had previously been exposed, with a mean SPL response threshold of 142 ± 15 dB (mean ± s.d.). High levels of between- and within-individual variability were identified, indicating that thresholds depended upon other undefined contextual variables. The dose-response functions indicate that some killer whales started to avoid sonar at received SPL below thresholds assumed by the U.S. Navy. The predicted extent of habitat over which avoidance reactions occur depends upon whether whales responded to proximity or received SPL of the sonar or both, but was large enough to raise concerns about biological consequences to the whales.
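A simplified, non-Bayesian version of the dose-response fitting described above can be sketched by fitting a logistic function of received SPL to binary avoidance outcomes and reading off the 50% response point. The exposure data below are synthetic, and curve_fit least squares stands in for the study's Bayesian hierarchical estimation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

rng = np.random.default_rng(3)

def dose_response(spl, threshold, width):
    """Probability of an avoidance response as a function of received SPL (dB re 1 uPa)."""
    return expit((spl - threshold) / width)

# Synthetic exposure data: received SPLs and binary avoidance outcomes.
spl = rng.uniform(90, 170, 200)
p_true = dose_response(spl, 142.0, 8.0)
responded = (rng.random(200) < p_true).astype(float)

# Nonlinear least-squares fit (a simplification of the Bayesian approach in the study).
popt, _ = curve_fit(dose_response, spl, responded, p0=[140.0, 10.0])
print("estimated 50%% response threshold: %.1f dB, width %.1f dB" % tuple(popt))
```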
Non-equilibrium Green's functions study of discrete dopants variability on an ultra-scaled FinFET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valin, R., E-mail: r.valinferreiro@swansea.ac.uk; Martinez, A., E-mail: a.e.Martinez@swansea.ac.uk; Barker, J. R., E-mail: john.barker@glasgow.ac.uk
In this paper, we study the effect of random discrete dopants on the performance of a 6.6 nm channel length silicon FinFET. The discrete dopants have been distributed randomly in the source/drain region of the device. Due to the small dimensions of the FinFET, a quantum transport formalism based on the non-equilibrium Green's functions has been deployed. The transfer characteristics for several devices that differ in location and number of dopants have been calculated. Our results demonstrate that discrete dopants modify the effective channel length and the height of the source/drain barrier, consequently changing the channel control of the charge. This effect becomes more significant at high drain bias. As a consequence, there is a strong effect on the variability of the on-current, off-current, sub-threshold slope, and threshold voltage. Finally, we have also calculated the mean and standard deviation of these parameters to quantify their variability. The obtained results show that the variability at high drain bias is 1.75 times larger than at low drain bias. However, the variability of the on-current, off-current, and sub-threshold slope remains independent of the drain bias. In addition, we have found that a large source-to-drain tunnelling current occurs at low gate bias.
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.
von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H
2016-10-26
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. Copyright © 2016 the authors 0270-6474/16/3611097-10$15.00/0.
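The signal detection theory framework mentioned above can be illustrated with a toy calculation: holding the stimulus-driven change in mean spike count fixed while shrinking trial-to-trial variance raises d' and lowers the rate change needed to reach a criterion d' of 1. The spike-count means and Fano factors below are invented for illustration.

```python
import numpy as np

def dprime(mean_sig, mean_noise, var_sig, var_noise):
    """Discriminability index: mean difference over the RMS of the two standard deviations."""
    return (mean_sig - mean_noise) / np.sqrt(0.5 * (var_sig + var_noise))

# Hypothetical spike counts per trial: the same mean rate change under two
# levels of trial-to-trial variability (Fano factor).
mean_noise, mean_sig = 20.0, 26.0
for label, fano in [("passive listening", 2.0), ("task engaged", 1.0)]:
    d = dprime(mean_sig, mean_noise, fano * mean_sig, fano * mean_noise)
    print(f"{label:17s}: d' = {d:.2f}")

# Neural "threshold": the smallest mean-count increase reaching d' = 1 at each level.
deltas = np.linspace(0.1, 15.0, 500)
for label, fano in [("passive listening", 2.0), ("task engaged", 1.0)]:
    d = np.array([dprime(mean_noise + dlt, mean_noise,
                         fano * (mean_noise + dlt), fano * mean_noise) for dlt in deltas])
    print(f"{label:17s}: count increase for d' = 1 ≈ {deltas[np.argmax(d >= 1.0)]:.1f} spikes/trial")
```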
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance
Buran, Bradley N.; Sen, Kamal; Semple, Malcolm N.; Sanes, Dan H.
2016-01-01
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. SIGNIFICANCE STATEMENT The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. PMID:27798189
Eisen, Marc D; Franck, Kevin H
2004-12-01
To characterize the amplitude growth functions of the electrically evoked compound action potential (ECAP) in pediatric subjects implanted with the Clarion HiFocus electrode array with respect to electrode position and the presence or absence of a Silastic positioner. Electrophysiologic growth function data are compared with HiResolution (HiRes) psychophysical programming levels. ECAP growth functions were measured for all electrodes along the implant's array in 16 pediatric subjects. Nine of the patients were implanted with a Silastic positioner, whereas seven had no positioner. ECAP thresholds and growth function slopes were calculated. Fifteen of the 16 patients had psychophysical threshold and maximum comfort levels available. Programming levels and ECAP thresholds were compared within and among the subjects. ECAP thresholds showed variability among patients, ranging from 178 to 920 nA at 32 µsec pulse width. ECAP thresholds did not depend on electrode position along the cochlea but were lower in the presence of the Silastic positioner (p < 0.001). Thresholds determined with the masker-probe versus the alternating polarity paradigms revealed moderate (r = 0.76) correlation. Growth function slopes also showed considerable variation among patients. Unlike thresholds, slopes decreased from apical to basal cochlear locations (p < 0.001) but showed no difference between the absence and presence of the positioner. Programming levels in HiRes were correlated with ECAP threshold levels. When ECAP thresholds were adjusted for each patient by the difference between M level and ECAP threshold at electrode 9, however, overall correlation between the two measurements was excellent (r = 0.98, N = 224). In pediatric subjects with the Clarion HiFocus electrode, ECAP growth function thresholds appear to decrease with the presence of the Silastic positioner but are unaffected by electrode position along the array. Growth function slope, however, depends on electrode position along the array but not on the presence of the positioner. ECAP programming levels can reliably predict stimulus intensities within the patients' dynamic ranges, but considerable variability is seen between ECAP thresholds and HiRes programming levels.
Perreault, Michel; Julien, Dominic; White, Noe Djawn; Rabouin, Daniel; Lauzon, Pierre; Milton, Diana
2015-01-01
This study investigated the role of psychological variables and judicial problems in treatment retention for a low-threshold methadone program in Montreal, Canada. Logistic regression analyses were computed to examine associations between psychological variables (psychological distress, self-esteem, stages of change), criminal justice involvement, and treatment retention for 106 highly-disorganized opioid users. Higher methadone dosage was associated with increased odds of treatment retention, whereas criminal charges and lower self-esteem decreased these odds. Psychological variables could be identified early in treatment and targeted to increase potential treatment retention. Financial support for this study was provided by the Fonds de Recherche en Santé du Québec.
Wester, Jason C.
2013-01-01
Spike threshold filters incoming inputs and thus gates activity flow through neuronal networks. Threshold is variable, and in many types of neurons there is a relationship between the threshold voltage and the rate of rise of the membrane potential (dVm/dt) leading to the spike. In primary sensory cortex this relationship enhances the sensitivity of neurons to a particular stimulus feature. While Na+ channel inactivation may contribute to this relationship, recent evidence indicates that K+ currents located in the spike initiation zone are crucial. Here we used a simple Hodgkin-Huxley biophysical model to systematically investigate the role of K+ and Na+ current parameters (activation voltages and kinetics) in regulating spike threshold as a function of dVm/dt. Threshold was determined empirically and not estimated from the shape of the Vm prior to a spike. This allowed us to investigate intrinsic currents and values of gating variables at the precise voltage threshold. We found that Na+ inactivation is sufficient to produce the relationship provided it occurs at hyperpolarized voltages combined with slow kinetics. Alternatively, hyperpolarization of the K+ current activation voltage, even in the absence of Na+ inactivation, is also sufficient to produce the relationship. This hyperpolarized shift of K+ activation allows an outward current prior to spike initiation to antagonize the Na+ inward current such that it becomes self-sustaining at a more depolarized voltage. Our simulations demonstrate parameter constraints on Na+ inactivation and the biophysical mechanism by which an outward current regulates spike threshold as a function of dVm/dt. PMID:23344915
Fuentes, Juan P; Villafaina, Santos; Collado-Mateo, Daniel; de la Vega, Ricardo; Gusi, Narcis; Clemente-Suárez, Vicente Javier
2018-01-19
Psychophysiological requirements of chess players are poorly understood, and periodization of training is often designed without any empirical basis. For this reason, the aim of the present study was to investigate the psychophysiological response and quantify the player's internal load during and after a chess game. The participant was an elite 33-year-old male chess player ranked among the 300 best chess players in the world. Cortical arousal (assessed by the critical flicker fusion threshold), electroencephalographic activity (the theta Fz/alpha Pz ratio), and autonomic modulation (heart rate variability) were analyzed. Data revealed that cortical arousal and the theta Fz/alpha Pz ratio increased while heart rate variability decreased during the chess game. All these changes indicated that internal load increased during the game. In addition, pre-activation was detected in the pre-game measure, suggesting that the prefrontal cortex may be preparatorily activated. For these reasons, electroencephalography, the critical flicker fusion threshold, and heart rate variability analysis may be highly applicable tools to control and monitor workload in chess players.
Residence time control on hot moments of net nitrate production and uptake in the hyporheic zone
Briggs, Martin A.; Lautz, Laura K.; Hare, Danielle K.
2014-01-01
moments of net production and uptake, enhancing NO3- production as residence times approach the anaerobic threshold, and changing zones of net NO3- production to uptake as residence times increase past the net sink threshold. The anaerobic and net sink thresholds for beaver-influenced streambed morphology occur at much shorter residence times (1.3 h and 2.3 h, respectively) compared to other documented hyporheic systems, and the net sink threshold compares favorably to the lower boundary of the anaerobic threshold determined for this system with the new oxygen Damkohler number. The consistency of the residence time threshold values of NO3- cycling in this study, despite environmental variability and disparate morphology, indicates that NO3- hot moment dynamics are primarily driven by changes in physical hydrology and associated residence times.
Measurement of visual contrast sensitivity
NASA Astrophysics Data System (ADS)
Vongierke, H. E.; Marko, A. R.
1985-04-01
This invention involves measurement of the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial frequency pattern on a cathode ray tube, whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The subject finds the threshold of detection of the pattern modulation by adjusting the contrast to values that vary about the threshold; the magnitude of the contrast fluctuations between reversals also provides an estimate of the variability of the subject's absolute threshold. The invention also involves slow automatic sweeping of the spatial frequency of the pattern, either after preset time intervals or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings, i.e., contrast reversals.
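A hedged sketch of the tracking procedure described in this patent abstract: a simulated observer "presses" while the pattern is visible, contrast ramps down while pressed and up while released, and the threshold and its variability are estimated from the contrast values at the reversals. The observer model, ramp rate, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def track_contrast(true_threshold=0.02, rate=0.002, n_steps=4000):
    """Simulate hand-switch tracking: contrast falls while the pattern is seen
    (button pressed) and rises while it is not, oscillating about threshold."""
    contrast, reversals, direction = 0.1, [], -1
    for _ in range(n_steps):
        seen = contrast > true_threshold * np.exp(0.1 * rng.standard_normal())
        new_direction = -1 if seen else +1          # seen -> decrease contrast
        if new_direction != direction:
            reversals.append(contrast)              # record the turning point
            direction = new_direction
        contrast = max(contrast * (1 + direction * rate), 1e-4)
    return np.array(reversals[2:])                  # drop the initial descent

rev = track_contrast()
print(f"threshold ≈ {rev.mean():.4f}, variability (sd of reversals) ≈ {rev.std(ddof=1):.4f}")
```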
Rowlands, Gillian; Protheroe, Joanne; Winkley, John; Richardson, Marty; Seed, Paul T; Rudd, Rima
2015-06-01
Low health literacy is associated with poorer health and higher mortality. Complex health materials are a barrier to health. To assess the literacy and numeracy skills required to understand and use commonly used English health information materials, and to describe population skills in relation to these. An English observational study comparing health materials with national working-age population skills. Health materials were sampled using a health literacy framework. Competency thresholds to understand and use the materials were identified. The proportion of the population above and below these thresholds, and the sociodemographic variables associated with a greater risk of being below the thresholds, were described. Sixty-four health materials were sampled. Two competency thresholds were identified: text (literacy) only, and text + numeracy; 2515/5795 participants (43%) were below the text-only threshold, while 2905/4767 (61%) were below the text + numeracy threshold. Univariable analyses of social determinants of health showed that those groups more at risk of socioeconomic deprivation had higher odds of being below the health literacy competency threshold than those at lower risk of deprivation. Multivariable analysis resulted in some variables becoming non-significant or reduced in effect. Levels of low health literacy mirror those found in other industrialised countries, with a mismatch between the complexity of health materials and the skills of the English adult working-age population. Those most in need of health information have the least access to it. Efficacious strategies are building population skills, improving health professionals' communication, and improving written health information. © British Journal of General Practice 2015.
Grebenstein, Patricia E; Burroughs, Danielle; Roiko, Samuel A; Pentel, Paul R; LeSage, Mark G
2015-06-01
The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Park, Sung Wook; Brenneman, Michael; Cooke, William H; Cordova, Alberto; Fogt, Donovan
The purpose was to determine if heart rate (HR) and heart rate variability (HRV) responses would reflect the anaerobic threshold (AT) using a discontinuous, incremental cycle test. AT was determined by the ventilatory threshold (VT). Cyclists (30.6 ± 5.9 y; 7 males, 8 females) completed a discontinuous cycle test consisting of 7 stages (6 min each with 3 min of rest between). Three stages were performed at power outputs (W) below those corresponding to a previously established AT, one at the W corresponding to AT, and 3 at W above those corresponding to AT. The W at the intersection of the trend lines was considered each metric's "threshold". The averaged stage data for Ve, HR, and time- and frequency-domain HRV metrics were plotted versus W. The W at the "threshold" for the metrics of interest were compared using correlation analysis and paired-sample t-tests. Several heart rate-related parameters accurately reflected AT: significant correlations (p ≤ 0.05) were observed between the AT W and the threshold W (i.e., MRRTW, etc.) of the HR, mean RR interval (MRR), low- and high-frequency spectral energy (LF and HF, respectively), high-frequency peak (fHF), and HFxfHF metrics. Differences between the HR or HRV metric threshold W and the AT W were less than 14 W for all subjects. The steady-state data from discontinuous protocols may allow for a true indication of steady-state physiologic stress responses and the corresponding W at AT, compared to continuous protocols using 1-2 min exercise stages.
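The trend-line intersection used to define each metric's threshold W can be sketched as a two-segment linear fit: try each interior stage as the breakpoint, keep the split with the smallest residual error, and take the intersection of the two fitted lines. The stage powers and heart rates below are invented for illustration.

```python
import numpy as np

# Hypothetical stage data: power output (W) and stage-averaged heart rate (bpm)
# with a change in slope near 180 W (values invented).
watts = np.array([100, 130, 160, 180, 200, 220, 240], dtype=float)
hr = np.array([112, 122, 132, 139, 152, 166, 181], dtype=float)

def trendline_intersection(x, y, split):
    """Fit separate lines to stages below/above a split index; return the intersection x."""
    m1, b1 = np.polyfit(x[:split + 1], y[:split + 1], 1)
    m2, b2 = np.polyfit(x[split:], y[split:], 1)
    return (b2 - b1) / (m1 - m2)

# Try each interior stage as the breakpoint and keep the fit with lowest residual error.
best = None
for split in range(2, len(watts) - 2):
    m1, b1 = np.polyfit(watts[:split + 1], hr[:split + 1], 1)
    m2, b2 = np.polyfit(watts[split:], hr[split:], 1)
    resid = (np.sum((hr[:split + 1] - (m1 * watts[:split + 1] + b1)) ** 2)
             + np.sum((hr[split:] - (m2 * watts[split:] + b2)) ** 2))
    if best is None or resid < best[0]:
        best = (resid, split)

hr_threshold_w = trendline_intersection(watts, hr, best[1])
print(f"HR 'threshold' power ≈ {hr_threshold_w:.0f} W")
```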
Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.
2015-01-01
Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self- administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231
USDA-ARS?s Scientific Manuscript database
Background/Question/Methods: Ecosystem thresholds are often identified by observing or inducing slow changes in different driver variables and investigating changes in the asymptotic state of the system, such as the response of lakes to nutrient loading or biome responses to climate change. Yet ma...
Dynamical predictors of an imminent phenotypic switch in bacteria
NASA Astrophysics Data System (ADS)
Wang, Huijing; Ray, J. Christian J.
2017-08-01
Single cells can stochastically switch across thresholds imposed by regulatory networks. Such thresholds can act as a tipping point, drastically changing global phenotypic states. In ecology and economics, imminent transitions across such tipping points can be predicted using dynamical early warning indicators. A typical example is ‘flickering’ of a fast variable, predicting a longer-lasting switch from a low to a high state or vice versa. Considering the different timescales between metabolite and protein fluctuations in bacteria, we hypothesized that metabolic early warning indicators predict imminent transitions across a network threshold caused by enzyme saturation. We used stochastic simulations to determine if flickering predicts phenotypic transitions, accounting for a variety of molecular physiological parameters, including enzyme affinity, burstiness of enzyme gene expression, homeostatic feedback, and rates of metabolic precursor influx. In most cases, we found that metabolic flickering rates are robustly peaked near the enzyme saturation threshold. The degree of fluctuation was amplified by product inhibition of the enzyme. We conclude that sensitivity to flickering in fast variables may be a possible natural or synthetic strategy to prepare physiological states for an imminent transition.
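A toy version of the flickering indicator can be sketched with a one-variable stochastic simulation: a metabolite supplied at a constant mean influx, consumed by a saturable (Michaelis-Menten) enzyme plus weak dilution, with the crossing rate of a mid-level threshold counted as the influx is swept through the enzyme's maximal rate. All parameters are invented and the model is far simpler than the networks simulated in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def flicker_rate(influx, vmax=10.0, km=1.0, dilution=0.02, noise=2.0,
                 level=40.0, dt=0.01, t_end=500.0):
    """Euler-Maruyama simulation of a metabolite consumed by a saturable enzyme,
    dm/dt = influx - vmax*m/(km + m) - dilution*m + noise,
    returning the rate of crossings of a mid-level threshold ('flickering')."""
    n = int(t_end / dt)
    m = np.empty(n)
    m[0] = km
    for i in range(1, n):
        drift = influx - vmax * m[i - 1] / (km + m[i - 1]) - dilution * m[i - 1]
        m[i] = max(m[i - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(), 0.0)
    crossings = np.count_nonzero(np.diff((m > level).astype(int)))
    return crossings / t_end

# Sweep the precursor influx through the enzyme's maximal rate (vmax = 10).
for influx in [8.0, 9.0, 10.0, 10.5, 11.0, 12.0]:
    print(f"influx {influx:4.1f}: crossing rate = {flicker_rate(influx):.2f} per unit time")
```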
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
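A hedged sketch of the adaptation loop described in this abstract: initial values are read from two map look-up tables, nudged so a toy performance variable approaches its target, finite-difference sensitivities are computed, and a table is written back only when its parameter's sensitivity exceeds a threshold. The maps, the plant model, and all numeric values are invented.

```python
import numpy as np

# Hypothetical engine maps indexed by (speed bin, load bin); all values invented.
spark_map = np.full((4, 4), 20.0)    # control parameter 1 (e.g., spark advance, deg)
egr_map = np.full((4, 4), 5.0)       # control parameter 2 (e.g., EGR rate, %)

def performance(spark, egr):
    """Toy stand-in for the measured engine performance variable."""
    return 10.0 - 0.0008 * (spark - 28.0) ** 2 - 0.0005 * (egr - 12.0) ** 2

def adapt(speed_bin, load_bin, target=10.0, step=0.5, sens_threshold=0.002, iters=50):
    spark = spark_map[speed_bin, load_bin]
    egr = egr_map[speed_bin, load_bin]
    for _ in range(iters):
        base = performance(spark, egr)
        # Finite-difference sensitivities of the performance variable to each parameter.
        s_spark = (performance(spark + step, egr) - base) / step
        s_egr = (performance(spark, egr + step) - base) / step
        # Nudge each parameter so the performance variable approaches the target.
        spark += step * np.sign(s_spark) * np.sign(target - base)
        egr += step * np.sign(s_egr) * np.sign(target - base)
        # Write back only the map whose parameter sensitivity exceeds the threshold.
        if abs(s_spark) > sens_threshold:
            spark_map[speed_bin, load_bin] = spark
        if abs(s_egr) > sens_threshold:
            egr_map[speed_bin, load_bin] = egr
    return spark_map[speed_bin, load_bin], egr_map[speed_bin, load_bin]

print("adapted (spark, EGR) at bin (1, 2):", adapt(1, 2))
```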
Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.
Menicucci, Nicolas C
2014-03-28
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans
2012-08-01
Part I described the topography of visual performance over the life span; performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps from static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. Extracting the cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.
Risk assessment of metal vapor arcing
NASA Technical Reports Server (NTRS)
Hill, Monika C. (Inventor); Leidecker, Henning W. (Inventor)
2009-01-01
A method for assessing metal vapor arcing risk for a component is provided. The method comprises acquiring a current variable value associated with an operation of the component; comparing the current variable value with a threshold value for the variable; evaluating compared variable data to determine the metal vapor arcing risk in the component; and generating a risk assessment status for the component.
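The four steps of the claimed method can be sketched directly: acquire a variable value, compare it with the threshold, evaluate, and emit a risk status. The threshold, margin, and status labels below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ArcRiskAssessment:
    component: str
    measured: float      # acquired variable value (e.g., circuit current, A)
    threshold: float     # arcing threshold for that variable
    margin: float        # fraction of the threshold treated as the caution band

    def status(self) -> str:
        """Compare the acquired value against the threshold and report a risk status."""
        if self.measured >= self.threshold:
            return "HIGH RISK: value at or above metal vapor arcing threshold"
        if self.measured >= (1.0 - self.margin) * self.threshold:
            return "CAUTION: value within margin of arcing threshold"
        return "LOW RISK: value below arcing threshold"

# Hypothetical relay contact carrying 3.7 A against an assumed 4.0 A arcing threshold.
print(ArcRiskAssessment("relay K12", measured=3.7, threshold=4.0, margin=0.15).status())
```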
Variability as a Subject Matter in a Science of Behavior: Reply to Commentaries
ERIC Educational Resources Information Center
Barba, Lourenco de Souza
2012-01-01
In his article, the author claimed that studies of operant variability that use a lag-"n" or threshold procedure and measure the obtained variability through the change in U value fail to provide direct evidence that variability is an operant dimension of behavior. To do so, he adopted Catania's (1973) concept of the operant, which takes the…
Son, Jaebum; Ashton-Miller, James A; Richardson, James K
2010-05-01
To determine whether ankle orthoses that provide medial and lateral support, and have been found to decrease gait variability in older persons with peripheral neuropathy, decrease (improve) frontal plane ankle proprioceptive thresholds or increase unipedal stance time in that same population. Observational study in which unipedal stance time was determined with a stopwatch, and frontal plane ankle (inversion and eversion) proprioceptive thresholds were quantified during bipedal stance using a foot cradle system and a series of 100 rotational stimuli, in 11 older neuropathic subjects (8 men; age 72 ± 7.1 yr) with and without ankle orthoses. The subjects demonstrated no change in combined frontal plane (inversion + eversion) proprioceptive thresholds or unipedal stance time with vs. without the orthoses (1.06 ± 0.56 vs. 1.13 ± 0.39 degrees, respectively; P = 0.955 and 6.1 ± 6.5 vs. 6.2 ± 5.4 secs, respectively; P = 0.922). Ankle orthoses that provide medial-lateral support do not seem to change ankle inversion/eversion proprioceptive thresholds or unipedal stance time in older persons with diabetic peripheral neuropathy. Previously identified improvements in gait variability using orthoses in this population are therefore likely related to an orthotically induced stiffening of the ankle rather than a change in ankle afferent function.
A human visual based binarization technique for histological images
NASA Astrophysics Data System (ADS)
Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos
2017-05-01
In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-ray, phase contrast microscopy, and histological images, presents problems such as high variability in terms of human anatomy and variation in modalities. Recent advances in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to those of state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
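One of the simplest adaptive rules, a local mean minus a constant offset (in the same family as the Niblack/Sauvola-style methods cited above, but not the paper's proposed technique), can be sketched as follows on a synthetic stained-tissue image. The window size, offset, and image model are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)

def adaptive_threshold(img, window=31, offset=20.0):
    """Binarize with a per-pixel threshold equal to the local mean minus a fixed
    offset (a mean-C adaptive rule; not the specific method proposed in the paper)."""
    local_mean = uniform_filter(img.astype(float), size=window)
    return img < (local_mean - offset)   # True where a pixel is darker than its local threshold

# Synthetic "stained tissue": dark blobs (cell nuclei) on an uneven bright background.
yy, xx = np.mgrid[0:256, 0:256]
img = 200.0 + 30.0 * np.sin(xx / 60.0)                 # illumination / stain gradient
for cy, cx in rng.integers(20, 236, size=(40, 2)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] -= 120.0  # dark circular "cells"
img += 5.0 * rng.standard_normal(img.shape)

mask = adaptive_threshold(img)
print(f"segmented foreground fraction: {mask.mean():.3f}")
```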
Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon
2017-01-01
[Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
Development of an epiphyte indicator of nutrient enrichment ...
Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and −25% and −50% seagrass response levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantile of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories (low, moderate, high, and very high) are proposed. Comparison of values of epiphyte loads associated with 25 and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of resultant impacts expressed by the macrophytes. Some variability in response levels was observed among
Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok
2017-09-30
The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
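A hedged sketch of the study design described above, using synthetic placeholder heart rates rather than the measured data: a simple linear regression predicting HRLT from HRT is fit on a 70% development sample and evaluated on the remaining 30%. The random 70/30 split shown here is a convenience substitute for the Bernoulli-trial randomization the authors describe.

```python
# Minimal sketch (synthetic data): fit HRLT ~ HRT on 70% of subjects,
# check the standard error of estimate (SEE) on the held-out 30%.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hrt = rng.normal(160, 12, size=220).reshape(-1, 1)            # heart rate threshold (bpm)
hrlt = 0.9 * hrt.ravel() + 10 + rng.normal(0, 11, size=220)   # assumed relation + noise

x_dev, x_val, y_dev, y_val = train_test_split(hrt, hrlt, test_size=0.3, random_state=0)

model = LinearRegression().fit(x_dev, y_dev)
pred = model.predict(x_val)
see = np.sqrt(np.mean((y_val - pred) ** 2))                   # validation SEE in bpm
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.1f}, SEE={see:.1f} bpm")
```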
Hafnium transistor process design for neural interfacing.
Parent, David W; Basham, Eric J
2009-01-01
A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology is comprised of 1-D analytical equations for threshold voltage specification, and doping profiles, and 1-D MIS Technical Computer Aided Design (TCAD) to design a process to implement a specific threshold voltage, which minimized simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide films (HfO) were grown and characterized for dielectric constant and fixed oxide charge for various annealing temperatures, two important design variables in threshold voltage design.
A masking level difference due to harmonicity.
Treurniet, W C; Boucher, D R
2001-01-01
The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.
Gaffin, Jonathan M.; Shotola, Nancy Lichtenberg; Martin, Thomas R.; Phipatanakul, Wanda
2010-01-01
Rationale In 2007 the American Thoracic Society (ATS) recommended guidelines for acceptability and repeatability for assessing spirometry in preschool children. The authors aim to determine the feasibility of spirometry among children in this age group performing spirometry for the first time in a busy clinical practice. Methods First-time spirometry for children age 4 to 5 years old was selected from the Children’s Hospital Boston Pulmonary Function Test (PFT) database. Maneuvers were deemed acceptable if (1) the flow-volume loop showed rapid rise and smooth descent; (2) the back extrapolated volume (Vbe), the volume leaked by a subject prior to the forced maneuver, was ≤80 ml and 12.5% of forced vital capacity (FVC); and (3) cessation of expiratory flow was at a point ≤10% of peak expiratory flow rate (PEFR). Repeatability was determined by another acceptable maneuver with forced expiratory volume in t seconds (FEVt) and FVC within 10% or 0.1 L of the best acceptable maneuver. Post hoc analysis compared spirometry values for those with asthma and cystic fibrosis to normative values. Results Two hundred and forty-eight preschool children performed spirometry for the first time between August 26, 2006, and August 25, 2008. At least one technically acceptable maneuver was found in 82.3% (n = 204) of the tests performed. Overall, 54% of children were able to perform acceptable and repeatable spirometry based on the ATS criteria. Children with asthma or cystic fibrosis did not have spirometry values that differed significantly from healthy controls. However, up to 29% of the overall cohort displayed at least one abnormal spirometry value. Conclusions Many preschool-aged children are able to perform technically acceptable and repeatable spirometry under normal conditions in a busy clinical setting. Spirometry may be a useful screen for abnormal lung function in this age group. PMID:20653495
Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges.
Mellin, C; Mouillot, D; Kulbicki, M; McClanahan, T R; Vigliola, L; Bradshaw, C J A; Brainard, R E; Chabanet, P; Edgar, G J; Fordham, D A; Friedlander, A M; Parravicini, V; Sequeira, A M M; Stuart-Smith, R D; Wantiez, L; Caley, M J
2016-02-03
Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions.
Johnson, Kevin A; Baig, Mirza; Ramsey, Dave; Lisanby, Sarah H; Avery, David; McDonald, William M; Li, Xingbao; Bernhardt, Elisabeth R; Haynor, David R; Holtzheimer, Paul E; Sackeim, Harold A; George, Mark S; Nahas, Ziad
2013-03-01
Motor cortex localization and motor threshold determination often guide Transcranial Magnetic Stimulation (TMS) placement and intensity settings for non-motor brain stimulation. However, anatomic variability results in variability of placement and effective intensity. Post-study analysis of the OPT-TMS Study reviewed both the final positioning and the effective intensity of stimulation (accounting for relative prefrontal scalp-cortex distances). We acquired MRI scans of 185 patients in a multi-site trial of left prefrontal TMS for depression. Scans had marked motor sites (localized with TMS) and marked prefrontal sites (5 cm anterior of motor cortex by the "5 cm rule"). Based on a visual determination made before the first treatment, TMS therapy occurred either at the 5 cm location or was adjusted 1 cm forward. Stimulation intensity was 120% of resting motor threshold. The "5 cm rule" would have placed stimulation in premotor cortex for 9% of patients, which was reduced to 4% with adjustments. We did not find a statistically significant effect of positioning on remission, but no patients with premotor stimulation achieved remission (0/7). Effective stimulation ranged from 93 to 156% of motor threshold, and no seizures were induced across this range. Patients experienced remission with effective stimulation intensity ranging from 93 to 146% of motor threshold, and we did not find a significant effect of effective intensity on remission. Our data indicates that individualized positioning methods are useful to reduce variability in placement. Stimulation at 120% of motor threshold, unadjusted for scalp-cortex distances, appears safe for a broad range of patients. Copyright © 2013 Elsevier Inc. All rights reserved.
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06) that departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
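A hedged illustration, on synthetic numbers rather than the psychophysical data, of the two relations reported above: a power-law (fluctuation-scaling) fit of threshold variance against mean threshold, and a quadratic fit of sample kurtosis against sample skewness.

```python
# Sketch on placeholder data: (i) estimate a fluctuation-scaling exponent from a
# log-log fit of variance vs. mean, (ii) fit K as a quadratic function of S.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# (i) Variance ~ mean**beta, with multiplicative noise; slope of log-log fit = beta.
means = np.linspace(1.0, 10.0, 20)
variances = 0.5 * means ** 2.5 * rng.lognormal(0, 0.1, size=means.size)
beta, _ = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated fluctuation-scaling exponent beta = {beta:.2f}")

# (ii) Skewness-kurtosis relation from repeated samples of skewed distributions
# (gamma family, for which K = 3 + 1.5*S**2 holds exactly).
shapes = np.linspace(0.5, 8.0, 25)
samples = [rng.gamma(shape=k, scale=1.0, size=2000) for k in shapes]
S = np.array([stats.skew(s) for s in samples])
K = np.array([stats.kurtosis(s, fisher=False) for s in samples])
a, b, c = np.polyfit(S, K, 2)
print(f"quadratic fit: K ~ {a:.2f}*S^2 + {b:.2f}*S + {c:.2f}")
```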
Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David
2003-06-01
To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar for samples located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Effects of urbanization on benthic macroinvertebrate communities in streams, Anchorage, Alaska
Ourso, Robert T.
2001-01-01
The effect of urbanization on stream macroinvertebrate communities was examined by using data gathered during a 1999 reconnaissance of 14 sites in the Municipality of Anchorage, Alaska. Data collected included macroinvertebrate abundance, water chemistry, and trace elements in bed sediments. Macroinvertebrate relative-abundance data were edited and used in metric and index calculations. Population density was used as a surrogate for urbanization. Cluster analysis (unweighted pair-group method using arithmetic means) of macroinvertebrate presence-absence data showed a well-defined separation between urbanized and nonurbanized sites, and also identified sites that did not fall cleanly into either category. Water quality in Anchorage generally declined with increasing urbanization (population density). Of 59 variables examined, 31 correlated with urbanization. Local regression analysis extracted 11 variables that showed a significant impairment threshold response and 6 that showed a significant linear response. Significant biological variables for determining the impairment threshold in this study were the Margalef diversity index, Ephemeroptera-Plecoptera-Trichoptera taxa richness, and total taxa richness. Significant thresholds were observed in the water-chemistry variables conductivity, dissolved organic carbon, potassium, and total dissolved solids. Significant thresholds in trace elements in bed sediments included arsenic, iron, manganese, and lead. Results suggest that sites in Anchorage that have ratios of population density to road density greater than 70, storm-drain densities greater than 0.45 miles per square mile, road densities greater than 4 miles per square mile, or population densities greater than 125-150 persons per square mile may require further monitoring to determine if the stream has become impaired. This population density is far less than the 1,000 persons per square mile used by the U.S. Census Bureau to define an urban area.
Quantifying patterns of change in marine ecosystem response to multiple pressures.
Large, Scott I; Fay, Gavin; Friedland, Kevin D; Link, Jason S
2015-01-01
The ability to understand and ultimately predict ecosystem response to multiple pressures is paramount to successfully implement ecosystem-based management. Threshold shifts and nonlinear patterns in ecosystem responses can be used to determine reference points that identify levels of a pressure that may drastically alter ecosystem status, which can inform management action. However, quantifying ecosystem reference points has proven elusive due in large part to the multi-dimensional nature of both ecosystem pressures and ecosystem responses. We used ecological indicators, synthetic measures of ecosystem status and functioning, to enumerate important ecosystem attributes and to reduce the complexity of the Northeast Shelf Large Marine Ecosystem (NES LME). Random forests were used to quantify the importance of four environmental and four anthropogenic pressure variables to the value of ecological indicators, and to quantify shifts in aggregate ecological indicator response along pressure gradients. Anthropogenic pressure variables were critical defining features and were able to predict an average of 8-13% (up to 25-66% for individual ecological indicators) of the variation in ecological indicator values, whereas environmental pressures were able to predict an average of 1-5% (up to 9-26% for individual ecological indicators) of ecological indicator variation. Each pressure variable predicted the variation of a different suite of ecological indicators, and the shapes of ecological indicator responses along pressure gradients were generally nonlinear. Threshold shifts in ecosystem response to exploitation, the most important pressure variable, occurred when commercial landings were 20 and 60% of total surveyed biomass. Although present, threshold shifts in ecosystem response to environmental pressures were much less important, which suggests that anthropogenic pressures have significantly altered the ecosystem structure and functioning of the NES LME. Gradient response curves provide ecologically informed transformations of pressure variables to explain patterns of ecosystem structure and functioning. By concurrently identifying thresholds for a suite of ecological indicator responses to multiple pressures, we demonstrate that ecosystem reference points can be evaluated and used to support ecosystem-based management.
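A minimal sketch of the analysis pattern described above, not the authors' code: a random forest predicts a synthetic ecological indicator from pressure variables, feature importances are inspected, and a one-way partial dependence curve is scanned for a threshold-like response. All variable names and data are illustrative.

```python
# Hedged sketch (synthetic data, illustrative variable names): random-forest
# importance of pressure variables for an ecological indicator, plus a manual
# one-way partial dependence scan for a threshold-like response.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),    # exploitation (landings / surveyed biomass)
    rng.normal(10, 2, n),    # temperature
    rng.uniform(0, 1, n),    # nutrient loading
    rng.normal(0, 1, n),     # climate index
])
# Synthetic indicator with a threshold response to exploitation near 0.4.
y = np.where(X[:, 0] > 0.4, 1.0, 0.2) + 0.1 * X[:, 1] + rng.normal(0, 0.2, n)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print("importances:", np.round(forest.feature_importances_, 3))

# Manual partial dependence on the exploitation variable: predict over a grid
# while holding the other columns at their observed values, then average.
grid = np.linspace(0, 1, 25)
pdp = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, 0] = v
    pdp.append(forest.predict(X_mod).mean())
pdp = np.asarray(pdp)
print("steepest change near exploitation =", round(grid[np.argmax(np.diff(pdp))], 2))
```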
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Thermal sensitivity and cardiovascular reactivity to stress in healthy males.
Conde-Guzón, Pablo Antonio; Bartolomé-Albistegui, María Teresa; Quirós, Pilar; Cabestrero, Raúl
2011-11-01
This paper examines the association of cardiovascular reactivity with thermal thresholds (detection and unpleasantness). Heart period (HP), systolic (SBP) and diastolic (DBP) blood pressure of 42 healthy young males were recorded during a cardiovascular reactivity task (a videogame based upon Sidman's avoidance paradigm). Thermal sensitivity, assessing detection and unpleasantness thresholds with radiant heat in the forearm, was also estimated for participants. Participants whose differential scores in the cardiovascular variables from baseline to task were ≥ P65 were considered reactors, and those whose differential scores were ≤ P35 were considered non-reactors. Significant differences were observed between groups in the unpleasantness thresholds for blood pressure (BP) but not for HP. Reactors exhibited significantly higher unpleasantness thresholds than non-reactors. No significant differences were obtained in detection thresholds between groups.
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.
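For orientation, a hedged sketch of the polynomial-interpolation machinery that (t, n) threshold schemes such as the one above build on: classic Shamir sharing over a prime field, where any t shares recover the secret by Lagrange interpolation at x = 0. The threshold-changing mechanism and the two-variable one-way function of the proposed schemes are not reproduced here.

```python
# Minimal Shamir (t, n) secret sharing over a prime field (toy demo only).
import random

PRIME = 2_147_483_647  # a Mersenne prime, large enough for this illustration

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```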
NASA Astrophysics Data System (ADS)
Jakob, Matthias; Weatherly, Hamish
2003-09-01
Landslides triggered by rainfall are the cause of thousands of deaths worldwide every year. One possible approach to limit the socioeconomic consequences of such events is the development of climatic thresholds for landslide initiation. In this paper, we propose a method that incorporates antecedent rainfall and streamflow data to develop a landslide initiation threshold for the North Shore Mountains of Vancouver, British Columbia. Hydroclimatic data were gathered for 18 storms that triggered landslides and 18 storms that did not. Discriminant function analysis separated the landslide-triggering storms from those storms that did not trigger landslides and selected the most meaningful variables that allow this separation. Discriminant functions were also developed for the landslide-triggering and nonlandslide-triggering storms. The difference of the discriminant scores, ΔCS, for both groups is a measure of landslide susceptibility during a storm. The variables identified that optimize the separation of the two storm groups are 4-week rainfall prior to a significant storm, 6-h rainfall during a storm, and the number of hours that a discharge of 1 m³/s was exceeded at Mackay Creek during a storm. Three thresholds were identified. The Landslide Warning Threshold (LWT) is reached when ΔCS is -1. The Conditional Landslide Initiation Threshold (CTL I) is reached when ΔCS is zero, and it implies that landslides are likely if a rainfall intensity of 4 mm/h is exceeded, at which point the Imminent Landslide Initiation Threshold (ITL I) is reached. The LWT allows time for the issuance of a landslide advisory and for moving personnel out of hazardous areas. The methodology proposed in this paper can be transferred to other regions worldwide where type and quality of data are appropriate for this type of analysis.
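A minimal sketch of the discriminant-analysis step described above, on synthetic storms rather than the Vancouver data: linear discriminant analysis separates landslide-triggering from non-triggering storms using the three selected predictors, and the signed decision score plays the role of a ΔCS-like susceptibility measure.

```python
# Hedged sketch: LDA on synthetic storm predictors (not the study's data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# Columns: 4-week antecedent rain (mm), 6-h storm rain (mm), hours above 1 m^3/s.
landslide = rng.normal([300, 40, 10], [60, 8, 3], size=(18, 3))
no_slide = rng.normal([180, 20, 3], [60, 8, 3], size=(18, 3))

X = np.vstack([landslide, no_slide])
y = np.array([1] * 18 + [0] * 18)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
# decision_function > 0 leans toward "landslide-triggering"; its magnitude is
# a susceptibility score loosely analogous to the paper's ΔCS.
print("susceptibility score for a new storm:", lda.decision_function([[260, 35, 6]])[0])
```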
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric GISS and HadCRUT3 temperature time series were analysed for structural changes. We postulate that the fitted temperature function of time is continuous across segment boundaries. The slopes are calculated for a sequence of segments delimited by time thresholds, using a standard method, restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds, with the thresholds searched continuously within specified time intervals. The F-statistic is used to obtain the time points of the structural changes.
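A hedged sketch of the single-breakpoint case of such an analysis, on synthetic data standing in for the GISS/HadCRUT3 series: a continuous piecewise-linear (segmented) regression of temperature on time, with the break year found by grid search and compared against a single linear trend by an F-statistic. The naive p-value shown ignores the search over break years and is therefore optimistic.

```python
# Sketch: continuity-constrained structural-change fit on a synthetic series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years = np.arange(1950, 2012)
temp = (0.005 * (years - 1950)
        + np.where(years > 1975, 0.015 * (years - 1975), 0.0)
        + rng.normal(0, 0.08, size=years.size))

def fit_sse(design, y):
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    return np.sum((y - design @ beta) ** 2)

X0 = np.column_stack([np.ones_like(years), years])    # null model: one linear trend
sse0 = fit_sse(X0, temp)

# Adding a hinge term max(0, year - tau) keeps the fitted function continuous.
sse1, tau_hat = min(
    (fit_sse(np.column_stack([X0, np.clip(years - tau, 0, None)]), temp), tau)
    for tau in years[5:-5]
)
f_stat = (sse0 - sse1) / (sse1 / (years.size - 3))
p_naive = stats.f.sf(f_stat, 1, years.size - 3)       # optimistic: ignores the search
print(f"estimated break year ~ {tau_hat}, F = {f_stat:.1f}, naive p = {p_naive:.2g}")
```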
Alonso-Coello, Pablo; Montori, Victor M; Díaz, M Gloria; Devereaux, Philip J; Mas, Gemma; Diez, Ana I; Solà, Ivan; Roura, Mercè; Souto, Juan C; Oliver, Sven; Ruiz, Rafael; Coll-Vinent, Blanca; Gich, Ignasi; Schünemann, Holger J; Guyatt, Gordon
2015-12-01
Exploration of values and preferences in the context of anticoagulation therapy for atrial fibrillation (AF) remains limited. To better characterize the distribution of patient and physician values and preferences relevant to decisions regarding anticoagulation in patients with AF, we conducted interviews with patients at risk of developing AF and physicians who manage patients with AF. We interviewed 96 outpatients and 96 physicians in a multicenter study and elicited the maximal increased risk of bleeding (threshold risk) that respondents would tolerate with warfarin vs. aspirin to achieve a reduction of three strokes per 100 patients over a 2-year period. We used the probabilistic version of the threshold technique. The median threshold risk for both patients and physicians was 10 additional bleeds (P = 0.7). In both groups, we observed large variability in the threshold number of bleeds, with wider variability in patients than clinicians [patient range: 0-100, physician range: 0-50]. We observed one cluster of patients and physicians who would tolerate <10 bleeds and another cluster of patients, but not physicians, who would accept more than 35. Our findings suggest wide variability in patient and physician values and preferences regarding the trade-off between strokes and bleeds. Results suggest that in individual decision making, physician and patient values and preferences will often be discordant; this mandates tailoring treatment to the individual patient's preferences. © 2014 John Wiley & Sons Ltd.
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from different disorders such as Prader-Willi syndrome, DiGeorge syndrome or autism, where the proposed method showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
Tourism development and economic growth: a nonlinear approach
NASA Astrophysics Data System (ADS)
Po, Wan-Chen; Huang, Bwo-Nung
2008-09-01
We use cross-sectional data (1995-2005 yearly averages) for 88 countries to investigate the nonlinear relationship between tourism development and economic growth when a threshold variable is used. The degree of tourism specialization (qi, defined as receipts from international tourism as a percentage of GDP) is used as the threshold variable. The results of the tests for nonlinearity indicate that the 88 countries’ data should be separated into three different groups or regimes to analyze the tourism-growth nexus. The results of the threshold regression show that when the qi is below 4.0488% (regime 1, 57 countries) or above 4.7337% (regime 3, 23 countries), there exists a significantly positive relationship between tourism growth and economic growth. However, when the qi is above 4.0488% and below 4.7337% (regime 2, 8 countries), we are unable to find evidence of such a significant relationship. Further in-depth analysis reveals that relatively low ratios of the value added of the service industry to GDP and of forested area to total country area are able to explain why we are unable to find a significant relationship between these two variables in regime 2’s countries.
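A minimal sketch of the sample-splitting idea behind threshold regression, not the authors' estimator: the tourism-specialization cutoff is chosen to minimise the pooled residual sum of squares of regime-specific growth regressions. The data are synthetic placeholders for the 88-country sample, and only a single threshold (two regimes) is searched.

```python
# Hedged sketch: grid-search threshold regression on synthetic cross-section data.
import numpy as np

rng = np.random.default_rng(5)
n = 88
q = rng.uniform(0, 12, n)                          # tourism receipts as % of GDP
tourism_growth = rng.normal(3, 1, n)
growth = np.where(q < 4.0, 0.5, 0.8) * tourism_growth + rng.normal(0, 0.5, n)

def regime_sse(mask):
    """Residual sum of squares of an OLS growth regression within one regime."""
    X = np.column_stack([np.ones(mask.sum()), tourism_growth[mask]])
    beta = np.linalg.lstsq(X, growth[mask], rcond=None)[0]
    return np.sum((growth[mask] - X @ beta) ** 2)

candidates = np.quantile(q, np.linspace(0.15, 0.85, 50))   # keep regimes non-trivial
sse = [regime_sse(q < c) + regime_sse(q >= c) for c in candidates]
q_star = candidates[int(np.argmin(sse))]
print(f"estimated threshold q* ~ {q_star:.2f}% of GDP")
```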
Visualising inter-subject variability in fMRI using threshold-weighted overlap maps
NASA Astrophysics Data System (ADS)
Seghier, Mohamed L.; Price, Cathy J.
2016-02-01
Functional neuroimaging studies are revealing the neural systems sustaining many sensory, motor and cognitive abilities. A proper understanding of these systems requires an appreciation of the degree to which they vary across subjects. Some sources of inter-subject variability might be easy to measure (demographics, behavioural scores, or experimental factors), while others are more difficult (cognitive strategies, learning effects, and other hidden sources). Here, we introduce a simple way of visualising whole-brain consistency and variability in brain responses across subjects using threshold-weighted voxel-based overlap maps. The output quantifies the proportion of subjects activating a particular voxel or region over a wide range of statistical thresholds. The sensitivity of our approach was assessed in 30 healthy adults performing a matching task with their dominant hand. We show how overlap maps revealed many effects that were only present in a subsample of our group; we discuss how overlap maps can provide information that may be missed or misrepresented by standard group analysis, and how this information can help users to understand their data. In particular, we emphasize that functional overlap maps can be particularly useful when it comes to explaining typical (or atypical) compensatory mechanisms used by patients following brain damage.
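A hedged sketch of the threshold-weighted overlap idea, with random t-maps standing in for real group data: for each voxel, the fraction of subjects exceeding each of several statistical thresholds is averaged over thresholds, so voxels that remain active in more subjects at stricter thresholds score higher.

```python
# Sketch on synthetic subject-level t-maps (not real fMRI data).
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_voxels = 30, 10_000
t_maps = rng.normal(0, 1, size=(n_subjects, n_voxels))
t_maps[:, :500] += rng.uniform(1, 4, size=(n_subjects, 1))   # a "truly active" region

thresholds = np.linspace(1.5, 4.5, 7)                         # range of t cut-offs
# overlap[v] = mean over thresholds of the fraction of subjects with t > threshold.
exceed = t_maps[None, :, :] > thresholds[:, None, None]       # (threshold, subject, voxel)
overlap = exceed.mean(axis=(0, 1))

print("weighted overlap in active region:", overlap[:500].mean().round(3))
print("weighted overlap elsewhere:      ", overlap[500:].mean().round(3))
```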
Laboratory test variables useful for distinguishing upper from lower gastrointestinal bleeding.
Tomizawa, Minoru; Shinozaki, Fuminobu; Hasegawa, Rumiko; Shirai, Yoshinori; Motoyoshi, Yasufumi; Sugiyama, Takao; Yamamoto, Shigenori; Ishige, Naoki
2015-05-28
To distinguish upper from lower gastrointestinal (GI) bleeding. Patient records between April 2011 and March 2014 were analyzed retrospectively (3296 upper endoscopy, and 1520 colonoscopy). Seventy-six patients had upper GI bleeding (Upper group) and 65 had lower GI bleeding (Lower group). Variables were compared between the groups using one-way analysis of variance. Logistic regression was performed to identify variables significantly associated with the diagnosis of upper vs lower GI bleeding. Receiver-operator characteristic (ROC) analysis was performed to determine the threshold value that could distinguish upper from lower GI bleeding. Hemoglobin (P = 0.023), total protein (P = 0.0002), and lactate dehydrogenase (P = 0.009) were significantly lower in the Upper group than in the Lower group. Blood urea nitrogen (BUN) was higher in the Upper group than in the Lower group (P = 0.0065). Logistic regression analysis revealed that BUN was most strongly associated with the diagnosis of upper vs lower GI bleeding. ROC analysis revealed a threshold BUN value of 21.0 mg/dL, with a specificity of 93.0%. The threshold BUN value for distinguishing upper from lower GI bleeding was 21.0 mg/dL.
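A minimal sketch of the ROC step described above, on synthetic BUN values rather than the patient records: the ROC curve for BUN as a marker of upper GI bleeding is computed, and the cut-off is chosen as the one giving the best sensitivity subject to a specificity of at least 93%.

```python
# Hedged sketch: ROC-based threshold selection on synthetic BUN values.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
bun_upper = rng.normal(28, 8, 76)     # upper-GI bleeding group (mg/dL), synthetic
bun_lower = rng.normal(16, 5, 65)     # lower-GI bleeding group (mg/dL), synthetic

y_true = np.r_[np.ones(76), np.zeros(65)]
scores = np.r_[bun_upper, bun_lower]

fpr, tpr, thresholds = roc_curve(y_true, scores)
ok = 1 - fpr >= 0.93                                   # specificity constraint
best = np.argmax(tpr[ok])                              # best sensitivity among those
print(f"BUN cut-off ~ {thresholds[ok][best]:.1f} mg/dL, "
      f"sensitivity {tpr[ok][best]:.2f}, specificity {1 - fpr[ok][best]:.2f}")
```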
Laboratory test variables useful for distinguishing upper from lower gastrointestinal bleeding
Tomizawa, Minoru; Shinozaki, Fuminobu; Hasegawa, Rumiko; Shirai, Yoshinori; Motoyoshi, Yasufumi; Sugiyama, Takao; Yamamoto, Shigenori; Ishige, Naoki
2015-01-01
AIM: To distinguish upper from lower gastrointestinal (GI) bleeding. METHODS: Patient records between April 2011 and March 2014 were analyzed retrospectively (3296 upper endoscopy, and 1520 colonoscopy). Seventy-six patients had upper GI bleeding (Upper group) and 65 had lower GI bleeding (Lower group). Variables were compared between the groups using one-way analysis of variance. Logistic regression was performed to identify variables significantly associated with the diagnosis of upper vs lower GI bleeding. Receiver-operator characteristic (ROC) analysis was performed to determine the threshold value that could distinguish upper from lower GI bleeding. RESULTS: Hemoglobin (P = 0.023), total protein (P = 0.0002), and lactate dehydrogenase (P = 0.009) were significantly lower in the Upper group than in the Lower group. Blood urea nitrogen (BUN) was higher in the Upper group than in the Lower group (P = 0.0065). Logistic regression analysis revealed that BUN was most strongly associated with the diagnosis of upper vs lower GI bleeding. ROC analysis revealed a threshold BUN value of 21.0 mg/dL, with a specificity of 93.0%. CONCLUSION: The threshold BUN value for distinguishing upper from lower GI bleeding was 21.0 mg/dL. PMID:26034359
Experimental evidence of a pathogen invasion threshold
Krkošek, Martin
2018-01-01
Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts per litre) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
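A hedged sketch of how a host-density invasion threshold arises in a generic environmentally transmitted SI model, with illustrative rates rather than the fitted Daphnia-Metschnikowia parameters: susceptible hosts S, infected hosts I and environmental spores Z; the pathogen can invade only when beta*lam*S0/(d*m) > 1, i.e. when the initial host density exceeds d*m/(beta*lam).

```python
# Sketch: invasion threshold in a simple S-I-Z model (illustrative rates only).
from scipy.integrate import solve_ivp

beta, lam, d, m = 0.0002, 5.0, 0.1, 1.0        # transmission, shedding, loss, decay
threshold_density = d * m / (beta * lam)
print(f"predicted invasion threshold ~ {threshold_density:.0f} hosts per litre")

def rhs(t, y):
    S, I, Z = y
    return [-beta * S * Z, beta * S * Z - d * I, lam * I - m * Z]

for s0 in (20, 40, 160, 640):                  # host densities spanning the threshold
    sol = solve_ivp(rhs, (0, 200), [s0, 0.0, 1.0], rtol=1e-8)
    invaded = sol.y[1].max() > 1.0             # did infected hosts ever build up?
    print(f"S0 = {s0:4d} hosts per litre: epidemic {'yes' if invaded else 'no'}")
```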
Son, Jaebum; Ashton-Miller, James A.; Richardson, James K.
2010-01-01
Objective To determine whether ankle orthoses that provide medial and lateral support, and have been found to decrease gait variability in older persons with peripheral neuropathy, decrease (improve) frontal plane ankle proprioceptive thresholds or increase unipedal stance time in that same population. Design Observational study in which unipedal stance time was determined with a stopwatch, and frontal plane ankle (inversion and eversion) proprioceptive thresholds were quantified during bipedal stance with and without the ankle orthoses, in 11 older diabetic subjects with peripheral neuropathy (8 men; age 72 ± 7.1 years) using a foot cradle system which presented a series of 100 rotational stimuli. Results The subjects demonstrated no change in combined frontal plane (inversion + eversion) proprioceptive thresholds or unipedal stance time with versus without the orthoses (1.06 ± 0.56 versus 1.13 ± 0.39 degrees, respectively; p = 0.955 and 6.1 ± 6.5 versus 6.2 ± 5.4 seconds, respectively; p = 0.922). Conclusion Ankle orthoses which provide medial-lateral support do not appear to change ankle inversion/eversion proprioceptive thresholds or unipedal stance time in older persons with diabetic peripheral neuropathy. Previously identified improvements in gait variability using orthoses in this population are therefore likely related to an orthotically-induced stiffening of the ankle rather than a change in ankle afferent function. PMID:20407302
ERIC Educational Resources Information Center
Kreiter, Clarence D.
2007-01-01
The academic performance consequences of relying solely on non-cognitive factors for selecting applicants above a GPA and MCAT threshold have not been fully considered in the literature. This commentary considers the impact of using a "threshold approach" on academic performance as assessed with the USMLE Step 1.
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Behavior of motor units in human biceps brachii during a submaximal fatiguing contraction.
Garland, S J; Enoka, R M; Serrano, L P; Robinson, G A
1994-06-01
The activity of 50 single motor units was recorded in the biceps brachii muscle of human subjects while they performed submaximal isometric elbow flexion contractions that were sustained to induce fatigue. The purposes of this study were to examine the influence of fatigue on motor unit threshold force and to determine the relationship between the threshold force of recruitment and the initial interimpulse interval on the discharge rates of single motor units during a fatiguing contraction. The discharge rate of most motor units that were active from the beginning of the contraction declined during the fatiguing contraction, whereas the discharge rates of most newly recruited units were either constant or increased slightly. The absolute threshold forces of recruitment and derecruitment decreased, and the variability of interimpulse intervals increased after the fatigue task. The change in motor unit discharge rate during the fatigue task was related to the initial rate, but the direction of the change in discharge rate could not be predicted from the threshold force of recruitment or the variability in the interimpulse intervals. The discharge rate of most motor units declined despite an increase in the excitatory drive to the motoneuron pool during the fatigue task.
A geographic analysis of population density thresholds in the influenza pandemic of 1918-19.
Chandra, Siddharth; Kassens-Noor, Eva; Kuljanin, Goran; Vertalka, Joshua
2013-02-20
Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918-19 in India, where over 15 million people died in the short span of less than one year. Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918-19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold.
A geographic analysis of population density thresholds in the influenza pandemic of 1918–19
2013-01-01
Background Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918–19 in India, where over 15 million people died in the short span of less than one year. Methods Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918–19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. Results The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). Conclusions This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold. PMID:23425498
Factors Influencing the Incidence of Obesity in Australia: A Generalized Ordered Probit Model.
Avsar, Gulay; Ham, Roger; Tannous, W Kathy
2017-02-10
The increasing health costs of and the risk factors associated with obesity are well documented. From this perspective, it is important that the propensity of individuals towards obesity is analyzed. This paper uses longitudinal data from the Household Income and Labour Dynamics in Australia (HILDA) Survey for 2005 to 2010 to model those variables which condition the probability of being obese. The model estimated is a random effects generalized ordered probit, which exploits two sources of heterogeneity: the individual heterogeneity of panel data models and heterogeneity across body mass index (BMI) categories. The latter is associated with non-parallel thresholds in the generalized ordered model, where the thresholds are functions of the conditioning variables, which comprise economic, social, and demographic and lifestyle variables. To control for potential predisposition to obesity, personality traits augment the empirical model. The results support the view that the probability of obesity is significantly determined by the conditioning variables. Particularly, personality is found to be important and these outcomes reinforce other work examining personality and obesity.
Variability and Order in Cytoskeletal Dynamics of Motile Amoeboid Cells
NASA Astrophysics Data System (ADS)
Hsu, Hsin-Fang; Bodenschatz, Eberhard; Westendorf, Christian; Gholami, Azam; Pumir, Alain; Tarantola, Marco; Beta, Carsten
2017-10-01
The chemotactic motion of eukaryotic cells such as leukocytes or metastatic cancer cells relies on membrane protrusions driven by the polymerization and depolymerization of actin. Here we show that the response of the actin system to a receptor stimulus is subject to a threshold value that varies strongly from cell to cell. Above the threshold, we observe pronounced cell-to-cell variability in the response amplitude. The polymerization time, however, is almost constant over the entire range of response amplitudes, while the depolymerization time increases with increasing amplitude. We show that cell-to-cell variability in the response amplitude correlates with the amount of Arp2/3, a protein that enhances actin polymerization. A time-delayed feedback model for the cortical actin concentration is consistent with all our observations and confirms the role of Arp2/3 in the observed cell-to-cell variability. Taken together, our observations highlight robust regulation of the actin response that enables a reliable timing of cell movement.
Bierer, Julie Arenberg; Faulkner, Kathleen F
2010-04-01
The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, and tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve (PTC) measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar (pTP) electrode configuration are predictive of wide or tip-shifted PTCs. Data were collected from five cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp., Sylmar, CA). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the pTP configuration for which a fraction of current (sigma) from a center-active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked PTCs were obtained for channels with the highest, lowest, and median tripolar (sigma = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (sigma = 0) or a more focused pTP (sigma > or = 0.55) configuration. The masker channel and level were varied, whereas the configuration was fixed to sigma = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, sigma, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, pTP stimulus, had significantly broader PTCs than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for the monopolar probes than with pTP probes for both the highest and lowest threshold channels. These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
Going wireless and booth-less for hearing testing in industry.
Meinke, Deanna K; Norris, Jesse A; Flynn, Brendan P; Clavier, Odile H
2017-01-01
To assess the test-retest variability of hearing thresholds obtained with an innovative, mobile wireless automated hearing-test system (WAHTS) with enhanced sound attenuation to test industrial workers at a worksite as compared to standardised automated hearing thresholds obtained in a mobile trailer sound booth. A within-subject repeated-measures design was used to compare air-conducted threshold tests (500-8000 Hz) measured with the WAHTS in six workplace locations, and a third test using computer-controlled audiometry obtained in a mobile trailer sound booth. Ambient noise levels were measured in all test environments. Twenty workers served as listeners and 20 workers served as operators. On average, the WAHTS resulted in equivalent thresholds as the mobile trailer audiometry at 1000, 2000, 3000 and 8000 Hz and thresholds were within ±5 dB at 500, 4000 and 6000 Hz. Comparable performance may be obtained with the WAHTS in occupational audiometry and valid thresholds may be obtained in diverse test locations without the use of sound-attenuating enclosures.
Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade
2008-07-07
Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.
NASA Astrophysics Data System (ADS)
Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.
2006-07-01
In this paper, we investigate the asymmetric reactions of mean and volatility of stock returns in five major markets to their own local news and the US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between the information of domestic and US stock markets leads to the asymmetric reactions of stock returns and their variability. In addition, this research also finds that the positive autocorrelation reported in the previous studies of financial markets may in fact be mis-specified, and actually due to the local market's positive response to the US stock market.
Environment and host as large-scale controls of ectomycorrhizal fungi.
van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I
2018-06-06
Explaining the large-scale diversity of soil organisms that drive biogeochemical processes-and their responses to environmental change-is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is-to our knowledge-unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.
Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges
Mellin, C.; Mouillot, D.; Kulbicki, M.; McClanahan, T. R.; Vigliola, L.; Bradshaw, C. J. A.; Brainard, R. E.; Chabanet, P.; Edgar, G. J.; Fordham, D. A.; Friedlander, A. M.; Parravicini, V.; Sequeira, A. M. M.; Stuart-Smith, R. D.; Wantiez, L.; Caley, M. J.
2016-01-01
Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions. PMID:26839155
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
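As a rough illustration of how such a pass/fail rule behaves, the hedged Python sketch below simulates a "3 seen out of at most 5 presentations" criterion at a single test location for a range of true detection probabilities; the stopping rule and the probabilities are illustrative assumptions, not the exact HEART implementation.

```python
import random

def location_passes(p_seen, rng, max_presentations=5, required_seen=3):
    """Simulate one test location: stimuli are presented until the
    location is passed (required_seen stimuli seen) or a pass is no
    longer possible within max_presentations."""
    seen = missed = 0
    while seen < required_seen and missed <= max_presentations - required_seen:
        if rng.random() < p_seen:
            seen += 1
        else:
            missed += 1
    return seen >= required_seen

def pass_rate(p_seen, n_trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(location_passes(p_seen, rng) for _ in range(n_trials)) / n_trials

for p in (0.95, 0.8, 0.5, 0.2):
    print(f"detection probability {p:.2f} -> pass rate {pass_rate(p):.3f}")
```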
Liyanage, Ganesha S; Ayre, David J; Ooi, Mark K J
2016-11-01
The production of morphologically different seeds or fruits by the same individual plant is known as seed heteromorphism. Such variation is expected to be selected for in disturbance-prone environments to allow germination into inherently variable regeneration niches. However, there are few demonstrations that heteromorphic seed characteristics should be favored by selection or how they may be maintained. In fire-prone ecosystems, seed heteromorphism is found in the temperatures needed to break physical dormancy, with seeds responding to high or low temperatures, ensuring emergence under variable fire-regime-related soil heating. Because of the relationship between dormancy-breaking temperature thresholds and fire severity, we hypothesize that different post-fire resource conditions have selected for covarying seedling traits, which contribute to maintenance of such heteromorphism. Seeds with low thresholds emerge into competitive conditions, either after low-severity fire or in vegetation gaps, and are therefore likely to experience selection for seedling characteristics that make them good competitors. On the other hand, high-temperature-threshold seeds would emerge into less competitive environments, indicative of stand-clearing high-severity fires, and would not experience the same selective forces. We identified high and low-threshold seed morphs via dormancy-breaking heat treatments and germination trials for two study species and compared seed mass and other morphological characteristics between morphs. We then grew seedlings from the two different morphs, with and without competition, and measured growth and biomass allocation as indicators of seedling performance. Seedlings from low-threshold seeds of both species performed better than their high-threshold counterparts, growing more quickly under competitive conditions, confirming that different performance can result from this seed characteristic. Seed mass or appearance did not differ between morphs, indicating that dormancy-breaking temperature threshold variation is a form of cryptic heteromorphism. The potential shown for the selective influence of different post-fire environmental conditions on seedling performance provides evidence of a mechanism for the maintenance of heteromorphic variation in dormancy-breaking temperature thresholds. © 2016 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Hammond, W.; Yu, K.; Wilson, L. A.; Will, R.; Anderegg, W.; Adams, H. D.
2017-12-01
The strength of the terrestrial carbon sink—dominated by forests—remains one of the greatest uncertainties in climate change modelling. How forests will respond to increased variability in temperature and precipitation is poorly understood, and experimental study to better inform global vegetation models in this area is needed. Necessary for achieving this goal is an understanding of how increased temperatures and drought will affect landscape level distributions of plant species. Quantifying physiological thresholds representing a point of no return from drought stress, including thresholds in hydraulic function, is critical to this end. Recent theoretical, observational, and modelling research has converged upon a threshold of 60 percent loss of hydraulic conductivity at mortality (PLClethal). However, direct experimental determination of lethal points in conductivity and cavitation during drought is lacking. We quantified thresholds in hydraulic function in Loblolly pine, Pinus taeda, a commercially important timber species. In a greenhouse experiment, we exposed saplings (n = 96 total) to drought and rewatered treatment groups at variable levels of increasing water stress determined by pre-selected targets in pre-dawn water potential. Treatments also included a watered control with no drought, and drought with no rewatering. We measured physiological responses to water stress, including hydraulic conductivity, native PLC, water potential, foliar color, canopy die-back, and dark-adapted chlorophyll fluorescence. Following the rewatering treatment, we observed saplings for at least two months to determine which survived and which died. Using these data we calculated lethal physiological thresholds in water potential, directly measured PLC, and PLC inferred from water potential using a hydraulic vulnerability curve. We found that PLClethal inferred from water potential agreed with the 60% threshold suggested by previous research. However, directly measured PLC supported a much higher threshold. Beyond PLClethal, some trees survived by basal and epicormic re-sprouting, despite complete top-kill of existing foliage. Additional empirical study of multiple species to represent functional groups is needed to provide lethal thresholds for models presently in development.
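For readers unfamiliar with the PLC notation, percent loss of hydraulic conductivity is conventionally defined relative to the maximum (fully flushed) conductivity; the definition below is the standard one and is not specific to this study:

```latex
\mathrm{PLC} = 100\left(1 - \frac{K_h}{K_{\max}}\right)
```

so the 60% mortality threshold (PLClethal) discussed above corresponds to the stressed conductivity K_h falling to 40% of K_max.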
Seluianov, V N; Kalinin, E M; Pak, G D; Maevskaia, V I; Konrad, A H
2011-01-01
The aim of this work was to develop methods for determining the anaerobic threshold according to the rate of ventilation and to cardio interval variability during a test with stepwise increasing load on the cycle ergometer and treadmill. In the first phase, the method for determining the anaerobic threshold from lung ventilation was developed. 49 highly skilled skiers took part in the experiment. They performed a treadmill ski-walking test with sticks, with the slope gradually increasing from 0 to 25 degrees by one degree every minute. In the second phase, we developed a method for determining the anaerobic threshold according to the dynamics of cardio interval variability during the test. The study included 86 athletes of different sports specialties who performed pedaling on the cycle ergometer "Monarch". Initial output was 25 W, and power increased by 25 W every 2 min. The pace was steady at 75 rev/min. Measurement of pulmonary ventilation and of oxygen and carbon dioxide content was performed using a COSMED K4 gas analyzer. Sampling of arterial blood was carried out from the ear lobe or finger, and blood lactate concentration was determined using an "Akusport" instrument. RR-interval registration was performed using a Polar s810i heart rate monitor. As a result, it was shown that the graphical method for determining the onset of the anaerobic ventilation threshold (VAnP) coincides with a blood lactate accumulation of 3.8 +/- 0.1 mmol/l when testing on a treadmill and 4.1 +/- 0.6 mmol/l on the cycle ergometer. The connection between oxygen consumption at VAnP and the dispersion of cardio intervals (SD1) yielded the regression equation VO2AnT = 0.35 + 0.01 SD1W + 0.0016 SD1HR + 0.106 SD1(ms), l/min (R = 0.98, error of the evaluation function 0.26 l/min, p < 0.001), where W (W) is power, HR is heart rate (beats/min), and SD1 is the cardio interval dispersion (ms) at the moment of registration of the cardio interval threshold.
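The SD1 descriptor referenced in the regression above can be computed directly from an RR-interval series; the Python sketch below uses the standard Poincaré-plot identities and is a generic illustration, not the authors' processing pipeline.

```python
import numpy as np

def poincare_descriptors(rr_ms):
    """Return (SD1, SD2) in ms from successive RR intervals in ms.

    SD1 reflects short-term (beat-to-beat) variability, SD2 the
    longer-term variability of the Poincare plot (RR_n vs RR_n+1).
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sd1 = np.sqrt(0.5 * np.var(diff, ddof=1))
    sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - 0.5 * np.var(diff, ddof=1), 0.0))
    return sd1, sd2

# Example with a short synthetic RR series (ms)
rr = [812, 830, 825, 840, 818, 805, 822, 835]
sd1, sd2 = poincare_descriptors(rr)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
```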
NASA Astrophysics Data System (ADS)
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge—which has been used previously as the default breakpoint in segmented regression models to discern differences between pre and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics, as well as Bayesian regression intercepts and slopes. A SOM defined two clusters of high-flux basins, one where PP flux was predominantly episodic and hydrologically driven; and another in which the sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at prethreshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
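A simplified, non-Bayesian analogue of the segmented concentration-discharge regression can be sketched as a grid search over candidate breakpoints in log space; this only illustrates the threshold idea, not the hierarchical Bayesian estimation used in the study, and the variable names are hypothetical.

```python
import numpy as np

def fit_segmented_cq(log_q, log_c):
    """Fit log(C) = a + b1*log(Q) below a breakpoint, with a continuous
    change to slope b1 + b2 above it; return (breakpoint, coefficients,
    sse) for the best candidate breakpoint found by grid search."""
    best = None
    candidates = np.quantile(log_q, np.linspace(0.1, 0.9, 33))
    for bp in candidates:
        # Continuous piecewise-linear basis: hinge term above the breakpoint
        hinge = np.clip(log_q - bp, 0.0, None)
        X = np.column_stack([np.ones_like(log_q), log_q, hinge])
        coef, *_ = np.linalg.lstsq(X, log_c, rcond=None)
        sse = np.sum((log_c - X @ coef) ** 2)
        if best is None or sse < best[2]:
            best = (bp, coef, sse)
    return best

rng = np.random.default_rng(0)
log_q = rng.uniform(-1, 2, 300)
log_c = 0.2 + 0.1 * log_q + 0.8 * np.clip(log_q - 1.0, 0, None) + rng.normal(0, 0.1, 300)
bp, coef, sse = fit_segmented_cq(log_q, log_c)
print(f"estimated breakpoint (log Q): {bp:.2f}, slopes: {coef[1]:.2f} / {coef[1] + coef[2]:.2f}")
```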
A sex-specific relationship between capillary density and anaerobic threshold
Robbins, Jennifer L.; Duscha, Brian D.; Bensimhon, Daniel R.; Wasserman, Karlman; Hansen, James E.; Houmard, Joseph A.; Annex, Brian H.; Kraus, William E.
2009-01-01
Although both capillary density and peak oxygen consumption (V̇o2) improve with exercise training, it is difficult to find a relationship between these two measures. It has been suggested that peak V̇o2 may be more related to central hemodynamics than to the oxidative potential of skeletal muscle, which may account for this observation. We hypothesized that change in a measure of submaximal performance, anaerobic threshold, might be related to change in skeletal muscle capillary density, a marker of oxidative potential in muscle, with training. Due to baseline differences among these variables, we also hypothesized that relationships might be sex specific. A group of 21 subjects completed an inactive control period, whereas 28 subjects (17 men and 11 women) participated in a 6-mo high-intensity exercise program. All subjects were sedentary, overweight, and dyslipidemic. Potential relationships were assessed between change in capillary density with both change in V̇o2 at peak and at anaerobic threshold with exercise training. All variables and relationships were assessed for sex-specific effects. Change in peak V̇o2 was not related to change in capillary density after exercise training in either sex. Men had a positive correlation between change in V̇o2 at anaerobic threshold and change in capillary density with exercise training (r = 0.635; P < 0.01), whereas women had an inverse relationship (r = −0.636; P < 0.05) between the change in these variables. These findings suggest that, although enhanced capillary density is associated with training-induced improvements in submaximal performance in men, this relationship is different in women. PMID:19164774
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants.
Todd, Ann E; Goupell, Matthew J; Litovsky, Ruth Y
Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors nonspecific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs.
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive spatial variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. This method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration can produce outlier labels of the field. The technical contribution is that the spatial variable threshold motivation is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework for the first time. It turns out that a spatially variable threshold is preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. This method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrated that the proposed method yields competitive performance in terms of outlier under-detection count and over-detection count. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are also provided in supplementary materials.
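The core idea of a spatially variable threshold can be illustrated with a much-simplified residual test: each vector is compared against a local median field, and the acceptance threshold scales with a locally estimated noise level. This hedged sketch is not the authors' algorithm (it omits the iterative reconstruction and the modified detector), but it shows how a per-pixel threshold replaces a single global one.

```python
import numpy as np
from scipy.ndimage import median_filter

def local_variable_threshold_outliers(u, v, k=2.0, eps=0.1):
    """Flag outliers in a gridded 2-D vector field (u, v).

    For each component, the residual to a 3x3 local median is compared
    with k times a locally estimated noise scale (local median absolute
    residual), so the detection threshold varies in space.
    """
    flags = np.zeros(u.shape, dtype=bool)
    for comp in (u, v):
        med = median_filter(comp, size=3, mode="reflect")
        resid = np.abs(comp - med)
        noise = median_filter(resid, size=3, mode="reflect") + eps
        flags |= resid > k * noise
    return flags

rng = np.random.default_rng(1)
u = np.cos(np.linspace(0, np.pi, 32))[None, :] * np.ones((32, 32))
v = np.sin(np.linspace(0, np.pi, 32))[:, None] * np.ones((32, 32))
u[10, 10] += 5.0   # inject a spurious vector
print("outliers found at:", np.argwhere(local_variable_threshold_outliers(u, v)))
```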
Li, Yangfan; Li, Yi; Wu, Wei
2016-01-01
The concept of thresholds has important implications for environmental and resource management. Here we derived potential landscape thresholds which indicated abrupt changes in water quality or the dividing points between exceeding and failing to meet national surface water quality standards for a rapidly urbanizing city on the Eastern Coast of China. The analysis of landscape thresholds was based on regression models linking each of the seven water quality variables to each of the six landscape metrics for this coupled land-water system. We found substantial and accelerating urban sprawl in the suburban areas between 2000 and 2008, and detected significant nonlinear relations between water quality and landscape pattern. This research demonstrated that a simple modeling technique could provide insights on environmental thresholds to support more-informed decision making in land use, water environment and resilience management. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brown, James; Seo, Dong-Jun
2010-05-01
Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
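A stripped-down illustration of the indicator idea (without the cokriging weights or the orthogonal transform) is sketched below: observations are converted to exceedance indicators at a set of thresholds, each indicator is regressed linearly on the ensemble indicator mean, and the fitted values approximate the conditional cdf. Variable names, thresholds, and the single-predictor regression are hypothetical simplifications.

```python
import numpy as np

def fit_indicator_regression(ens_hist, obs_hist, thresholds):
    """For each threshold q, fit E[1{obs<=q}] = a_q + b_q * mean(1{ens<=q})
    on historical forecast/observation pairs; return per-threshold coefficients."""
    coefs = []
    for q in thresholds:
        x = (ens_hist <= q).mean(axis=1)      # ensemble indicator mean per forecast
        y = (obs_hist <= q).astype(float)     # observed indicator per forecast
        A = np.column_stack([np.ones_like(x), x])
        coefs.append(np.linalg.lstsq(A, y, rcond=None)[0])
    return np.array(coefs)

def calibrated_cdf(ens_new, thresholds, coefs):
    """Approximate the conditional cdf of the observation given a new ensemble."""
    x = (ens_new <= thresholds[:, None]).mean(axis=1)
    p = coefs[:, 0] + coefs[:, 1] * x
    return np.clip(np.maximum.accumulate(p), 0.0, 1.0)  # monotone, valid probabilities

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 2.0, 500)
ens = obs[:, None] * 1.2 + rng.normal(0, 1.0, (500, 11))  # biased, correlated ensemble
qs = np.quantile(obs, np.linspace(0.05, 0.95, 19))
coefs = fit_indicator_regression(ens, obs, qs)
print(np.round(calibrated_cdf(ens[0], qs, coefs), 2))
```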
Synchronous temperature rate control for refrigeration with reduced energy consumption
Gomes, Alberto Regio; Keres, Stephen L.; Kuehl, Steven J.; Litch, Andrew D.; Richmond, Peter J.; Wu, Guolian
2015-09-22
Methods of operation for refrigerator appliance configurations with a controller, a condenser, at least one evaporator, a compressor, and two refrigeration compartments. The configuration may be equipped with a variable-speed or variable-capacity compressor, variable speed evaporator or compartment fans, a damper, and/or a dual-temperature evaporator with a valve system to control flow of refrigerant through one or more pressure reduction devices. The methods may include synchronizing alternating cycles of cooling each compartment to a temperature approximately equal to the compartment set point temperature by operation of the compressor, fans, damper and/or valve system. The methods may also include controlling the cooling rate in one or both compartments. Refrigeration compartment cooling may begin at an interval before or after when the freezer compartment reaches its lower threshold temperature. Freezer compartment cooling may begin at an interval before or after when the freezer compartment reaches its upper threshold temperature.
Heart rate variability and pain: associations of two interrelated homeostatic processes.
Appelhans, Bradley M; Luecken, Linda J
2008-02-01
Between-person variability in pain sensitivity remains poorly understood. Given a conceptualization of pain as a homeostatic emotion, we hypothesized inverse associations between measures of resting heart rate variability (HRV), an index of autonomic regulation of heart rate that has been linked to emotionality, and sensitivity to subsequently administered thermal pain. Resting electrocardiography was collected, and frequency-domain measures of HRV were derived through spectral analysis. Fifty-nine right-handed participants provided ratings of pain intensity and unpleasantness following exposure to 4 degrees C thermal pain stimulation, and indicated their thresholds for barely noticeable and moderate pain during three exposures to decreasing temperature. Greater low-frequency HRV was associated with lower ratings of 4 degrees C pain unpleasantness and higher thresholds for barely noticeable and moderate pain. High-frequency HRV was unrelated to measures of pain sensitivity. Findings suggest pain sensitivity is influenced by characteristics of a central homeostatic system also involved in emotion.
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations treat rainfall as a single, separate independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, direct calculation of the rainfall rates that will result in a given likelihood, and calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flow.
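The combination of the two approaches can be illustrated with a hedged sketch in which each basin covariate enters the logistic model multiplied by rainfall intensity, so that the rainfall rate corresponding to any target likelihood can be solved for in closed form. The coefficients and covariate names below are made-up placeholders, not the published model.

```python
import math

# Hypothetical fitted coefficients (intercept, steepness, burn severity, surface term)
B0, B1, B2, B3 = -3.6, 0.41, 0.67, 0.70

def debris_flow_likelihood(intensity, steep, burn, surface):
    """Likelihood of debris flow given rainfall intensity (mm/h) and basin
    covariates; each covariate is scaled by the rainfall intensity."""
    z = B0 + (B1 * steep + B2 * burn + B3 * surface) * intensity
    return 1.0 / (1.0 + math.exp(-z))

def intensity_threshold(p, steep, burn, surface):
    """Rainfall intensity at which the modelled likelihood equals p,
    obtained by inverting the logistic link."""
    logit = math.log(p / (1.0 - p))
    return (logit - B0) / (B1 * steep + B2 * burn + B3 * surface)

basin = dict(steep=0.6, burn=0.5, surface=0.3)
i50 = intensity_threshold(0.5, **basin)
print(f"I(p=0.5) = {i50:.1f} mm/h; likelihood at that intensity = "
      f"{debris_flow_likelihood(i50, **basin):.2f}")
```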
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
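The optimal-slope statement can be written out explicitly. The display below is the standard expected-utility formulation found in the decision-analysis literature, not a quotation from the paper:

```latex
S^{*} = \frac{P(D^-)}{P(D^+)} \times \frac{U_{TN} - U_{FP}}{U_{TP} - U_{FN}}
```

Here P(D+) and P(D-) are the prior probabilities of disease and non-disease, and the U terms are the utilities of true-negative, false-positive, true-positive and false-negative outcomes; the expected-utility-maximizing operating point is where the ROC curve has slope S*, which is also the optimal likelihood-ratio threshold.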
Continuous-variable teleportation of a negative Wigner function
NASA Astrophysics Data System (ADS)
Mišta, Ladislav, Jr.; Filip, Radim; Furusawa, Akira
2010-07-01
Teleportation is a basic primitive for quantum communication and quantum computing. We address the problem of continuous-variable (unconditional and conditional) teleportation of a pure single-photon state and a mixed attenuated single-photon state generally in a nonunity-gain regime. Our figure of merit is the maximum negativity of the Wigner function, which demonstrates a highly nonclassical feature of the teleported state. We find that the negativity of the Wigner function of the single-photon state can be unconditionally teleported for an arbitrarily weak squeezed state used to create the entangled state shared in teleportation. In contrast, for the attenuated single-photon state there is a strict threshold squeezing one has to surpass to successfully teleport the negativity of its Wigner function. The conditional teleportation allows one to approach perfect transmission of the single photon for an arbitrarily low squeezing at a cost of decrease of the success rate. In contrast, for the attenuated single photon state, conditional teleportation cannot overcome the squeezing threshold of the unconditional teleportation and it approaches negativity of the input state only if the squeezing increases simultaneously. However, as soon as the threshold squeezing is surpassed, conditional teleportation still pronouncedly outperforms the unconditional one. The main consequences for quantum communication and quantum computing with continuous variables are discussed.
Continuous-variable teleportation of a negative Wigner function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mista, Ladislav Jr.; Filip, Radim; Furusawa, Akira
2010-07-15
Teleportation is a basic primitive for quantum communication and quantum computing. We address the problem of continuous-variable (unconditional and conditional) teleportation of a pure single-photon state and a mixed attenuated single-photon state generally in a nonunity-gain regime. Our figure of merit is the maximum negativity of the Wigner function, which demonstrates a highly nonclassical feature of the teleported state. We find that the negativity of the Wigner function of the single-photon state can be unconditionally teleported for an arbitrarily weak squeezed state used to create the entangled state shared in teleportation. In contrast, for the attenuated single-photon state there ismore » a strict threshold squeezing one has to surpass to successfully teleport the negativity of its Wigner function. The conditional teleportation allows one to approach perfect transmission of the single photon for an arbitrarily low squeezing at a cost of decrease of the success rate. In contrast, for the attenuated single photon state, conditional teleportation cannot overcome the squeezing threshold of the unconditional teleportation and it approaches negativity of the input state only if the squeezing increases simultaneously. However, as soon as the threshold squeezing is surpassed, conditional teleportation still pronouncedly outperforms the unconditional one. The main consequences for quantum communication and quantum computing with continuous variables are discussed.« less
Effects of Soil Moisture Thresholds in Runoff Generation in two nested gauged basins
NASA Astrophysics Data System (ADS)
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.; Margiotta, M. R.; Onorati, B.; Rivelli, A. R.; Sole, A.
2009-04-01
Regarding catchment response to intense storm events, while the relevance of antecedent soil moisture conditions is generally recognized, the role and quantification of runoff thresholds remain uncertain. Among others, Grayson et al. (1997) argue that above a wetness threshold a substantial portion of a small basin acts in unison and contributes to runoff production. Investigations were conducted through an experimental approach, in particular by exploiting the hydrological data monitored in the "Fiumarella of Corleto" catchment (Southern Italy). The field instrumentation ensures continuous monitoring of all fundamental hydrological variables: climate forcing, streamflow and soil moisture. The experimental basin is equipped with two water level installations used to measure the hydrological response of the entire basin (with an area of 32 km2) and of a subcatchment of 0.65 km2. The aim of the present research is to better understand the dynamics of soil moisture and runoff generation during flood events, comparing the data recorded in the transect and the runoff at the two different scales. Particular attention was paid to the influence of soil moisture content on runoff activation mechanisms. We found that the threshold value responsible for runoff activation is equal or close to field capacity. In fact, we observed a rapid change in the subcatchment response when the mean soil moisture reaches a value close to the range of variability of the field capacity measured along a monitored transect of the small subcatchment. During dry periods the runoff coefficient is almost zero for each of the events recorded. During wet periods, however, it is rather variable and depends almost entirely on the total rainfall. Moving from the small scale (0.65 km2) up to the medium scale (represented by the basin of 32 km2), the threshold mechanism in runoff production is less detectable because it is masked by the increased spatial heterogeneity of the vegetation cover and soil texture.
Gutzwiller, Kevin J.; Barrow, Wylie C.; White, Joseph D.; Johnson-Randall, Lori; Cade, Brian S.; Zygo, Lisa M.
2010-01-01
1. Organism–environment models are used widely in conservation. The degree to which they are useful for informing conservation decisions – the conservation relevance of these relations – is important because lack of relevance may lead to misapplication of scarce conservation resources or failure to resolve important conservation dilemmas. Even when models perform well based on model fit and predictive ability, conservation relevance of associations may not be clear without also knowing the magnitude and variability of predicted changes in response variables. 2. We introduce a method for evaluating the conservation relevance of organism–environment relations that employs confidence intervals for predicted changes in response variables. The confidence intervals are compared to a preselected magnitude of change that marks a threshold (trigger) for conservation action. To demonstrate the approach, we used a case study from the Chihuahuan Desert involving relations between avian richness and broad-scale patterns of shrubland. We considered relations for three winters and two spatial extents (1- and 2-km-radius areas) and compared predicted changes in richness to three thresholds (10%, 20% and 30% change). For each threshold, we examined 48 relations. 3. The method identified seven, four and zero conservation-relevant changes in mean richness for the 10%, 20% and 30% thresholds respectively. These changes were associated with major (20%) changes in shrubland cover, mean patch size, the coefficient of variation for patch size, or edge density but not with major changes in shrubland patch density. The relative rarity of conservation-relevant changes indicated that, overall, the relations had little practical value for informing conservation decisions about avian richness. 4. The approach we illustrate is appropriate for various response and predictor variables measured at any temporal or spatial scale. The method is broadly applicable across ecological environments, conservation objectives, types of statistical predictive models and levels of biological organization. By focusing on magnitudes of change that have practical significance, and by using the span of confidence intervals to incorporate uncertainty of predicted changes, the method can be used to help improve the effectiveness of conservation efforts.
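A minimal sketch of the confidence-interval comparison is given below with hypothetical numbers (not the Chihuahuan Desert data): the predicted change in richness for a specified change in a shrubland metric is computed from a fitted linear model together with its confidence interval, and the change is treated as conservation-relevant only if the entire interval lies beyond the preselected trigger. The relevance rule here is one reasonable reading of the approach, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def predicted_change_ci(x, y, delta_x, alpha=0.05):
    """Predicted change in y for a change delta_x in x, with a
    (1 - alpha) confidence interval based on the slope of a simple
    linear regression."""
    res = stats.linregress(x, y)
    change = res.slope * delta_x
    t = stats.t.ppf(1 - alpha / 2, len(x) - 2)
    half_width = t * res.stderr * abs(delta_x)
    return change, (change - half_width, change + half_width)

rng = np.random.default_rng(3)
shrub_cover = rng.uniform(10, 60, 40)                     # percent cover (hypothetical)
richness = 18 - 0.12 * shrub_cover + rng.normal(0, 1.5, 40)
change, (lo, hi) = predicted_change_ci(shrub_cover, richness, delta_x=20)  # 20% cover change
trigger = 0.10 * richness.mean()                          # e.g. a 10% change in mean richness
relevant = min(abs(lo), abs(hi)) >= trigger and lo * hi > 0
print(f"predicted change {change:.2f} (CI {lo:.2f} to {hi:.2f}); relevant: {relevant}")
```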
Ventilatory thresholds determined from HRV: comparison of 2 methods in obese adolescents.
Quinart, S; Mourot, L; Nègre, V; Simon-Rigaud, M-L; Nicolet-Guénat, M; Bertrand, A-M; Meneveau, N; Mougin, F
2014-03-01
The development of personalised training programmes is crucial in the management of obesity. We evaluated the ability of 2 heart rate variability analyses to determine ventilatory thresholds (VT) in obese adolescents. 20 adolescents (mean age 14.3±1.6 years and body mass index z-score 4.2±0.1) performed an incremental test to exhaustion before and after a 9-month multidisciplinary management programme. The first (VT1) and second (VT2) ventilatory thresholds were identified by the reference method (gas exchanges). We recorded RR intervals to estimate VT1 and VT2 from heart rate variability using time-domain analysis and time-varying spectral-domain analysis. The correlation coefficients between thresholds were higher with spectral-domain analysis than with time-domain analysis (heart rate at VT1: r=0.91 vs r=0.66; heart rate at VT2: r=0.91 vs r=0.66; power at VT1: r=0.91 vs r=0.74; power at VT2: r=0.93 vs r=0.78; spectral-domain vs time-domain analysis, respectively). No systematic bias in heart rate at VT1 and VT2 was found, and standard deviations were <6 bpm, confirming that spectral-domain analysis could replace the reference method for the detection of ventilatory thresholds. Furthermore, this technique is sensitive to rehabilitation and re-training, which underlines its utility in clinical practice. This inexpensive and non-invasive tool is promising for prescribing physical activity programs in obese adolescents. © Georg Thieme Verlag KG Stuttgart · New York.
Sperling, Milena P R; Simões, Rodrigo P; Caruso, Flávia C R; Mendes, Renata G; Arena, Ross; Borghi-Silva, Audrey
2016-01-01
Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter for determining intensity in dynamic exercise. To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of the One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and the time domain (rMSSD and RMSM), and blood lactate, were determined at rest and during PRE. Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed a consistent agreement between the blood lactate threshold (LT) and the rMSSD threshold (rMSSDT), and between LT and the SD1 threshold (SD1T). Relative loads at LT, rMSSDT and SD1T did not differ (29±5% vs 28±5% vs 29±5% of 1-RM, respectively). HRV during PRE could be a feasible noninvasive method of determining AT in CAD patients to plan intensities during cardiac rehabilitation.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
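The censoring analysis can be reproduced in outline with a few lines of code: threshold estimates below the criterion are clipped to the criterion before pointwise linear regression of sensitivity on time. This is a schematic sketch with simulated values, not the study's actual analysis pipeline.

```python
import numpy as np
from scipy import stats

def censor(thresholds_db, criterion=20.0):
    """Set all threshold estimates below the criterion to the criterion."""
    return np.maximum(thresholds_db, criterion)

def pointwise_slope(years, series_db):
    """Slope (dB/year) and p-value from linear regression at one test location."""
    res = stats.linregress(years, series_db)
    return res.slope, res.pvalue

years = np.arange(0, 4.5, 0.5)                             # 6-monthly visits over 4 years
rng = np.random.default_rng(4)
raw = 27 - 2.0 * years + rng.normal(0, 2.0, years.size)    # a progressing location
for label, data in (("raw", raw), ("censored at 20 dB", censor(raw))):
    slope, p = pointwise_slope(years, data)
    print(f"{label}: slope = {slope:.2f} dB/yr, p = {p:.3f}")
```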
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a ... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small ... ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not
Lepais, Olivier; Manicki, Aurélie; Glise, Stéphane; Buoro, Mathieu; Bardonnet, Agnès
2017-01-01
Alternative mating tactics have important ecological and evolutionary implications and are determined by complex interactions between environmental and genetic factors. Here, we study the genetic effect and architecture of the variability in reproductive tactics among Atlantic salmon males which can either mature sexually early in life in freshwater or more commonly only after completing a migration at sea. We applied the latent environmental threshold model (LETM), which provides a conceptual framework linking individual status to a threshold controlling the decision to develop alternative traits, in an innovative experimental design using a semi-natural river which allowed for ecologically relevant phenotypic expression. Early male parr maturation rates varied greatly across families (10 to 93%) which translated into 90% [64–100%] of the phenotypic variation explained by genetic variation. Three significant QTLs were found for the maturation status, however only one collocated with a highly significant QTL explaining 20.6% of the variability of the maturation threshold located on chromosome 25 and encompassing a locus previously shown to be linked to sea age at maturity in anadromous Atlantic salmon. These results provide new empirical illustration of the relevance of the LETM for a better understanding of alternative mating tactics evolution in natural populations. PMID:28281522
Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-interim reanalysis for the period 1997 - 2008. This allows to investigate the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two test validation datasets, one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
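One of the three diagnostics, pairwise joint threshold exceedance as a function of station separation, can be sketched as follows; the station coordinates, data, and the comparison against the independence level are illustrative assumptions rather than the VALUE implementation.

```python
import numpy as np

def joint_exceedance_vs_distance(coords_km, series, threshold):
    """Return arrays of pairwise distances and joint exceedance
    probabilities P(x_i > q and x_j > q) over all station pairs."""
    exceed = series > threshold                       # shape (time, stations)
    n = coords_km.shape[0]
    dists, probs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords_km[i] - coords_km[j]))
            probs.append(np.mean(exceed[:, i] & exceed[:, j]))
    return np.array(dists), np.array(probs)

rng = np.random.default_rng(5)
coords = rng.uniform(0, 500, (30, 2))                 # 30 stations in a 500 km domain
base = rng.normal(size=(3000, 1))                     # shared large-scale signal
series = 0.7 * base + 0.7 * rng.normal(size=(3000, 30))
d, p = joint_exceedance_vs_distance(coords, series, threshold=1.0)
p_indep = np.mean(series > 1.0) ** 2                  # joint probability under independence
print(f"{len(d)} station pairs; mean joint exceedance {p.mean():.3f} vs independence {p_indep:.3f}")
```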
Hearing threshold shifts among military pilots of the Israeli Air Force.
Kampel-Furman, Liyona; Joachims, Z; Bar-Cohen, H; Grossman, A; Frenkel-Nir, Y; Shapira, Y; Alon, E; Carmon, E; Gordon, B
2018-02-01
Military aviators are potentially at risk for developing noise-induced hearing loss. Whether ambient aircraft noise exposure causes hearing deficit beyond the changes attributed to natural ageing is debated. The aim of this research was to assess changes in hearing thresholds of Israeli Air Force (IAF) pilots over 20 years of military service and identify potential risk factors for hearing loss. A retrospective cohort analysis was conducted of pure-tone air conduction audiograms of pilots, from their recruitment at 18 years of age until the last documented medical check-up. Mean hearing thresholds were analysed in relation to age, total flight hours and aircraft platform. Comparisons were made to the hearing thresholds of air traffic controllers (ATCs) who were not exposed to the noise generated by aircraft while on duty. One hundred and sixty-three pilots were included, with flying platforms comprising fighter jets (n=54), combat helicopters (n=27), transport helicopters (n=52) and transport aircraft (n=30). These were compared with the results from 17 ATCs. A marked notch in the frequency range of 4-6 kHz was demonstrated in the mean audiograms of pilots of all platforms, progressing with ageing. Hearing threshold shifts in relation to measurements at recruitment were first noted at the age of 30 years, particularly at 4 kHz (mean shift of 2.97 dB, p=0.001). There was no statistical association between flying variables and hearing thresholds adjusted for age by logistic regression analysis. The audiometric profile of IAF pilots has a pattern compatible with noise exposure, as reflected by the characteristic noise notch. However, no flight variable was associated with deterioration of hearing thresholds, and no significant difference from non-flying controls (ATCs) was seen. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Bierer, Julie Arenberg; Faulkner, Kathleen F.
2010-01-01
Objectives: The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar electrode configuration are predictive of wide or tip-shifted psychophysical tuning curves. Design: Data were collected from five cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked psychophysical tuning curves were obtained for channels with the highest, lowest, and median tripolar (σ=1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (σ=0) or a more focused partial tripolar (σ ≥ 0.55) configuration. The masker channel and level were varied while the configuration was fixed to σ = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Results: Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, σ, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, partial tripolar stimulus, had significantly broader psychophysical tuning curves than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for the monopolar probes than with partial tripolar probes, for both the highest and lowest threshold channels. Conclusions: These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception. PMID:20090533
Development of a novel virtual reality gait intervention.
Boone, Anna E; Foreman, Matthew H; Engsberg, Jack R
2017-02-01
Improving gait speed and kinematics can be a time consuming and tiresome process. We hypothesize that incorporating virtual reality videogame play into variable improvement goals will improve levels of enjoyment and motivation and lead to improved gait performance. To develop a feasible, engaging, VR gait intervention for improving gait variables. Completing this investigation involved four steps: 1) identify gait variables that could be manipulated to improve gait speed and kinematics using the Microsoft Kinect and free software, 2) identify free internet videogames that could successfully manipulate the chosen gait variables, 3) experimentally evaluate the ability of the videogames and software to manipulate the gait variables, and 4) evaluate the enjoyment and motivation from a small sample of persons without disability. The Kinect sensor was able to detect stride length, cadence, and joint angles. FAAST software was able to identify predetermined gait variable thresholds and use the thresholds to play free online videogames. Videogames that involved continuous pressing of a keyboard key were found to be most appropriate for manipulating the gait variables. Five participants without disability evaluated the effectiveness for modifying the gait variables and enjoyment and motivation during play. Participants were able to modify gait variables to permit successful videogame play. Motivation and enjoyment were high. A clinically feasible and engaging virtual intervention for improving gait speed and kinematics has been developed and initially tested. It may provide an engaging avenue for achieving thousands of repetitions necessary for neural plastic changes and improved gait. Copyright © 2016 Elsevier B.V. All rights reserved.
Technique for ship/wake detection
Roskovensky, John K [Albuquerque, NM]
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
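To make the processing chain described above concrete, here is a minimal Python sketch (not the patented implementation): it masks spectrally flat pixels as cloud/land, unmasks pixels whose neighbourhood is mostly unmasked, builds a spatial-variability image from reflectance gradients, and thresholds it. The band layout, the flatness metric and the numeric thresholds are illustrative assumptions only.

import numpy as np

def detect_ship_candidates(reflectance, flatness_tol=0.02, variability_thresh=0.05):
    """reflectance: array of shape (bands, rows, cols) holding reflectance values."""
    # Spectrally "flat" pixels (similar reflectance across bands) are treated
    # as cloud/land and masked (an assumed stand-in for the patent's flatness test).
    flatness = reflectance.std(axis=0)
    mask = flatness < flatness_tol            # True = masked (cloud/land)

    # Unmask a pixel when fewer than 4 of its 8 neighbours are masked
    # (a crude stand-in for the "threshold number of localized pixels" rule).
    padded = np.pad(mask, 1, constant_values=True)
    rows, cols = mask.shape
    neighbours_masked = sum(
        padded[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    mask &= neighbours_masked >= 4

    # Spatial-variability image from spatial derivatives of one reference band.
    gy, gx = np.gradient(reflectance[0])
    variability = np.hypot(gx, gy)
    variability[mask] = 0.0

    return variability > variability_thresh   # candidate ship-detection pixels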
Ourso, R.T.; Frenzel, S.A.
2003-01-01
We examined biotic and physiochemical responses in urbanized Anchorage, Alaska, to the percent of impervious area within stream basins, as determined by high-resolution IKONOS satellite imagery and aerial photography. Eighteen of the 86 variables examined, including riparian and instream habitat, macroinvertebrate communities, and water/sediment chemistry, were significantly correlated with percent impervious area. Variables related to channel condition, instream substrate, water chemistry, and residential and transportation right-of-way land uses were identified by principal components analysis as significant factors separating site groups. Detrended canonical correspondence analysis indicated that the macroinvertebrate communities responded to an urbanization gradient closely paralleling the percent of impervious area within the subbasin. A sliding regression analysis of variables significantly correlated with percent impervious area revealed 8 variables exhibiting threshold responses that correspond to a mean of 4.4-5.8% impervious area, much lower than mean values reported in other, similar investigations. As contributing factors to a subbasin's impervious area, storm drains and roads appeared to be important elements influencing the degradation of water quality with respect to the biota.
The respiration pattern as an indicator of the anaerobic threshold.
Mirmohamadsadeghi, Leila; Vesin, Jean-Marc; Lemay, Mathieu; Deriaz, Olivier
2015-08-01
The anaerobic threshold (AT) is a good index of personal endurance but requires a laboratory setting to be determined. It is important to develop easy AT field measurement techniques in order to rapidly adapt training programs. In the present study, it is postulated that the variability of the respiratory parameters decreases with exercise intensity (especially at the AT level). The aim of this work was to assess, in healthy trained subjects, the putative relationships between the variability of some respiration parameters and the AT. The heart rate and respiratory variables (volume, rate) were measured during an incremental exercise test performed on a treadmill by healthy moderately trained subjects. Results show a decrease in the variance of 1/tidal volume with the intensity of exercise. Consequently, the cumulated variance (sum of the variance measured at each level of the exercise) follows an exponential relationship with respect to intensity, eventually reaching a plateau. The amplitude of this plateau is closely related to the AT (r=-0.8). It is concluded that the AT is related to the variability of respiration.
NASA Astrophysics Data System (ADS)
Van Tiel, Marit; Van Loon, Anne; Wanders, Niko; Vis, Marc; Teuling, Ryan; Stahl, Kerstin
2017-04-01
In glacierized catchments, snowpack and glaciers function as an important storage of water, and hydrographs of highly glacierized catchments in mid- and high latitudes thus show a clear seasonality with low flows in winter and high flows in summer. Due to ongoing climate change we expect this type of storage capacity to decrease, with resultant consequences for the discharge regime. In this study we focus on streamflow droughts, here defined as below-average water availability specifically in the high-flow season, and on which methods are most suitable to characterize future streamflow droughts as regimes change. Two glacierized catchments, Nigardsbreen (Norway) and Wolverine (Alaska), are used as case studies, and streamflow droughts are compared between two periods, 1975-2004 and 2071-2100. Streamflow is simulated with the HBV light model, calibrated on observed discharge and seasonal glacier mass balances, for two climate change scenarios (RCP 4.5 & RCP 8.5). In studies on future streamflow drought, the same variable threshold derived from the past has often been applied to the future, but in regions where a regime shift is expected this method gives severe "droughts" in the historic high-flow period. We applied the new alternative transient variable threshold, a threshold that adapts to the changing hydrological regime and is thus better able to cope with this issue, but which has never been thoroughly tested in glacierized catchments. As the glacier area representation in the hydrological modelling can also influence the modelled discharge and the derived streamflow droughts, we evaluated in this study both the difference between the historical variable threshold (HVT) and the transient variable threshold (TVT) and two different glacier area conceptualisations (constant area (C) and dynamical area (D)), resulting in four scenarios: HVT-C, HVT-D, TVT-C and TVT-D. Results show a drastic decrease in the number of droughts in the HVT-C scenario due to increased glacier melt. The deficit volume is expected to be up to almost eight times larger in the future compared to the historical period (Wolverine, +674%) in the HVT-D scenario, caused by the regime shift. Using the TVT, the drought characteristics between the C and D scenarios and between future and historic droughts are more similar. However, when using the TVT, the causal factors of future droughts (anomalies in temperature and/or precipitation) can be analysed. This study highlights the different conclusions that may be drawn on future streamflow droughts in glacierized catchments depending on methodological choices. The different approaches could be used to answer different questions: the TVT for analysing drought processes in the future, the HVT to assess changes between historical and future periods, the constant area conceptualisation to analyse the effect of short-term climate variability, and the dynamical glacier area to model realistic future discharges in glacierized catchments.
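A minimal Python sketch of the two threshold variants contrasted above, assuming the "variable threshold" is a day-of-year flow percentile (the 20th percentile here, a common but not universal choice) and that flow is a daily pandas Series indexed by date. The reference period, window length and percentile are illustrative and not those of the original model chain.

import pandas as pd

Q = 0.20  # assumed exceedance level defining "below average" water availability

def historical_variable_threshold(flow, ref=("1975", "2004")):
    """One threshold per day of year, fixed from the historical reference period (HVT)."""
    ref_flow = flow.loc[ref[0]:ref[1]]
    doy_thr = ref_flow.groupby(ref_flow.index.dayofyear).quantile(Q)
    doy = pd.Series(flow.index.dayofyear, index=flow.index)
    return doy.map(doy_thr)

def transient_variable_threshold(flow, window_years=30):
    """Day-of-year threshold recomputed from a moving window, so it follows the changing regime (TVT)."""
    out = pd.Series(index=flow.index, dtype=float)
    doy = pd.Series(flow.index.dayofyear, index=flow.index)
    half = window_years // 2
    for year in sorted(set(flow.index.year)):
        win = flow[(flow.index.year >= year - half) & (flow.index.year <= year + half)]
        doy_thr = win.groupby(win.index.dayofyear).quantile(Q)
        sel = flow.index.year == year
        out[sel] = doy[sel].map(doy_thr).values
    return out

def drought_days(flow, threshold):
    return flow < threshold   # boolean Series; contiguous True runs form drought events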
Assessing the detection capability of a dense infrasound network in the southern Korean Peninsula
NASA Astrophysics Data System (ADS)
Che, Il-Young; Le Pichon, Alexis; Kim, Kwangsu; Shin, In-Cheol
2017-08-01
The Korea Infrasound Network (KIN) is a dense seismoacoustic array network consisting of eight small-aperture arrays with an average interarray spacing of ∼100 km. The processing of the KIN historical recordings over 10 yr in the 0.05-5 Hz frequency band shows that the dominant sources of signals are microbaroms and human activities. The number of detections correlates well with the seasonal and daily variability of the stratospheric wind dynamics. The quantification of the spatiotemporal variability of the KIN detection performance is simulated using a frequency-dependent semi-empirical propagation modelling technique. The average detection thresholds predicted for the region of interest by using both the KIN arrays and the International Monitoring System (IMS) infrasound station network at a given frequency of 1.6 Hz are estimated to be 5.6 and 10.0 Pa for two- and three-station coverage, respectively, which was about three times lower than the thresholds predicted by using only the IMS stations. The network performance is significantly enhanced from May to August, with detection thresholds being one order of magnitude lower than the rest of the year due to prevailing steady stratospheric winds. To validate the simulations, the amplitudes of ground-truth repeated surface mining explosions at an open-pit limestone mine were measured over a 19-month period. Focusing on the spatiotemporal variability of the stratospheric winds which control to first order where infrasound signals are expected to be detected, the predicted detectable signal amplitude at the mine and the detection capability at one KIN array located at a distance of 175 km are found to be in good agreement with the observations from the measurement campaign. The detection threshold in summer is ∼2 Pa and increases up to ∼300 Pa in winter. Compared with the low and stable thresholds in summer, the high temporal variability of the KIN performance is well predicted throughout the year. Simulations show that the performance of the global infrasound network of the IMS is significantly improved by adding KIN. This study shows the usefulness of dense regional networks to enhance detection capability in regions of interest in the context of future verification of the Comprehensive Nuclear-Test-Ban Treaty.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae of Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation and to reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on tent mapping. The fast search for two optimal thresholds was achieved using the improved bee colony optimization algorithm, so the search was accelerated considerably. A large number of experimental results show that, compared with existing segmentation methods such as the multi-threshold segmentation method using maximum Shannon entropy, the two-dimensional Shannon entropy segmentation method, the two-dimensional Tsallis gray entropy segmentation method and the multi-threshold segmentation method using reciprocal gray entropy, the proposed method can segment targets more quickly and accurately with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
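The core of the dual-threshold selection can be sketched in a few lines of Python. The sketch below uses Shannon entropy as a stand-in for the Arimoto entropy criterion and a brute-force search in place of the recursive computation and chaotic bee colony optimizer; it is meant only to show the structure of the (t1, t2) search over a grey-level histogram, not to reproduce the paper's method.

import numpy as np

def class_entropy(p):
    p = p[p > 0]
    if p.size == 0:
        return 0.0
    p = p / p.sum()                 # normalise within the class
    return float(-np.sum(p * np.log(p)))

def dual_threshold(image, levels=256):
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best, best_pair = -np.inf, (1, 2)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            h = class_entropy(p[:t1]) + class_entropy(p[t1:t2]) + class_entropy(p[t2:])
            if h > best:
                best, best_pair = h, (t1, t2)
    return best_pair                # (t1, t2) splitting the image into three classes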
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
NASA Technical Reports Server (NTRS)
Pankine, A. A.; Ingersoll, Andrew P.
2002-01-01
We present simulations of the interannual variability of martian global dust storms (GDSs) with a simplified low-order model (LOM) of the general circulation. The simplified model allows one to conduct computationally fast long-term simulations of the martian climate system. The LOM is constructed by Galerkin projection of a 2D (zonally averaged) general circulation model (GCM) onto a truncated set of basis functions. The resulting LOM consists of 12 coupled nonlinear ordinary differential equations describing atmospheric dynamics and dust transport within the Hadley cell. The forcing of the model is described by simplified physics based on Newtonian cooling and Rayleigh friction. The atmosphere and surface are coupled: atmospheric heating depends on the dustiness of the atmosphere, and the surface dust source depends on the strength of the atmospheric winds. Parameters of the model are tuned to fit the output of the NASA AMES GCM and the fit is generally very good. Interannual variability of GDSs is possible in the LOM, but only when stochastic forcing is added to the model. The stochastic forcing could be provided by transient weather systems or some surface process such as redistribution of the sand particles in storm generating zones on the surface. The results are sensitive to the value of the saltation threshold, which hints at a possible feedback between saltation threshold and dust storm activity. According to this hypothesis, erodable material builds up as a result of a local process, whose effect is to lower the saltation threshold until a GDS occurs. The saltation threshold adjusts its value so that dust storms are barely able to occur.
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Objectives Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Design Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Results Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. Conclusions The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors non-specific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs. PMID:27787393
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL) for each rule by integrating co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions for a rule are found to be true, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
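The per-rule filtering step can be illustrated with a hedged Python sketch. The mapping from network distance to the three cut-offs and the field names are hypothetical placeholders, not the published DVS/DVC/DVL formulae; the point is only that each rule is compared against its own thresholds rather than a single global minimum support.

from dataclasses import dataclass

@dataclass
class Rule:
    gene_a: str
    gene_b: str
    support: float
    confidence: float
    lift: float

def dynamic_cutoffs(distance, base_support=0.2, base_confidence=0.6, base_lift=1.0):
    """Relax the cut-offs as the weighted shortest distance between the genes grows (assumed form)."""
    scale = 1.0 / (1.0 + distance)
    return base_support * scale, base_confidence * scale, base_lift * scale

def filter_rules(rules, shortest_distance):
    """shortest_distance: dict mapping (gene_a, gene_b) to weighted shortest path length."""
    resultant = []
    for r in rules:
        d = shortest_distance.get((r.gene_a, r.gene_b), float("inf"))
        dvs, dvc, dvl = dynamic_cutoffs(d)
        if r.support >= dvs and r.confidence >= dvc and r.lift >= dvl:
            resultant.append(r)     # rule passes all three of its own thresholds
    return resultant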
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smirnova, N.P.
A total exposure to x radiation induced disturbances of the central blood circulation apparatus. The disturbances were evaluated by the response of the hypothalamic region to irritation. An increased or changed reaction without variability in the excitation threshold is characteristic of hypothalamic effects of a neurohumoral nature (blood circulation and cutaneous vessel reaction). Considerable changes in irritation thresholds were found. (R.V.J.)
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, D.
2017-12-01
Emergency responses to floods require timely information on water extents that can be produced by satellite-based remote sensing. As SAR images can be acquired in adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water and dry land without considering the complexity and variability of different dry land surface types in an image. This paper proposes a new thresholding method for SAR imagery to delineate water from different land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is regarded as the threshold to classify the two. To extract water, a set of thresholds is applied to several pairs of land cover types—water and urban or water and forest. The resulting subsets are merged to form the water extent for the SAR image during or after the flooding. Experiments show that this land-cover-based thresholding approach outperformed traditional single thresholding by about 5% to 15%. The method has great application potential, given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
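A minimal sketch of the per-class thresholding idea, assuming Gaussian-distributed backscatter per land cover class and using SciPy root finding for the density intersection; these are illustrative modelling choices, not necessarily those made in the paper.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def intersection_threshold(water_db, land_db):
    """water_db, land_db: 1-D arrays of backscatter (dB) sampled from each class before the flood."""
    mu_w, sd_w = water_db.mean(), water_db.std()
    mu_l, sd_l = land_db.mean(), land_db.std()
    # Root of pdf_water(x) - pdf_land(x) between the two class means.
    f = lambda x: norm.pdf(x, mu_w, sd_w) - norm.pdf(x, mu_l, sd_l)
    return brentq(f, min(mu_w, mu_l), max(mu_w, mu_l))

def classify_water(flood_db, land_cover, class_samples):
    """Apply a class-specific threshold to each dry-land type, then merge the water masks."""
    water = np.zeros(flood_db.shape, dtype=bool)
    for cls, land_db in class_samples.items():            # e.g. {"water": ..., "urban": ..., "forest": ...}
        if cls == "water":
            continue
        thr = intersection_threshold(class_samples["water"], land_db)
        water |= (land_cover == cls) & (flood_db < thr)   # open water is darker (lower backscatter)
    return water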
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months capturing this potential power is severely impacted by the meteorological conditions, in the form of icing. Predicting the expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, determination of wind turbine site locations and long-term energy estimates which are used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.
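The threshold-based loss estimate can be sketched as follows, under the assumption that an hour counts as icing when temperature is at or below freezing and relative humidity exceeds the candidate threshold, and that the power expected in icing hours is counted as lost. The variable names, grid search and error metric are illustrative, not the study's exact procedure.

import numpy as np

def predicted_loss_fraction(temp_c, rh, expected_power, rh_threshold, t_threshold=0.0):
    """Hourly arrays for one October-March season; returns the predicted lost fraction of energy."""
    icing = (temp_c <= t_threshold) & (rh >= rh_threshold)
    return expected_power[icing].sum() / expected_power.sum()

def optimise_rh_threshold(seasons, observed_loss, candidates=np.arange(70.0, 100.5, 0.5)):
    """seasons: list of (temp_c, rh, expected_power) tuples; observed_loss: observed lost fractions."""
    errors = []
    for rh_thr in candidates:
        pred = [predicted_loss_fraction(t, rh, p, rh_thr) for t, rh, p in seasons]
        errors.append(np.mean(np.abs(np.array(pred) - np.array(observed_loss))))
    return candidates[int(np.argmin(errors))]    # RH threshold minimising the prediction error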
NASA Astrophysics Data System (ADS)
Lazzaro, G.; Soulsby, C.; Tetzlaff, D.; Botter, G.
2017-03-01
Atlantic salmon is an economically and ecologically important fish species, whose survival is dependent on successful spawning in headwater rivers. Streamflow dynamics often have a strong control on spawning because fish require sufficiently high discharges to move upriver and enter spawning streams. However, these streamflow effects are modulated by biological factors such as the number and the timing of returning fish in relation to the annual spawning window in the fall/winter. In this paper, we develop and apply a novel probabilistic approach to quantify these interactions using a parsimonious outflux-influx model linking the number of female salmon emigrating (i.e., outflux) and returning (i.e., influx) to a spawning stream in Scotland. The model explicitly accounts for the interannual variability of the hydrologic regime and the hydrological connectivity of spawning streams to main rivers. Model results are evaluated against a detailed long-term (40 years) hydroecological data set that includes annual fluxes of salmon, allowing us to explicitly assess the role of discharge variability. The satisfactory model results show quantitatively that hydrologic variability contributes to the observed dynamics of salmon returns, with a good correlation between the positive (negative) peaks in the immigration data set and the exceedance (nonexceedance) probability of a threshold flow (0.3 m3/s). Importantly, model performance deteriorates when the interannual variability of flow regime is disregarded. The analysis suggests that flow thresholds and hydrological connectivity for spawning return represent a quantifiable and predictable feature of salmon rivers, which may be helpful in decision making where flow regimes are altered by water abstractions.
Colour thresholding and objective quantification in bioimaging
NASA Technical Reports Server (NTRS)
Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.
1992-01-01
Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
Garrusi, Behshid; Baneshi, Mohammad Reza
2013-01-01
Background: Many sociocultural variables could affect eating disorders in Asian countries. In Iran, there is little research regarding eating disorders and their contributing factors. The aim of this study is to explore the frequency of eating disorders and their risk factors in an Iranian population. Materials and Methods: A total of 1204 participants aged between 14 and 55 years were selected. The frequency of eating disorders and the effects of variables such as demographic characteristics, Body Mass Index (BMI), use of media, body dissatisfaction, self-esteem, social comparison and social pressure for thinness in individuals with and without eating disorders were assessed. Findings: The prevalence of eating disorders was 11.5%, which included 0.8% anorexia nervosa, 6.2% full-threshold bulimia nervosa, 1.4% subthreshold anorexia nervosa and 30% subthreshold binge eating disorder. Symptoms of bulimic syndrome were greater in males. Conclusion: In Iran, eating disorders and related problems are a new issue that should be taken seriously. The identification of these disorders and their contributing factors is a prerequisite for planning management and preventive programs. PMID:23283053
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss increases the level of the sound that is just audible to a listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help estimate hearing thresholds and assess cochlear health.
DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg
2016-06-01
Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.
Sperling, Milena P. R.; Simões, Rodrigo P.; Caruso, Flávia C. R.; Mendes, Renata G.; Arena, Ross; Borghi-Silva, Audrey
2016-01-01
Background Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter to determine intensity in dynamic exercise. Objectives To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Method Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and time domain (rMSSD and RMSM), and blood lactate were determined at rest and during PRE. Results Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed a consistent agreement between blood lactate threshold (LT) and rMSSD threshold (rMSSDT) and between LT and SD1 threshold (SD1T). Relative values of 1-RM at LT, rMSSDT and SD1T did not differ (29 ± 5%, 28 ± 5% and 29 ± 5%, respectively). Conclusion HRV during PRE could be a feasible noninvasive method of determining AT in CAD patients to plan intensities during cardiac rehabilitation. PMID:27556384
Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C
2018-04-01
Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB, Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Both chemical markers presented no linear relationship with FIB due to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between both classes of indicators. Logistic regression was performed with all data and separately for two sampling seasons, using 800 and 100 MPN 100 mL-1 of E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g-1 SPM. The range of threshold values of coprostanol for SPM is relatively higher and more variable than those suggested in the literature for sediments (0.10-0.50 μg g-1), probably due to the higher concentration of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to tropical areas, but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
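The logistic-regression threshold derivation can be sketched in Python: fit the probability that a sample exceeds the microbiological limit as a function of log coprostanol, and report the concentration at which that probability reaches 0.5. The use of scikit-learn, the log transform and the 0.5 cut-point are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

def coprostanol_threshold(coprostanol_ug_g, fib_mpn, fib_limit=800.0):
    """coprostanol_ug_g, fib_mpn: paired measurements; fib_limit: e.g. 800 MPN 100 mL-1 for E. coli."""
    x = np.log10(coprostanol_ug_g).reshape(-1, 1)
    y = (fib_mpn > fib_limit).astype(int)        # 1 = contaminated by the FIB criterion
    model = LogisticRegression().fit(x, y)
    # p = 0.5 where the linear predictor is zero: intercept + coef * log10(c) = 0
    log10_c = -model.intercept_[0] / model.coef_[0, 0]
    return 10 ** log10_c                         # threshold concentration in ug g-1 SPM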
Klass, Malgorzata; Duchateau, Jacques; Enoka, Roger M.
2012-01-01
The purpose of this study was to record the discharge characteristics of tibialis anterior motor units over a range of target forces and to import these data, along with previously reported observations, into a computational model to compare experimental and simulated measures of torque variability during isometric contractions with the dorsiflexor muscles. The discharge characteristics of 44 motor units were quantified during brief isometric contractions at torques that ranged from recruitment threshold to an average of 22 ± 14.4% maximal voluntary contraction (MVC) torque above recruitment threshold. The minimal [range: 5.8–19.8 pulses per second (pps)] and peak (range: 8.6–37.5 pps) discharge rates of motor units were positively related to the recruitment threshold torque (R2 ≥ 0.266; P < 0.001). The coefficient of variation for interspike interval at recruitment was positively associated with recruitment threshold torque (R2 = 0.443; P < 0.001) and either decreased exponentially or remained constant as target torque increased above recruitment threshold torque. The variability in the simulated torque did not differ from the experimental values once the recruitment range was set to ∼85% MVC torque, and the association between motor twitch contraction times and peak twitch torque was defined as a weak linear association (R2 = 0.096; P < 0.001). These results indicate that the steadiness of isometric contractions performed with the dorsiflexor muscle depended more on the distributions of mechanical properties than discharge properties across the population of motor units in the tibialis anterior. PMID:22442023
NASA Astrophysics Data System (ADS)
Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo
2017-06-01
Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
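The censoring analysis is simple to express in code. A minimal numpy sketch, assuming a per-visit mean-deviation surrogate computed as the average difference from age-normal values (the clinical MD index is weighted, so this is a simplification): thresholds below the criterion are raised to the criterion and the progression rate is the slope of MD against follow-up time.

import numpy as np

def censor(thresholds_db, criterion=20.0):
    return np.maximum(thresholds_db, criterion)   # raise estimates below the criterion to the criterion

def md_slope(visit_years, visit_thresholds, normal_values, criterion=None):
    """visit_thresholds: array of shape (n_visits, n_locations) in dB; visit_years: follow-up times."""
    fields = np.asarray(visit_thresholds, dtype=float)
    if criterion is not None:
        fields = censor(fields, criterion)
    md = (fields - normal_values).mean(axis=1)    # per-visit mean-deviation surrogate
    slope, _ = np.polyfit(visit_years, md, 1)     # dB/year; negative slope indicates progression
    return slope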
Jesunathadas, Mark; Klass, Malgorzata; Duchateau, Jacques; Enoka, Roger M
2012-06-01
The purpose of this study was to record the discharge characteristics of tibialis anterior motor units over a range of target forces and to import these data, along with previously reported observations, into a computational model to compare experimental and simulated measures of torque variability during isometric contractions with the dorsiflexor muscles. The discharge characteristics of 44 motor units were quantified during brief isometric contractions at torques that ranged from recruitment threshold to an average of 22 ± 14.4% maximal voluntary contraction (MVC) torque above recruitment threshold. The minimal [range: 5.8-19.8 pulses per second (pps)] and peak (range: 8.6-37.5 pps) discharge rates of motor units were positively related to the recruitment threshold torque (R(2) ≥ 0.266; P < 0.001). The coefficient of variation for interspike interval at recruitment was positively associated with recruitment threshold torque (R(2) = 0.443; P < 0.001) and either decreased exponentially or remained constant as target torque increased above recruitment threshold torque. The variability in the simulated torque did not differ from the experimental values once the recruitment range was set to ∼85% MVC torque, and the association between motor twitch contraction times and peak twitch torque was defined as a weak linear association (R(2) = 0.096; P < 0.001). These results indicate that the steadiness of isometric contractions performed with the dorsiflexor muscle depended more on the distributions of mechanical properties than discharge properties across the population of motor units in the tibialis anterior.
Source accuracy data reveal the thresholded nature of human episodic memory.
Harlow, Iain M; Donaldson, David I
2013-04-01
Episodic recollection supports conscious retrieval of past events. It is unknown why recollected memories are often vivid, but at other times we struggle to remember. Such experiences might reflect a recollection threshold: Either the threshold is exceeded and information is retrieved, or recollection fails completely. Alternatively, retrieval failure could reflect weak memory: Recollection could behave as a continuous signal, always yielding some variable degree of information. Here we reconcile these views, using a novel source memory task that measures retrieval accuracy directly. We show that recollection is thresholded, such that retrieval sometimes simply fails. Our technique clarifies a fundamental property of memory and allows responses to be accurately measured, without recourse to subjective introspection. These findings raise new questions about how successful retrieval is determined and why it declines with age and disease.
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
Brennan, Catherine E; Blanchard, Hannah; Fennel, Katja
2016-01-01
We conducted a literature review of reported temperature, salinity, pH, depth and oxygen preferences and thresholds of important marine species found in the Gulf of St. Lawrence and Scotian Shelf region. We classified 54 identified fishes and macroinvertebrates as important either because they support a commercial fishery, have threatened or at risk status, or meet one of the following criteria: bycatch, baitfish, invasive, vagrant, important for ecosystem energy transfer, or predators or prey of the above species. The compiled data allow an assessment of species-level impacts including physiological stress and mortality given predictions of future ocean physical and biogeochemical conditions. If an observed, multi-decadal oxygen trend on the central Scotian Shelf continues, a number of species will lose favorable oxygen conditions, experience oxygen-stress, or disappear due to insufficient oxygen in the coming half-century. Projected regional trends and natural variability are both large, and natural variability will act to alternately amplify and dampen anthropogenic changes. When estimates of variability are included with the trend, species encounter unfavourable oxygen conditions decades sooner. Finally, temperature and oxygen thresholds of adult Atlantic wolffish (Anarhichas lupus) and adult Atlantic cod (Gadus morhua) are assessed in the context of a potential future scenario derived from high-resolution ocean models for the central Scotian Shelf. PMID:27997536
Uncertainty evaluation of a regional real-time system for rain-induced landslides
NASA Astrophysics Data System (ADS)
Kirschbaum, Dalia; Stanley, Thomas; Yatheendradas, Soni
2015-04-01
A new prototype regional model and evaluation framework has been developed over Central America and the Caribbean region using satellite-based information including precipitation estimates, modeled soil moisture, topography, soils, as well as regionally available datasets such as road networks and distance to fault zones. The algorithm framework incorporates three static variables: a susceptibility map; a 24-hr rainfall triggering threshold; and an antecedent soil moisture variable threshold, which have been calibrated using historic landslide events. The thresholds are regionally heterogeneous and are based on the percentile distribution of the rainfall or antecedent moisture time series. A simple decision tree algorithm framework integrates all three variables with the rainfall and soil moisture time series and generates a landslide nowcast in real-time based on the previous 24 hours over this region. This system has been evaluated using several available landslide inventories over the Central America and Caribbean region. Spatiotemporal uncertainty and evaluation metrics of the model are presented here based on available landslides reports. This work also presents a probabilistic representation of potential landslide activity over the region which can be used to further refine and improve the real-time landslide hazard assessment system as well as better identify and characterize the uncertainties inherent in this type of regional approach. The landslide algorithm provides a flexible framework to improve hazard estimation and reduce uncertainty at any spatial and temporal scale.
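A hedged sketch of the decision-tree nowcast logic described above: a cell triggers a nowcast only if it lies in a susceptible class, its 24-hour rainfall exceeds a percentile-based threshold, and antecedent soil moisture exceeds its own percentile threshold. The percentile levels and the binary output are placeholders; the operational thresholds were calibrated regionally against historical landslide events.

import numpy as np

def landslide_nowcast(susceptible, rain_24h, rain_history, soil_moisture, soil_history,
                      rain_pct=95, soil_pct=50):
    """rain_history and soil_history are local time series used to derive percentile thresholds."""
    if not susceptible:
        return False
    rain_threshold = np.percentile(rain_history, rain_pct)
    soil_threshold = np.percentile(soil_history, soil_pct)
    # Nowcast issued only when both the triggering rainfall and the antecedent
    # soil moisture exceed their locally derived thresholds.
    return rain_24h > rain_threshold and soil_moisture > soil_threshold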
NASA Astrophysics Data System (ADS)
Segoni, S.; Battistini, A.; Rossi, G.; Rosi, A.; Lagomarsino, D.; Catani, F.; Moretti, S.; Casagli, N.
2015-04-01
We set up an early warning system for rainfall-induced landslides in Tuscany (23 000 km2). The system is based on a set of state-of-the-art intensity-duration rainfall thresholds (Segoni et al., 2014b) and makes use of LAMI (Limited Area Model Italy) rainfall forecasts and real-time rainfall data provided by an automated network of more than 300 rain gauges. The system was implemented in a WebGIS to ease the operational use in civil protection procedures: it is simple and intuitive to consult, and it provides different outputs. When switching among different views, the system is able to focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h. Moreover, the system can switch between a basic data view where a synoptic scenario of the hazard can be shown all over the region and a more in-depth view where the rainfall path of rain gauges can be displayed and constantly compared with rainfall thresholds. To better account for the variability of the geomorphological and meteorological settings encountered in Tuscany, the region is subdivided into 25 alert zones, each provided with a specific threshold. The warning system reflects this subdivision: using a network of more than 300 rain gauges, it allows for the monitoring of each alert zone separately so that warnings can be issued independently. An important feature of the warning system is that the visualization of the thresholds in the WebGIS interface may vary in time depending on when the starting time of the rainfall event is set. The starting time of the rainfall event is considered as a variable by the early warning system: whenever new rainfall data are available, a recursive algorithm identifies the starting time for which the rainfall path is closest to or overcomes the threshold. This is considered the most hazardous condition, and it is displayed by the WebGIS interface. The early warning system is used to forecast and monitor the landslide hazard in the whole region, providing specific alert levels for 25 distinct alert zones. In addition, the system can be used to gather, analyze, display, explore, interpret and store rainfall data, thus representing a potential support to both decision makers and scientists.
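The variable starting time can be illustrated with a short Python sketch: for every candidate start hour the cumulative rainfall is converted to a mean intensity and compared against an intensity-duration threshold of the form I = a * D**b; the start time that brings the rainfall path closest to (or beyond) the threshold is the one retained for display. The a and b values below are placeholders, not the published Tuscany thresholds.

import numpy as np

def most_hazardous_start(hourly_rain, a=10.0, b=-0.5):
    """hourly_rain: rainfall (mm) for the last N hours, oldest first; returns (start index, margin)."""
    best_margin, best_start = -np.inf, None
    for start in range(len(hourly_rain)):
        duration = len(hourly_rain) - start              # event duration in hours
        intensity = sum(hourly_rain[start:]) / duration  # mean intensity in mm/h
        threshold = a * duration ** b                    # intensity-duration threshold I = a * D**b
        margin = intensity - threshold                   # > 0 means the threshold is exceeded
        if margin > best_margin:
            best_margin, best_start = margin, start
    return best_start, best_margin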
The Rasch Rating Model and the Disordered Threshold Controversy
ERIC Educational Resources Information Center
Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark
2012-01-01
The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…
Interrater Agreement Evaluation: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.
2013-01-01
A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…
Seasonal variation in sports participation.
Schüttoff, Ute; Pawlowski, Tim
2018-02-01
This study explores indicators describing socio-demographics, sports participation characteristics and motives which are associated with variation in sports participation across seasons. Data were drawn from the German Socio-Economic Panel which contains detailed information on the sports behaviour of adults in Germany. Overall, two different measures of seasonal variation are developed and used as dependent variables in our regression models. The first variable measures the coefficient of (seasonal) variation in sport-related energy expenditure per week. The second variable measures whether activity drops below the threshold as defined by the World Health Organization (WHO). Results suggest that the organisational setting, the intensity and number of sports practised, and the motive for participation are strongly correlated with the variation measures used. For example, both, participation in a sports club and a commercial facility, are associated with reduced seasonal variation and a significantly higher probability of participating at a volume above the WHO threshold across all seasons. These findings give some impetus for policymaking and the planning of sports programmes as well as future research directions.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of the thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate reliability of the method of levels performed with a new, low cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
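The reliability statistics reported above follow from standard formulas; a small numpy sketch, assuming the ICC has already been obtained (e.g., from a two-way ANOVA, omitted here) and pooling both sessions to estimate the between-subject SD:

import numpy as np

def sem_and_mdc(session1, session2, icc):
    """session1, session2: per-subject thresholds (degrees C) from the two test sessions."""
    pooled_sd = np.concatenate([session1, session2]).std(ddof=1)
    sem = pooled_sd * np.sqrt(1.0 - icc)       # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2.0) * sem          # minimal detectable change at 95% confidence
    return sem, mdc95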
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
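As a companion to the abstract above, here is a toy forward model of the encoder being inverted: a leaky integrate-and-fire neuron whose threshold is redrawn from a Gaussian after every spike, driven by an arbitrary stimulus. Parameter values are arbitrary and the reconstruction itself (the paper's contribution) is not shown.

import numpy as np

def lif_random_threshold(stimulus, dt=1e-4, tau=0.02, R=1.0, theta_mean=1.0, theta_sd=0.1, seed=0):
    """stimulus: array of input samples u(t); returns spike times (s) of the LIF encoder."""
    rng = np.random.default_rng(seed)
    v = 0.0
    theta = rng.normal(theta_mean, theta_sd)          # initial random threshold
    spike_times = []
    for k, u in enumerate(stimulus):
        v += dt * (-v + R * u) / tau                  # leaky integration of the stimulus
        if v >= theta:
            spike_times.append(k * dt)
            v = 0.0                                   # reset after the spike
            theta = rng.normal(theta_mean, theta_sd)  # redraw the threshold for the next spike
    return np.array(spike_times)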
Variable percolation threshold of composites with fiber fillers under compression
NASA Astrophysics Data System (ADS)
Lin, Chuan; Wang, Hongtao; Yang, Wei
2010-07-01
The piezoresistive effect in conducting fiber-filled composites has been studied by a continuum percolation model. Simulation was performed by a Monte Carlo method that took into account both the deformation-induced fiber bending and rotation. The percolation threshold was found to rise with the compression strain, which explains the observed positive piezoresistive coefficients in such composites. The simulations unveiled the effect of the microstructure evolution during deformation. The fibers are found to align perpendicularly to the compression direction. As a fiber is bent, its effective length in making a conductive network is shortened. Both effects contribute to a larger percolation threshold and imply a positive piezoresistive coefficient according to the universal power law.
Upper stimulation threshold for retinal ganglion cell activation.
Meng, Kevin; Fellner, Andreas; Rattay, Frank; Ghezzi, Diego; Meffin, Hamish; Ibbotson, Michael R; Kameneva, Tatiana
2018-08-01
The existence of an upper threshold in electrically stimulated retinal ganglion cells (RGCs) is of interest because of its relevance to the development of visual prosthetic devices, which are designed to restore partial sight to blind patients. The upper threshold is defined as the stimulation level above which no action potentials (direct spikes) can be elicited in electrically stimulated retina. We collected and analyzed in vitro recordings from rat RGCs in response to extracellular biphasic (anodic-cathodic) pulse stimulation of varying amplitudes and pulse durations. Such responses were also simulated using a multicompartment model. We identified the individual cell variability in response to stimulation and the phenomenon known as upper threshold in all but one of the recorded cells (n = 20/21). We found that the latencies of spike responses relative to stimulus amplitude had a characteristic U-shape. In silico, we showed that the upper threshold phenomenon was observed only in the soma. For all tested biphasic pulse durations, electrode positions, and pulse amplitudes above lower threshold, a propagating action potential was observed in the distal axon. For amplitudes above the somatic upper threshold, the axonal action potential back-propagated in the direction of the soma, but the soma's low level of hyperpolarization prevented action potential generation in the soma itself. An upper threshold observed in the soma does not prevent spike conductance in the axon.
Audiometric Predictions Using SFOAE and Middle-Ear Measurements
Ellison, John C.; Keefe, Douglas H.
2006-01-01
Objective: The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal accounts for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design: The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5–8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air- and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results: The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAE predictors of hearing loss except at 4 kHz. Conclusions: In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe cases of hearing loss, they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898
Simplified 4-item criteria for polycystic ovary syndrome: A bridge too far?
Indran, Inthrani R; Huang, Zhongwei; Khin, Lay Wai; Chan, Jerry K Y; Viardot-Foucault, Veronique; Yong, Eu Leong
2018-05-30
Although the Rotterdam 2003 polycystic ovarian syndrome (PCOS) diagnostic criteria are widely used, the need to consider multiple variables makes them unwieldy in clinical practice. We propose simplified PCOS criteria wherein diagnosis is made if two of the following three items are present: (i) oligomenorrhoea, (ii) anti-mullerian hormone (AMH) above threshold and/or (iii) hyperandrogenism, defined as either testosterone above threshold and/or the presence of hirsutism. This prospective cross-sectional study consists of healthy women (n = 157) recruited at an annual hospital health screen for staff and volunteers from the university community, and a patient cohort (n = 174) comprising women referred for suspected PCOS. We used the healthy cohort to establish threshold values for serum testosterone, antral follicle counts (AFC), ovarian volume (OV) and AMH. Women from the patient cohort, classified as PCOS by the simplified PCOS criteria, AMH alone and Rotterdam 2003, were compared with respect to the prevalence of oligomenorrhoea, hyperandrogenism and metabolic indices. In healthy women, testosterone ≥1.89 nmol/L, AFC ≥22 follicles and OV ≥8.44 mL best predicted oligomenorrhoea and were used as threshold values for the PCOS criteria. An AMH level ≥37.0 pmol/L best predicted polycystic ovarian morphology. AMH alone as a single biomarker demonstrated poor specificity (58.9%) for PCOS compared to Rotterdam 2003. In contrast, there was a 94% overlap between women selected as PCOS by the simplified PCOS criteria and by Rotterdam 2003. The population characteristics of these two groups of PCOS women showed no significant mean differences in androgenic, ovarian, AMH and metabolic (BMI, HOMA-IR) variables. Our data support the simplified PCOS criteria with population-specific thresholds for the diagnosis of PCOS. Their ability to replace ovarian ultrasound biometry with the highly correlated variable AMH, and the use of testosterone as a single marker for hyperandrogenaemia alongside the key symptoms of oligomenorrhoea and hirsutism, confers significant clinical potential for the diagnosis of PCOS. © 2018 John Wiley & Sons Ltd.
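The two-of-three rule itself is straightforward to operationalise. The snippet below is a sketch using the abstract's reported cut-offs (AMH ≥37.0 pmol/L, testosterone ≥1.89 nmol/L); the function name and signature are illustrative, and the thresholds are population-specific and would need local validation.

```python
def simplified_pcos(oligomenorrhoea: bool, amh_pmol_l: float,
                    testosterone_nmol_l: float, hirsutism: bool,
                    amh_cut: float = 37.0, testo_cut: float = 1.89) -> bool:
    """Two-of-three rule sketched from the abstract; cut-offs are the
    population-specific values reported there, not universal constants."""
    items = [
        oligomenorrhoea,
        amh_pmol_l >= amh_cut,
        (testosterone_nmol_l >= testo_cut) or hirsutism,
    ]
    return sum(items) >= 2

# Example: oligomenorrhoea plus elevated AMH meets the criteria
print(simplified_pcos(True, 42.0, 1.2, False))   # True
```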
Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes.
Sales, Marcelo M; Campbell, Carmen Sílvia G; Morais, Pâmella K; Ernesto, Carlos; Soares-Caldeira, Lúcio F; Russo, Paulo; Motta, Daisy F; Moreira, Sérgio R; Nakamura, Fábio Y; Simões, Herbert G
2011-01-12
While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) for individuals with type 2 diabetes (T2D) has not been investigated yet. The aim was to analyze the possibility of identifying the heart rate variability threshold (HRVT) by considering the responses of parasympathetic indicators during an incremental exercise test in type 2 diabetic subjects (T2D) and non-diabetic individuals (ND). Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg/m²) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg/m²) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. HRVT was identified through the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1) over the last 60 s of each incremental stage, denoted HRVT-RMSSD and HRVT-SD1, respectively. No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. The HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid to estimate the LT and VT for both T2D and ND.
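For reference, RMSSD and SD1 are simple functions of successive R-R interval differences, and the HRVT is read from their behaviour across incremental stages. The sketch below assumes 60-second R-R segments per stage; the flattening criterion (a 3 ms floor) is an assumption for illustration, not the detection rule used in the study.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    d = np.diff(np.asarray(rr_ms, float))
    return np.sqrt(np.mean(d ** 2))

def sd1(rr_ms):
    """Poincaré-plot SD1; equals RMSSD / sqrt(2) for a stationary segment."""
    d = np.diff(np.asarray(rr_ms, float))
    return np.sqrt(np.var(d, ddof=1) / 2.0)

def hrv_threshold_stage(stage_rr_segments, floor_ms=3.0):
    """Return the first incremental stage at which SD1 has fallen to a near-floor
    value -- one of several published ways to read the HRVT; the 3 ms floor is
    an assumption, not the study's criterion."""
    sd1_by_stage = [sd1(seg) for seg in stage_rr_segments]
    for i, value in enumerate(sd1_by_stage):
        if value <= floor_ms:
            return i
    return None
```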
Anaerobic Threshold: Its Concept and Role in Endurance Sport
Ghosh, Asok Kumar
2004-01-01
The aerobic-to-anaerobic transition intensity is one of the most significant physiological variables in endurance sports. Scientists have explained the term in various ways, such as Lactate Threshold, Ventilatory Anaerobic Threshold, Onset of Blood Lactate Accumulation, Onset of Plasma Lactate Accumulation, Heart Rate Deflection Point and Maximum Lactate Steady State. All of these have a great role both in monitoring training schedules and in determining sports performance. Individuals endowed with the possibility to obtain a high oxygen uptake need to complement it with a rigorous training program in order to achieve maximal performance. If they engage in endurance events, they must also develop the ability to sustain a high fractional utilization of their maximal oxygen uptake (%VO2 max) and become physiologically efficient in performing their activity. The anaerobic threshold is more highly correlated with distance-running performance than is maximum aerobic capacity or VO2 max, because sustaining a high fractional utilization of the VO2 max for a long time delays metabolic acidosis. Training at or slightly above the anaerobic threshold intensity improves both the aerobic capacity and the anaerobic threshold level. The anaerobic threshold can also be determined from the speed-heart rate relationship in the field, without sophisticated laboratory techniques. However, controversies also exist among scientists regarding its role in high-performance sports. PMID:22977357
Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration
Platkiewicz, Jonathan; Brette, Romain
2011-01-01
Neurons spike when their membrane potential exceeds a threshold value. In central neurons, the spike threshold is not constant but depends on the stimulation. Thus, input-output properties of neurons depend both on the effect of presynaptic spikes on the membrane potential and on the dynamics of the spike threshold. Among the possible mechanisms that may modulate the threshold, one strong candidate is Na channel inactivation, because it specifically impacts spike initiation without affecting the membrane potential. We collected voltage-clamp data from the literature and we found, based on a theoretical criterion, that the properties of Na inactivation could indeed cause substantial threshold variability by itself. By analyzing simple neuron models with fast Na inactivation (one channel subtype), we found that the spike threshold is correlated with the mean membrane potential and negatively correlated with the preceding depolarization slope, consistent with experiments. We then analyzed the impact of threshold dynamics on synaptic integration. The difference between the postsynaptic potential (PSP) and the dynamic threshold in response to a presynaptic spike defines an effective PSP. When the neuron is sufficiently depolarized, this effective PSP is briefer than the PSP. This mechanism regulates the temporal window of synaptic integration in an adaptive way. Finally, we discuss the role of other potential mechanisms. Distal spike initiation, channel noise and Na activation dynamics cannot account for the observed negative slope-threshold relationship, while adaptive conductances (e.g. K+) and Na inactivation can. We conclude that Na inactivation is a metabolically efficient mechanism to control the temporal resolution of synaptic integration. PMID:21573200
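A minimal way to see the slope-threshold relationship described above is a leaky integrate-and-fire model whose threshold relaxes towards a voltage-dependent equilibrium, a caricature of fast Na inactivation. All names and parameter values in the sketch are illustrative rather than fitted to the paper's models.

```python
import numpy as np

def lif_adaptive_threshold(I, dt=1e-4, tau_m=0.01, tau_th=0.005,
                           v_rest=-70e-3, theta0=-55e-3, alpha=0.5, R=1e8):
    """Leaky integrate-and-fire neuron with a spike threshold that tracks the
    membrane potential (a caricature of Na-inactivation-driven adaptation).
    I is the input current trace (A); returns spike times (s)."""
    v, theta = v_rest, theta0
    spikes = []
    for i, inp in enumerate(I):
        v += dt / tau_m * (v_rest - v + R * inp)
        # threshold relaxes towards a voltage-dependent equilibrium value
        theta_inf = theta0 + alpha * (v - v_rest)
        theta += dt / tau_th * (theta_inf - theta)
        if v >= theta:
            spikes.append(i * dt)
            v = v_rest            # reset; refractoriness omitted for brevity
    return np.array(spikes)
```

Because theta follows slow depolarizations, only inputs that rise faster than the threshold can track them trigger spikes, which is the filtering effect described in the abstract.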
The Biological and Toxicological Activity of Gases and Vapors
Sánchez-Moreno, Ricardo; Gil-Lostes, Javier; Acree, William E.; Cometto-Muñiz, J. Enrique; Cain, William S.
2010-01-01
A large amount of data on the biological and toxicological activity of gases and vapors has been collected from the literature. Processes include sensory irritation thresholds, the Alarie mouse test, inhalation anesthesia, etc. It is shown that a single equation using only five descriptors (properties of the gases and vapors) plus a set of indicator variables for the given processes can correlate 643 biological and non-lethal toxicological activities of ‘non-reactive’ compounds with a standard deviation of 0.36 log unit. The equation is scaled to sensory irritation thresholds obtained by the procedure of Cometto-Muñiz and Cain, and provides a general equation for the prediction of sensory irritation thresholds in man. It is suggested that differences in biological/toxicological activity arise primarily from transport from the gas phase to a receptor phase or area, except for odor detection thresholds where interaction with a receptor(s) is important. PMID:19913608
Effect of threshold disorder on the quorum percolation model
NASA Astrophysics Data System (ADS)
Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel
2016-07-01
We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis shows that the order parameter is weakly self-averaging, with an exponent independent of the threshold disorder. Lastly, we show that the effects of the threshold and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.
Wetzel, Hermann
2006-01-01
In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
Lansdowne, Krystal; Strauss, David G; Scully, Christopher G
2016-01-01
The cacophony of alerts and alarms produced by medical devices in a hospital results in alarm fatigue. The pulse oximeter is one of the most common sources of alarms. One way to reduce alarm rates is to adjust alarm settings at the bedside. This study aimed to retrospectively examine the effect of individual pulse oximeter alarm settings on alarm rates and inter- and intra-patient variability. Nine hundred sixty-two previously collected intensive care unit (ICU) patient records were obtained from the Multiparameter Intelligent Monitoring in Intensive Care II Database (Beth Israel Deaconess Medical Center, Boston, MA). Inclusion criteria included patient records that contained SpO2 trend data sampled at 1 Hz for at least 1 h and a matching clinical record. SpO2 alarm rates were simulated by applying a range of thresholds (84, 86, 88, and 90%) and delay times (10 to 60 s) to the SpO2 data. Patient records with at least 12 h of SpO2 data were examined for variability in alarm rate over time. Decreasing SpO2 thresholds and increasing delay times resulted in decreased alarm rates. A limited number of patient records accounted for most alarms, and this concentration increased as alarm settings were loosened (the top 10% of patient records were responsible for 57.4% of all alarms at an SpO2 threshold of 90% and a 15 s delay, and for 81.6% at an SpO2 threshold of 84% and a 45 s delay). Alarm rates were not consistent over time for individual patients, with periods of high and low alarms for all alarm settings. Pulse oximeter SpO2 alarm rates are variable between patients and over time, and the alarm rate and the extent of inter- and intra-patient variability can be affected by the alarm settings. Personalized alarm settings for a patient's current status may help to reduce alarm fatigue for nurses.
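The alarm-rate simulation described above amounts to sliding a threshold-plus-delay rule over a 1-Hz SpO2 trend. Below is a sketch of that counting logic; the function name and the re-arming behaviour are assumptions, not the study's code.

```python
import numpy as np

def count_alarms(spo2, threshold=90, delay_s=15, fs_hz=1):
    """Count SpO2 alarms for a given low-SpO2 threshold and annunciation delay,
    assuming a 1-Hz trend. An alarm fires when SpO2 stays below the threshold
    for at least `delay_s` seconds; a new alarm requires the signal to recover
    above the threshold first (assumed re-arming rule)."""
    below = np.asarray(spo2) < threshold
    min_len = int(delay_s * fs_hz)
    alarms, run, armed = 0, 0, True
    for b in below:
        if b:
            run += 1
            if armed and run >= min_len:
                alarms += 1
                armed = False
        else:
            run = 0
            armed = True
    return alarms
```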
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine the validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. Validation study comparing actigraphy against the accepted gold standard polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from the mean bias with 95% confidence limits, the Pearson product-moment correlation and the associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (-4.1 min), whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5 min). The standard error of the estimate was similar across all thresholds: total sleep time ∼25 min, sleep efficiency ∼4.5%, wake after sleep onset ∼21 min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
Shiraishi, Yasuyuki; Katsumata, Yoshinori; Sadahiro, Taketaro; Azuma, Koichiro; Akita, Keitaro; Isobe, Sarasa; Yashima, Fumiaki; Miyamoto, Kazutaka; Nishiyama, Takahiko; Tamura, Yuichi; Kimura, Takehiro; Nishiyama, Nobuhiro; Aizawa, Yoshiyasu; Fukuda, Keiichi; Takatsuki, Seiji
2018-01-07
It has never been possible to immediately evaluate heart rate variability (HRV) during exercise. We aimed to visualize the real-time changes in the power spectrum of HRV during exercise and to investigate its relationship to the ventilatory threshold (VT). Thirty healthy subjects (29.1±5.7 years of age) and 35 consecutive patients (59.0±13.2 years of age) with myocardial infarctions underwent cardiopulmonary exercise tests with a ramp protocol ergometer. The HRV was continuously assessed with power spectral analyses using the maximum entropy method and projected on a screen without delay. During exercise, a significant decrease in the high frequency (HF) was followed by a drastic shift in the power spectrum of the HRV, with a periodic augmentation in the low frequency/HF (L/H) and a steady low HF. When the HRV threshold (HRVT) was defined as the conversion from a predominant HF to a predominant L/H, the VO2 at the HRVT (HRVT-VO2) was substantially correlated with the VO2 at the lactate threshold and at the VT in the healthy subjects (r=0.853 and 0.921, respectively). The mean difference between each threshold (0.65 mL/kg per minute for the lactate threshold and HRVT, 0.53 mL/kg per minute for the VT and HRVT) was nonsignificant (P>0.05). Furthermore, the HRVT-VO2 was also correlated with the VT-VO2 in the myocardial infarction patients (r=0.867), and the mean difference was -0.72 mL/kg per minute and was nonsignificant (P>0.05). An HRV analysis with our method enabled real-time visualization of the changes in the power spectrum during exercise. This can provide additional information for detecting the VT. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women
Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.
2015-01-01
Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007
1981-12-01
occurred on the Introversion Scale of the MMPI. A review of the use of psychological tests on MTs was accomplished by Driver and Feeley [1974...programs, Gondek [1981] has recommended that the best procedure for variable inclusion when using a stepwise procedure is to use the threshold default...values supplied by the package, since no simple rules exist for determining entry or removal thresholds for partial F's, tolerance statistics, or any of
Mo, Xueyin; Zhang, Jinglu; Fan, Yuan; Svensson, Peter; Wang, Kelun
2015-01-01
To explore the hypothesis that burning mouth syndrome (BMS) is probably a neuropathic pain condition, thermal and mechanical sensory and pain thresholds were tested and compared with age- and gender-matched control participants using a standardized battery of psychophysical techniques. Twenty-five BMS patients (men: 8, women: 17, age: 49.5 ± 11.4 years) and 19 age- and gender-matched healthy control participants were included. The cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical detection threshold (MDT) and mechanical pain threshold (MPT) were measured, in accordance with the German Research Network on Neuropathic Pain guidelines, at the following four sites: the dorsum of the left hand (hand), the skin at the mental foramen (chin), the tip of the tongue (tongue), and the mucosa of the lower lip (lip). Statistical analysis was performed using ANOVA with repeated measures to compare the means within and between groups. Furthermore, Z-score profiles were generated, and exploratory correlation analyses between QST and clinical variables were performed. Two-tailed tests with a significance level of 5% were used throughout. CDTs (P < 0.02) were significantly lower (less sensitivity) and HPTs (P < 0.001) were significantly higher (less sensitivity) at the tongue and lip in BMS patients compared to control participants. WDT (P = 0.007) was also significantly higher at the tongue in BMS patients compared to control subjects. There were no significant differences in MDT and MPT between the BMS patients and healthy subjects at any of the four test sites. Z-scores showed that significant loss of function can be identified for CDT (Z-scores = -0.9±1.1) and HPT (Z-scores = 1.5±0.4). There were no significant correlations between QST and clinical variables (pain intensity, duration, depression scores). BMS patients had a significant loss of thermal function but not mechanical function, supporting the hypothesis that BMS may be a probable neuropathic pain condition. Further studies including e.g. electrophysiological or imaging techniques are needed to clarify the underlying mechanisms of BMS.
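The Z-score profiles referred to above normalise each patient value against the matched control distribution. Below is a minimal sketch; the sign convention and the optional log-transform are assumptions for illustration, not the reference implementation used in QST studies.

```python
import numpy as np

def qst_z_score(patient_value, control_values, flip_sign=False, log_transform=False):
    """Z = (patient - control mean) / control SD for one QST parameter.
    For some parameters (e.g. heat pain threshold) the sign is flipped so that
    negative Z consistently indicates loss of function; conventions vary."""
    x = np.log10(patient_value) if log_transform else float(patient_value)
    ref = np.asarray(control_values, float)
    if log_transform:
        ref = np.log10(ref)
    z = (x - ref.mean()) / ref.std(ddof=1)
    return -z if flip_sign else z

# Example: a patient's cold detection threshold against control values (°C change)
print(qst_z_score(2.4, [1.0, 1.2, 0.9, 1.4, 1.1], flip_sign=True))
```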
Zamengo, Luca; Frison, Giampietro; Tedeschi, Gianpaola; Frasson, Samuela; Zancanaro, Flavio; Sciarrone, Rocco
2014-10-01
The measurement of blood-alcohol content (BAC) is a crucial analytical determination required to assess if an offence (e.g. driving under the influence of alcohol) has been committed. For various reasons, results of forensic alcohol analysis are often challenged by the defence. As a consequence, measurement uncertainty becomes a critical topic when assessing compliance with specification limits for forensic purposes. The aims of this study were: (1) to investigate major sources of variability for BAC determinations; (2) to estimate measurement uncertainty for routine BAC determinations; (3) to discuss the role of measurement uncertainty in compliance assessment; (4) to set decision rules for a multiple BAC threshold law, as provided in the Italian Highway Code; (5) to address the topic of the zero-alcohol limit from the forensic toxicology point of view; and (6) to discuss the role of significant figures and rounding errors on measurement uncertainty and compliance assessment. Measurement variability was investigated by the analysis of data collected from real cases and internal quality control. The contribution of both pre-analytical and analytical processes to measurement variability was considered. The resulting expanded measurement uncertainty was 8.0%. Decision rules for the multiple BAC threshold Italian law were set by adopting a guard-banding approach. 0.1 g/L was chosen as cut-off level to assess compliance with the zero-alcohol limit. The role of significant figures and rounding errors in compliance assessment was discussed by providing examples which stressed the importance of these topics for forensic purposes. Copyright © 2014 John Wiley & Sons, Ltd.
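A guard-banding decision rule of the kind discussed can be written in a few lines. The sketch below uses the reported 8.0% expanded uncertainty and ignores the separate questions of significant figures and rounding; the function name and wording of the outcomes are illustrative.

```python
def bac_decision(measured_g_l, limit_g_l, rel_expanded_uncertainty=0.080):
    """Guard-banded compliance decision for a BAC threshold: the result is
    reported as above the limit only when the lower bound of the expanded-
    uncertainty interval exceeds it."""
    u = measured_g_l * rel_expanded_uncertainty      # expanded uncertainty (g/L)
    lower_bound = measured_g_l - u
    return "above limit" if lower_bound > limit_g_l else "not proven above limit"

# Example against the 0.8 g/L Italian threshold
print(bac_decision(0.86, 0.8))   # 0.86 - 8% = 0.79 g/L -> "not proven above limit"
```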
Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit
2015-01-01
The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
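A sketch of the two modelling routes described above (a classification tree for the Yes/No outcome, and a regression tree whose predicted end-of-scenario core temperature is compared against the 38°C cut-off), using scikit-learn with made-up feature values purely for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Hypothetical features: age, baseline HR, baseline core temp (°C), BMI, scenario duration (min)
X = np.array([[35, 72, 37.1, 26.4, 14.0],
              [42, 80, 37.4, 29.1, 16.5],
              [29, 66, 36.9, 24.8, 13.0],
              [51, 85, 37.6, 31.2, 17.2]])
crossed = np.array([0, 1, 0, 1])            # did core temp exceed 38 °C? (made-up labels)
end_temp = np.array([37.6, 38.3, 37.4, 38.6])  # end-of-scenario core temp (made-up values)

clf = DecisionTreeClassifier(max_depth=3).fit(X, crossed)
reg = DecisionTreeRegressor(max_depth=3).fit(X, end_temp)

new_firefighter = np.array([[45, 78, 37.3, 28.0, 15.0]])
print(clf.predict(new_firefighter))              # direct Yes/No prediction
print(reg.predict(new_firefighter) >= 38.0)      # regression tree vs 38 °C cut-off
```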
Further studies to extend and test the area-time-integral technique applied to satellite data
NASA Technical Reports Server (NTRS)
Smith, Paul L.; Vonderhaar, Thomas H.
1993-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'variable-threshold approach'. In the former approach, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are most closely related to the corresponding rainfall amounts. Results thus far have indicated that a strong correlation exists between the rain volumes and the satellite ATI values, but the optimum threshold for this relationship seems to differ from one geographic location to another. The difference is probably related to differences in the basic precipitation mechanisms that dominate in the different regions. The average rainfall rate associated with each cloudy pixel is also found to vary across the spectrum of ATI values. Work on the second, or 'variable-threshold', approach for determining the satellite ATI values was essentially suspended during this period due to exhaustion of project funds. Most of the ATI work thus far has dealt with cloud clusters from the Lagrangian or 'floating-target' point of view. For many purposes, however, the Eulerian or 'fixed-target' perspective is more appropriate. For a very large target area encompassing entire cluster life histories, the rain volume-ATI relationship would obviously be the same in either case. The important question for the Eulerian perspective is how small the fixed area can be made while maintaining consistency in that relationship.
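Under the fixed-threshold approach, the ATI is simply cold-cloud area integrated over time. The sketch below uses a 235 K brightness-temperature threshold, a common literature value rather than this project's optimum, and assumes equal-area pixels.

```python
import numpy as np

def area_time_integral(ir_frames_k, pixel_area_km2, dt_hours, tb_threshold_k=235.0):
    """Fixed-threshold ATI: for each IR image, sum the area of pixels colder than
    the brightness-temperature threshold and integrate over the cluster lifetime
    (units: km^2 h)."""
    ati = 0.0
    for frame in ir_frames_k:
        cold_pixels = np.count_nonzero(np.asarray(frame) <= tb_threshold_k)
        ati += cold_pixels * pixel_area_km2 * dt_hours
    return ati

# Rain volume is then estimated from a fitted linear relation, e.g. V ≈ a * ATI,
# where the coefficient a appears to depend on the geographic region.
```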
Predicting the susceptibility to gully initiation in data-poor regions
NASA Astrophysics Data System (ADS)
Dewitte, Olivier; Daoudi, Mohamed; Bosco, Claudio; Van Den Eeckhaut, Miet
2015-01-01
Permanent gullies are common features in many landscapes and quite often they represent the dominant soil erosion process. Once a gully has initiated, field evidence shows that gully channel formation and headcut migration rapidly occur. In order to prevent the undesired effects of gullying, there is a need to predict the places where new gullies might initiate. From detailed field measurements, studies have demonstrated strong inverse relationships between slope gradient of the soil surface (S) and drainage area (A) at the point of channel initiation across catchments in different climatic and morphological environments. Such slope-area thresholds (S-A) can be used to predict locations in the landscape where gullies might initiate. However, acquiring S-A requires detailed field investigations and accurate high resolution digital elevation data, which are usually difficult to acquire. To circumvent this issue, we propose a two-step method that uses published S-A thresholds and a logistic regression analysis (LR). S-A thresholds from the literature are used as proxies of field measurement. The method is calibrated and validated on a watershed, close to the town of Algiers, northern Algeria, where gully erosion affects most of the slopes. The gullies extend up to several kilometres in length and cover 16% of the study area. First we reconstruct the initiation areas of the existing gullies by applying S-A thresholds for similar environments. Then, using the initiation area map as the dependent variable with combinations of topographic and lithological predictor variables, we calibrate several LR models. It provides relevant results in terms of statistical reliability, prediction performance, and geomorphological significance. This method using S-A thresholds with data-driven assessment methods like LR proves to be efficient when applied to common spatial data and establishes a methodology that will allow similar studies to be undertaken elsewhere.
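The two-step method can be mimicked as follows: apply a published slope-area threshold to flag candidate initiation cells, then fit a logistic regression on terrain and lithology predictors. The coefficients, predictor choices and dummy data in this sketch are placeholders, not the calibrated model for the Algiers watershed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def exceeds_sa_threshold(slope, area_m2, a=0.05, b=0.3):
    """Slope-area criterion for gully initiation, S >= a * A**(-b);
    a and b stand in for published values from a comparable environment."""
    return slope >= a * np.power(area_m2, -b)

# Step 2 (sketch): reconstructed initiation cells as the dependent variable,
# terrain/lithology attributes as predictors (dummy data for illustration).
X = np.random.rand(500, 4)            # e.g. slope, curvature, aspect, lithology class
y = np.random.randint(0, 2, 500)      # 1 = cell flagged as initiation area by the S-A step
model = LogisticRegression(max_iter=1000).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]   # probability map of gully initiation
```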
Impact of beta-blockers on cardiopulmonary exercise testing in patients with advanced liver disease.
Wallen, M P; Hall, A; Dias, K A; Ramos, J S; Keating, S E; Woodward, A J; Skinner, T L; Macdonald, G A; Arena, R; Coombes, J S
2017-10-01
Patients with advanced liver disease may develop portal hypertension that can result in variceal haemorrhage. Beta-blockers reduce portal pressure and minimise haemorrhage risk. These medications may attenuate measures of cardiopulmonary performance, such as the ventilatory threshold and peak oxygen uptake measured via cardiopulmonary exercise testing. To determine the effect of beta-blockers on cardiopulmonary exercise testing variables in patients with advanced liver disease. This was a cross-sectional analysis of 72 participants who completed a cardiopulmonary exercise test before liver transplantation. All participants remained on their usual beta-blocker dose and timing prior to the test. Variables measured during cardiopulmonary exercise testing included the ventilatory threshold, peak oxygen uptake, heart rate, oxygen pulse, the oxygen uptake efficiency slope and the ventilatory equivalents for carbon dioxide slope. Participants taking beta-blockers (n = 28) had a lower ventilatory threshold (P <.01) and peak oxygen uptake (P = .02), compared to participants not taking beta-blockers. After adjusting for age, the model of end-stage liver-disease score, liver-disease aetiology, presence of refractory ascites and ventilatory threshold remained significantly lower in the beta-blocker group (P = .04). The oxygen uptake efficiency slope was not impacted by beta-blocker use. Ventilatory threshold is reduced in patients with advanced liver disease taking beta-blockers compared to those not taking the medication. This may incorrectly risk stratify patients on beta-blockers and has implications for patient management before and after liver transplantation. The oxygen uptake efficiency slope was not influenced by beta-blockers and may therefore be a better measure of cardiopulmonary performance in this patient population. © 2017 John Wiley & Sons Ltd.
A method for managing re-identification risk from small geographic areas in Canada
2010-01-01
Background A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of individuals who are unique on the relevant variables (the quasi-identifiers) approaches zero. However, using a uniqueness value of zero is quite a stringent threshold, and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods We estimated uniqueness for urban Forward Sortation Areas (FSAs) using the 2001 long-form Canadian census data representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds considerably fewer records compared to the 0% threshold would be considered to be in small areas and therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the potential invasion of privacy if the data is disclosed, and the motives and capacity of the data recipient to re-identify the data. Conclusion The models we developed can be used to manage the re-identification risk from small geographic areas. Being able to choose among three possible thresholds, a data custodian can adjust the definition of "small geographic area" to the nature of the data and recipient. PMID:20361870
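A toy version of the Methods step, predicting whether an area's uniqueness exceeds a chosen threshold from its population size and number of equivalence classes, might look like the sketch below; the training values, log transforms and 0.5 probability cut-off are assumptions, not the paper's fitted models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training table: one row per geographic area (values are made up)
pop_size = np.array([800, 5200, 15000, 400, 23000, 1200])
equiv_classes = np.array([360, 360, 1440, 96, 1440, 360])   # product of quasi-identifier levels
uniq_gt_5pct = np.array([1, 0, 0, 1, 0, 1])                  # label: uniqueness above 5%?

X = np.column_stack([np.log(pop_size), np.log(equiv_classes)])
model = LogisticRegression().fit(X, uniq_gt_5pct)

# Flag a new area as "too small" when the predicted probability of exceeding
# the chosen uniqueness threshold is high.
new_area = np.column_stack([np.log([3000]), np.log([720])])
too_small = model.predict_proba(new_area)[0, 1] > 0.5
print(too_small)
```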
Streamflow losses in the Black Hills of western South Dakota
Hortness, Jon E.; Driscoll, Daniel G.
1998-01-01
Losses occur in numerous streams that cross outcrops of various sedimentary rocks that are exposed around the periphery of the Black Hills of South Dakota. These streamflow losses are recognized as an important source of local recharge to regional bedrock aquifers. Most streams lose all of their flow up to some threshold rate. Streamflow is maintained through a loss zone when the threshold is exceeded. Streamflow records for 86 measurement sites are used to determine bedrock loss thresholds for 24 area streams, which have individual loss thresholds that range from negligible (no loss) to as much as 50 cubic feet per second. In addition, insights are provided regarding springflow that occurs in the immediate vicinity of selected loss zones. Most losses occur to outcrops of the Madison Limestone and Minnelusa Formation. Losses to the Deadwood Formation probably are minimal. Losses to the Minnekahta Limestone generally are small; however, they are difficult to quantify because of potential losses to extensive alluvial deposits that commonly are located near Minnekahta outcrops. Loss thresholds for each stream are shown to be relatively constant, without measurable effects from streamflow rates or duration of flow through the loss zones. Calculated losses for measurements made during high-flow conditions generally have larger variability than calculated losses for low-flow conditions; however, consistent relations between losses and streamflow have not been identified. Some of this variability results from the inability to account for tributary inflows and changes in storage. Calculated losses are shown to decrease, in some cases, during periods of extended flow through loss zones. Decreased 'net' losses, however, generally can be attributed to springflow (ground-water discharge) within a loss zone, which may occur during prolonged periods of wet climatic conditions. Losses to unsaturated alluvial deposits located adjacent to the stream channels are found to have significant effects on determination of bedrock losses. Large losses occur in filling initial storage in unsaturated alluvial deposits downstream from loss zones, when bedrock loss thresholds are first exceeded. Losses to alluvial deposits in the range of tens of cubic feet per second and alluvial storage capacities in the range of hundreds of acre-feet are documented. Significant changes in loss thresholds for Grace Coolidge Creek, Spring Creek, and Whitewood Creek are documented. Introduction of large quantities of fine-grained sediments into these stream channels may have affected loss thresholds for various periods of time.
NASA Astrophysics Data System (ADS)
Segoni, S.; Battistini, A.; Rossi, G.; Rosi, A.; Lagomarsino, D.; Catani, F.; Moretti, S.; Casagli, N.
2014-10-01
We set up an early warning system for rainfall-induced landslides in Tuscany (23 000 km2). The system is based on a set of state-of-the-art intensity-duration rainfall thresholds (Segoni et al., 2014b), and makes use of LAMI rainfall forecasts and of real-time rainfall data provided by an automated network of more than 300 rain gauges. The system was implemented in a WebGIS to ease its operational use in civil protection procedures: it is simple and intuitive to consult, and it provides different outputs. Switching among different views, the system is able to focus both on the monitoring of real-time data and on forecasting at different lead times up to 48 h. Moreover, the system can switch between a very straightforward view, where a synoptic scenario of the hazard is shown over the whole region, and a more in-depth view, where the rainfall paths of the rain gauges can be displayed and constantly compared with the rainfall thresholds. To better account for the high spatial variability of the physical features, which affects the relationship between rainfall and landslides, the region is subdivided into 25 alert zones, each provided with a specific threshold. The warning system reflects this subdivision: using a network of 332 rain gauges, it allows monitoring each alert zone separately, and warnings can be issued independently from one alert zone to another. An important feature of the warning system is the use of thresholds that may vary in time, adapting to the rainfall path recorded by the rain gauges. Depending on when the starting time of the rainfall event is set, the comparison with the threshold may produce different outcomes. Therefore, a recursive algorithm was developed to check all possible starting times against the thresholds, highlighting the worst scenario and showing in the WebGIS interface at what time and by how much the rainfall path has exceeded or will exceed the most critical threshold. Besides forecasting and monitoring the hazard scenario over the whole region, with hazard levels differentiated for the 25 distinct alert zones, the system can be used to gather, analyze, visualize, explore, interpret and store rainfall data, thus representing a potential support to both decision makers and scientists.
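The exhaustive check over candidate starting times can be sketched as a scan of the rainfall path against an intensity-duration curve I = a·D^b. The coefficients below are placeholders; in the operational system each of the 25 alert zones would use its own threshold pair.

```python
import numpy as np

def worst_exceedance(rain_mm, a=10.0, b=-0.5, dt_h=1.0):
    """Check every possible starting time of the recorded/forecast rainfall path
    against an intensity-duration threshold I = a * D**b and return the largest
    ratio of observed intensity to threshold intensity, with the start time and
    duration at which it occurs. a and b are placeholder coefficients."""
    rain = np.asarray(rain_mm, float)
    worst, worst_start, worst_dur = 0.0, None, None
    for start in range(len(rain)):
        cumulative = np.cumsum(rain[start:])
        for k, total in enumerate(cumulative, start=1):
            duration = k * dt_h
            intensity = total / duration
            ratio = intensity / (a * duration ** b)
            if ratio > worst:
                worst, worst_start, worst_dur = ratio, start, duration
    return worst, worst_start, worst_dur   # ratio > 1 means the threshold is exceeded
```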
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
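The thresholding step itself reduces to a global comparison on the CT volume before surface extraction. A minimal sketch follows; the 300 HU default is only a common starting point for bone, which is exactly the kind of value the study argues should be tuned manually rather than taken from software defaults.

```python
import numpy as np

def threshold_to_mask(ct_hu, lower_hu=300, upper_hu=None):
    """Binary bone mask by global thresholding of a CT volume in Hounsfield
    units, the step that precedes STL generation (e.g. via marching cubes)."""
    volume = np.asarray(ct_hu)
    mask = volume >= lower_hu
    if upper_hu is not None:
        mask &= volume <= upper_hu
    return mask
```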
Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K
2013-09-01
Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.
Identification of sleep bruxism with an ambulatory wireless recording system.
Inano, Shinji; Mizumori, Takahiro; Kobayashi, Yasuyoshi; Sumiya, Masakazu; Yatani, Hirofumi
2013-01-01
To examine whether an ambulatory bruxism recording system, including a biologic monitor, that measures sleep variables and sympatho-vagal balance can specifically identify sleep bruxism (SB) at home. Twenty-six volunteers, including 16 SB subjects, were recruited. Each participant recorded his or her electromyogram (EMG), sympatho-vagal balance, and sound level for 3 consecutive nights using an audio-video recorder to identify SB. Data of sleep variables were compared among the 3 experimental nights. The episodes were classified into SB episodes with and without grinding and non-SB episodes. EMG patterns, amplitude, sympatho-vagal balance, and sound level of all episodes were analyzed so as to determine the appropriate thresholds to detect SB episodes and grinding sound. Then, all episodes without video-recording data were classified into SB and non-SB episodes by using the appropriate thresholds, and the sensitivity and specificity to detect SB episodes were calculated. With regard to sleep variables, there were no significant differences except for sleep latency between the first and second nights. The appropriate EMG pattern and thresholds of amplitude, sympatho-vagal balance, and sound level were phasic or mixed EMG pattern, 20% of maximum voluntary contraction, mean + 1 SD, and mean + 2 SDs, respectively. The sensitivity and specificity to detect SB episodes were 88.4% and 74.2%, respectively. The results suggest that this system enables the detection of SB episodes at home with considerably high accuracy and little interference with sleep.
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
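The quantile-regression step, fitting an upper percentile of the MD standard deviation as a function of MD, can be reproduced with statsmodels. The data below are simulated purely for illustration; the 95th-percentile line is one way to express limits of test-retest variability as a function of damage.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

# Simulated example: md = mean of five repeated MD values per eye (dB),
# sd = their standard deviation; real data would come from the repeat tests.
rng = np.random.default_rng(0)
md = rng.uniform(-20, 2, 74)
sd = 0.5 + 0.06 * np.abs(md) + rng.normal(0, 0.2, 74).clip(min=0)

X = sm.add_constant(md)
q95 = QuantReg(sd, X).fit(q=0.95)   # upper limit of variability as a function of damage
print(q95.params)                    # intercept and slope of the 95th-percentile line
```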
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
Continuous-variable quantum homomorphic signature
NASA Astrophysics Data System (ADS)
Li, Ke; Shang, Tao; Liu, Jian-wei
2017-10-01
Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to its spectral characteristics, quantum information can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information at lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows the proposed scheme is secure against replay, forgery and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.
Casanova, I; Diaz, A; Pinto, S; de Carvalho, M
2014-04-01
The technique of threshold tracking to test axonal excitability gives information about nodal and internodal ion channel function. We aimed to investigate the variability of motor excitability measurements in healthy controls, taking into account age, gender, body mass index (BMI) and small changes in skin temperature. We examined the left median nerve of 47 healthy controls using the automated threshold-tracking program QTRAC. Statistical multiple regression analysis was applied to test the relationship between nerve excitability measurements and subject variables. Comparisons between genders did not find any significant difference (P>0.2 for all comparisons). Multiple regression analysis showed that motor amplitude decreases with age and temperature, stimulus-response slope decreases with age and BMI, and accommodation half-time decreases with age and temperature. The changes related to demographic features in the TRONDE protocol parameters are small and less important than in conventional nerve conduction studies. Nonetheless, our results underscore the relevance of careful temperature control, and indicate that interpretation of stimulus-response slope and accommodation half-time should take into account age and BMI. In contrast, gender is not of major relevance to axonal threshold findings in motor nerves. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Variable threshold algorithm for division of labor analyzed as a dynamical system.
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro
2014-12-01
Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
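As a companion to the description above, here is a minimal, hedged sketch of a response-threshold division-of-labor model in which thresholds adapt over time; the sigmoidal response function, the update rule and all parameter values are illustrative and are not the exact algorithm analyzed in the paper.

```python
# Minimal sketch of a response-threshold division-of-labor model with adapting
# (variable) thresholds; parameter values and the update rule are illustrative,
# not the exact algorithm analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 20, 200
theta = rng.uniform(0.2, 0.8, n_agents)   # individual response thresholds
stimulus = 0.5
delta, alpha = 0.05, 0.01                 # stimulus growth / work efficiency
xi, phi = 0.02, 0.01                      # threshold learning / forgetting rates

for _ in range(n_steps):
    # Probability of engaging the task rises with stimulus and falls with threshold
    p_engage = stimulus**2 / (stimulus**2 + theta**2)
    working = rng.random(n_agents) < p_engage
    # Task demand grows on its own and is reduced by active workers
    stimulus = max(0.0, stimulus + delta - alpha * working.sum())
    # Specialization: workers lower their threshold, idle agents raise it
    theta = np.clip(theta - xi * working + phi * ~working, 0.01, 1.0)

print(f"final stimulus={stimulus:.3f}, mean threshold={theta.mean():.3f}")
```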
Yang, Yang; Pu, Fang; Lv, Xiaoning; Li, Shuyu; Li, Jing; Li, Deyu; Li, Minggao
2015-01-01
Galvanic vestibular stimulation (GVS) can be used to study the body's response to vestibular stimuli. This study aimed to investigate whether postural responses to GVS were different between pilots and the general populace. Bilateral bipolar GVS was applied with a constant-current profile to 12 pilots and 12 control subjects via two electrodes placed over the mastoid processes. Both GVS threshold and the center of pressure's trajectory (COP's trajectory) were measured. Position variability of COP during spontaneous body sway and peak displacement of COP during GVS-induced body sway were calculated in the medial-lateral direction. Spontaneous body sway was slight for all subjects, and there was no significant difference in the value of COP position variability between the pilots and controls. Both the GVS threshold and magnitude of GVS-induced body deviation were similar for different GVS polarities. GVS thresholds were similar between the two groups, but the magnitude of GVS-induced body deviation in the controls was significantly larger than that in the pilots. The pilots showed less GVS-induced body deviation, meaning that pilots may have a stronger ability to suppress vestibular illusions. PMID:25632395
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between a thunderstorm and no-thunderstorm event. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used in order to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal. The thresholds were validated in two ways: through the control data set, and comparing the calculated indices thresholds with the values of indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, and then SI, KI and TT. The BI had the poorest skill scores.
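To illustrate how a threshold value test of this kind can be run, the sketch below scans candidate thresholds for a stability index against synthetic thunderstorm observations and picks the threshold with the best skill; the skill measure used here (true skill statistic) and the toy data are assumptions, not the paper's exact scores.

```python
# Hedged sketch: scanning candidate thresholds for a stability index (e.g., the
# Lifted Index) and choosing the one with the best skill against observed
# thunderstorm days. The skill measure (true skill statistic) and the synthetic
# data are illustrative; the paper's exact scores may differ.
import numpy as np

rng = np.random.default_rng(1)
lifted_index = rng.normal(0.0, 4.0, 500)              # synthetic soundings
storm = lifted_index + rng.normal(0, 2.5, 500) < -2   # synthetic "observed" storms

def true_skill_statistic(forecast, observed):
    hits = np.sum(forecast & observed)
    misses = np.sum(~forecast & observed)
    false_alarms = np.sum(forecast & ~observed)
    correct_neg = np.sum(~forecast & ~observed)
    pod = hits / (hits + misses)                      # probability of detection
    pofd = false_alarms / (false_alarms + correct_neg)  # probability of false detection
    return pod - pofd

candidates = np.arange(-8, 4, 0.5)
scores = [true_skill_statistic(lifted_index <= thr, storm) for thr in candidates]
best = candidates[int(np.argmax(scores))]
print(f"optimal LI threshold ~ {best:.1f} (TSS = {max(scores):.2f})")
```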
From innervation density to tactile acuity: 1. Spatial representation.
Brown, Paul B; Koerber, H Richard; Millecchia, Ronald
2004-06-11
We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.
Passos, L T; Cruz, E A da; Fischer, V; Porciuncula, G C da; Werncke, D; Dalto, A G C; Stumpf, M T; Vizzotto, E F; da Silveira, I D B
2017-04-01
Lameness can negatively affect production, but there is still controversy about the perception of pain in dairy cows. This study aimed to verify the effects of hoof affections in dairy cows on locomotion score, physiological attributes, pressure nociceptive threshold, and thermographic variables, as well as to assess improvement in these variables after corrective trimming and treatment. Thirty-four lame lactating cows were gait-scored, and all cows with locomotion score ≥4 were retained for this study 1 day before trimming. Lame cows were diagnosed, pressure nociceptive thresholds at sound and affected hooves were measured, thermographic images were recorded, and physiological attributes were evaluated. Hooves with lesions were trimmed and treated, and cows were re-evaluated 1 week after these procedures. The experimental design was completely randomized. Each cow was considered an experimental unit and traits were analyzed using paired t test, linear correlation, and linear regression. Digital and interdigital dermatitis were classified as infectious diseases, while laminitis sequels, sole ulcers, and white line lesions were classified as non-infectious diseases. After 1 week, the locomotion score was reduced on average by 1.5 points. Trimming increased the pressure nociceptive threshold for cows with non-infectious affections and tended to increase it for cows with infectious affections. Physiological attributes and thermographic values did not change with trimming. Trimming and treatment have beneficial effects on animal welfare, as gait is improved and sensitivity to pain is reduced.
NASA Astrophysics Data System (ADS)
Avice, J.; Piombini, H.; Boscher, C.; Belleville, P.; Vaudel, G.; Brotons, G.; Ruello, P.; Gusev, V.
2017-11-01
The MegaJoule Laser (LMJ) for inertial confinement fusion experiments is currently in operation at CEA-CESTA in France. All the lenses are coated with an antireflective (AR) layer to optimize the light power transmission. This AR layer is manufactured by a sol-gel process, a soft chemical route, combined with a liquid-phase coating technique to produce a thin metal-oxide film. These optical components are hardened in ammonia vapour to mechanically reinforce the AR coating and make it easier to handle. This hardening reduces the layer thickness, thereby increasing its stiffness and sometimes causing crazing of the layer. Because these optical components are exposed to a high-power laser beam, it is important to verify whether the AR coating's optical and mechanical properties influence the laser damage threshold. A series of coated samples with variable elastic moduli was manufactured to address this point. For that purpose, a homemade Laser Induced Damage Threshold (LIDT) setup was developed to test the layers under laser flux. We describe the methods used and present preliminary results obtained on several coated samples with variable elastic moduli. We show that, whatever the elastic stiffness of the AR coating, an overall decrease of the threshold appears, with no noticeable effect of the mechanical properties of the AR coatings. Some possible explanations are given.
Lithium battery management system
Dougherty, Thomas J [Waukesha, WI
2012-05-08
Provided is a system for managing a lithium battery system having a plurality of cells. The battery system comprises a variable-resistance element electrically connected to a cell and located proximate a portion of the cell; and a device for determining, utilizing the variable-resistance element, whether the temperature of the cell has exceeded a predetermined threshold. A method of managing the temperature of a lithium battery system is also included.
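A hedged sketch of the general idea (not the patented circuit): a variable-resistance element such as an NTC thermistor is read, converted to a temperature with the Beta equation, and compared against a predetermined threshold; the component values and the threshold are assumed.

```python
# Illustrative sketch (not the patented circuit): inferring cell temperature from
# an NTC thermistor (a variable-resistance element) via the Beta equation and
# flagging cells that exceed a predetermined threshold. Component values are assumed.
import math

R0, T0, BETA = 10_000.0, 298.15, 3950.0      # 10 kOhm at 25 C, assumed Beta value
THRESHOLD_C = 60.0

def ntc_temperature_c(resistance_ohm):
    """Beta-equation estimate of thermistor temperature in Celsius."""
    inv_t = 1.0 / T0 + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

def over_temperature(resistance_ohm, threshold_c=THRESHOLD_C):
    return ntc_temperature_c(resistance_ohm) > threshold_c

for r in (10_000.0, 4_000.0, 2_000.0):       # resistance falls as the cell heats
    t = ntc_temperature_c(r)
    print(f"R={r:7.0f} ohm -> {t:5.1f} C, over threshold: {over_temperature(r)}")
```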
1981-11-16
other is not always well defined. 3.0 CORROSION FATIGUE VARIABLES AND THEIR EFFECTS Corrosion fatigue behavior is governed by a number of variables - environmental...on near threshold fatigue crack growth behavior is primarily a function of environmental reaction in this steel. 3.2 Mechanical Effects Among the...Gallagher and Pao studied the corrosion fatigue behavior of 4340 steel at various frequencies in distilled water and water vapor, respectively
Motor unit behaviour and contractile changes during fatigue in the human first dorsal interosseus
Carpentier, Alain; Duchateau, Jacques; Hainaut, Karl
2001-01-01
In 67 single motor units, the mechanical properties, the recruitment and derecruitment thresholds, and the discharge rates were recorded concurrently in the first dorsal interosseus (FDI) of human subjects during intermittent fatiguing contractions. The task consisted of isometric ramp-and-hold contractions performed at 50% of the maximal voluntary contraction (MVC). The purpose of this study was to examine the influence of fatigue on the behaviour of motor units with a wide range of activation thresholds. For low-threshold (< 25% MVC) motor units, the mean twitch force increased with fatigue and the recruitment threshold either did not change or increased. In contrast, the twitch force and the activation threshold decreased for the high-threshold (> 25% MVC) units. The observation that in low-threshold motor units a quick stretch of the muscle at the end of the test reset the unit force and recruitment threshold to the prefatigue value suggests a significant role for fatigue-related changes in muscle stiffness but not twitch potentiation or motor unit synchronization. Although the central drive intensified during the fatigue test, as indicated by an increase in surface electromyogram (EMG), the discharge rate of the motor units during the hold phase of each contraction decreased progressively over the course of the task for motor units that were recruited at the beginning of the test, especially the low-threshold units. In contrast, the discharge rates of newly activated units first increased and then decreased. Such divergent behaviour of low- and high-threshold motor units could not be individually controlled by the central drive to the motoneurone pool. Rather, the different behaviours must be the consequence of variable contributions from motoneurone adaptation and afferent feedback from the muscle during the fatiguing contraction. PMID:11483719
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
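The profile-likelihood idea behind the a:b model can be illustrated with a short sketch: for each candidate threshold, the infection probabilities below (a) and above (b) are the observed proportions, and the threshold maximizing the binomial log-likelihood is retained; the data below are simulated, not taken from the trials analyzed in the paper.

```python
# Minimal sketch of the profile-likelihood idea behind an a:b threshold model:
# for each candidate threshold, the infection probabilities below (a) and above
# (b) are the observed proportions, and the threshold maximizing the binomial
# log-likelihood is retained. The data are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(2)
titre = rng.lognormal(mean=2.0, sigma=0.8, size=400)           # assay readings
true_thr, a_true, b_true = 10.0, 0.45, 0.08
infected = rng.random(400) < np.where(titre < true_thr, a_true, b_true)

def log_lik(threshold, titre, infected):
    below = titre < threshold
    ll = 0.0
    for mask in (below, ~below):
        n, k = mask.sum(), infected[mask].sum()
        if n == 0:
            continue
        p = min(max(k / n, 1e-9), 1 - 1e-9)                    # MLE proportion
        ll += k * np.log(p) + (n - k) * np.log(1 - p)
    return ll

grid = np.quantile(titre, np.linspace(0.05, 0.95, 181))
profile = [log_lik(t, titre, infected) for t in grid]
print(f"estimated threshold ~ {grid[int(np.argmax(profile))]:.2f} (true {true_thr})")
```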
NASA Astrophysics Data System (ADS)
Webb, N.; Herrick, J.; Duniway, M.
2013-12-01
This work explores how soil erosion assessments can be structured in the context of ecological sites and site dynamics to inform systems for managing accelerated soil erosion. We evaluated wind and water erosion rates for five ecological sites in southern New Mexico, USA, using monitoring data and rangeland-specific wind and water erosion models. Our results show that wind and water erosion can be highly variable within and among ecological sites. Plots in shrub-encroached and shrub-dominated states were consistently susceptible to both wind and water erosion. However, grassland plots and plots with a grass-succulent mix had a high indicated susceptibility to wind and water erosion respectively. Vegetation thresholds for controlling erosion are identified that transcend the ecological sites and their respective states. The thresholds define vegetation cover levels at which rapid (exponential) increases in erosion rates begin to occur, suggesting that erosion in the study ecosystem can be effectively controlled when bare ground cover is <20% of a site or total ground cover is >50%. Similarly, our results show that erosion can be controlled when the cover of canopy interspaces >50 cm in length reaches ~50%, the cover of canopy interspaces >100 cm in length reaches ~35% or the cover of canopy interspaces >150 cm in length reaches ~20%. This process-based understanding can be applied, along with knowledge of the differential sensitivity of vegetation states, to improve erosion management systems. Land use and management activities that alter cover levels such that they cross thresholds, and/or drive vegetation state changes, may increase the susceptibility of sites to erosion. Land use impacts that are constrained within the natural variability of sites should not result in accelerated soil erosion. Evaluating land condition against the erosion thresholds and natural variability of ecological sites will enable improved identification of where and when accelerated soil erosion occurs and the development of practical management solutions.
[The new German general threshold limit value for dust--pro and contra the adoption in Austria].
Godnic-Cvar, Jasminka; Ponocny, Ivo
2004-01-01
Since it has been realised that inhalation of inert dust is one of the important confounding variables for the development of chronic bronchitis, the threshold values for occupational exposure to these dusts need to be further decreased. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK-Commission) set a new threshold (MAK-Value) for inert dusts (4 mg/m3 for inhalable dust, 1.5 mg/m3 for respirable dust) in 1997. This value is much lower than the threshold values currently used world-wide. The aim of the present article is to assess the scientific plausibility of the methodology (databases and statistics) used to set these new German MAK-Values, regarding their adoption in Austria. Although we believe that it is essential to lower the MAK-Value for inert dust in order to prevent the development of chronic bronchitis as a consequence of occupational exposure to inert dusts, the methodology applied by the German MAK-Commission in 1997 to set the new MAK-Values does not justify the reduction of the threshold limit value. A carefully designed study to establish an appropriate scientific basis for setting a new threshold value for inert dusts in the workplace should be carried out. Meanwhile, at least the currently internationally applied threshold values should be adopted in Austria.
Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.
Pires, J C M; Souza, A; Pavão, H G; Martins, F G
2014-09-01
The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on temporal variation of linear correlation and artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also presenting good predictive performance. Solar radiation, relative humidity and rainfall were considered significant for both O3 regimes; however, wind speed (dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation, low relative humidity and low wind speed. This model proved to be a powerful tool for interpreting O3 behaviour and is useful for defining policy strategies for human health protection regarding air pollution.
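To make the two-regime threshold structure concrete, the sketch below splits synthetic observations at a threshold on the regime variable (daily O3 itself) and fits a separate model to each regime; plain least squares stands in for the paper's GA-selected ANN models, and all data and coefficients are made up.

```python
# Simplified sketch of the two-regime threshold idea: observations are split by a
# threshold on the regime variable (here daily O3 itself) and a separate linear
# model is fitted to each regime. The paper used GA-selected ANN models; plain
# least squares and synthetic data are used here only to illustrate the structure.
import numpy as np

rng = np.random.default_rng(3)
n = 365
radiation = rng.uniform(5, 30, n)        # MJ/m2/day
humidity = rng.uniform(20, 95, n)        # %
wind = rng.uniform(0.5, 6.0, n)          # m/s
o3 = 5 + 1.8 * radiation - 0.25 * humidity \
     - np.where(radiation > 20, 2.0 * wind, 0) + rng.normal(0, 3, n)

def fit_regime(mask):
    X = np.column_stack([np.ones(mask.sum()), radiation[mask], humidity[mask], wind[mask]])
    beta, *_ = np.linalg.lstsq(X, o3[mask], rcond=None)
    return beta

threshold = np.median(o3)                # candidate regime threshold on O3
low, high = o3 <= threshold, o3 > threshold
print("low-regime coefficients :", np.round(fit_regime(low), 2))
print("high-regime coefficients:", np.round(fit_regime(high), 2))
```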
Sinex, Donal G.
2013-01-01
Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly-perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
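A minimal sketch of the ideal binary time-frequency mask described above, assuming access to the clean speech and noise signals: TF cells whose local speech-to-noise ratio exceeds a local criterion (for example 0 dB) are kept and all others are zeroed; the synthetic signals and STFT settings are illustrative, and frequency resolution here is set by the window length rather than by bands per octave.

```python
# Sketch of an ideal binary time-frequency mask: TF cells where the local
# speech-to-noise ratio exceeds a local criterion (e.g., 0 dB) are kept, all
# others are zeroed. Signals here are synthetic stand-ins for speech and noise.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
speech = 0.6 * np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
noise = 0.6 * np.random.default_rng(4).standard_normal(t.size)
mixture = speech + noise

def ideal_binary_mask(speech, noise, mixture, fs, criterion_db=0.0, nperseg=512):
    _, _, S = stft(speech, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    _, _, M = stft(mixture, fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = local_snr_db > criterion_db          # 1 = keep the cell, 0 = discard
    _, separated = istft(M * mask, fs, nperseg=nperseg)
    return separated, mask

separated, mask = ideal_binary_mask(speech, noise, mixture, fs)
print(f"fraction of TF cells retained: {mask.mean():.2f}")
```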
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Variable Shadow Screens for Imaging Optical Devices
NASA Technical Reports Server (NTRS)
Lu, Ed; Chretien, Jean L.
2004-01-01
Variable shadow screens have been proposed for reducing the apparent brightnesses of very bright light sources relative to other sources within the fields of view of diverse imaging optical devices, including video and film cameras and optical devices for imaging directly into the human eye. In other words, variable shadow screens would increase the effective dynamic ranges of such devices. Traditionally, imaging sensors are protected against excessive brightness by use of dark filters and/or reduction of iris diameters. These traditional means do not increase dynamic range; they reduce the ability to view or image dimmer features of an image because they reduce the brightness of all parts of an image by the same factor. On the other hand, a variable shadow screen would darken only the excessively bright parts of an image. For example, dim objects in a field of view that included the setting Sun or bright headlights could be seen more readily in a picture taken through a variable shadow screen than in a picture of the same scene taken through a dark filter or a narrowed iris. The figure depicts one of many potential variations of the basic concept of the variable shadow screen. The shadow screen would be a normally transparent liquid-crystal matrix placed in front of a focal-plane array of photodetectors in a charge-coupled-device video camera. The shadow screen would be placed far enough from the focal plane so as not to disrupt the focal-plane image to an unacceptable degree, yet close enough so that the out-of-focus shadows cast by the screen would still be effective in darkening the brightest parts of the image. The image detected by the photodetector array itself would be used as feedback to drive the variable shadow screen: The video output of the camera would be processed by suitable analog and/or digital electronic circuitry to generate a negative partial version of the image to be impressed on the shadow screen. The parts of the shadow screen in front of those parts of the image with brightness below a specified threshold would be left transparent; the parts of the shadow screen in front of those parts of the image where the brightness exceeded the threshold would be darkened by an amount that would increase with the excess above the threshold.
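The feedback law described above can be sketched as an image operation: pixels brighter than a threshold are attenuated in proportion to the excess, while pixels at or below the threshold pass unchanged; the gain law, threshold and image below are illustrative choices, not the proposed hardware.

```python
# Conceptual sketch of the shadow-screen feedback: pixels brighter than a threshold
# are attenuated in proportion to the excess, pixels at or below the threshold are
# left untouched. The gain law and threshold are illustrative choices.
import numpy as np

def shadow_screen(image, threshold=0.8, strength=2.0):
    """Return per-pixel transmission (0..1) for a normalized-brightness image."""
    excess = np.clip(image - threshold, 0.0, None)
    transmission = 1.0 / (1.0 + strength * excess / (1.0 - threshold + 1e-9))
    return transmission

rng = np.random.default_rng(5)
scene = np.clip(rng.normal(0.3, 0.15, (480, 640)), 0, 1)
scene[200:240, 300:340] = 1.0                       # a very bright source (e.g. sun)
screened = scene * shadow_screen(scene)
print(f"max brightness before {scene.max():.2f}, after {screened.max():.2f}")
```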
ARTS III Computer Systems Performance Measurement Prototype Implementation
DOT National Transportation Integrated Search
1974-04-01
Direct measurement of computer systems is of vital importance in: a) developing an intelligent grasp of the variables which affect overall performance; b) tuning the system for optimum benefit; c) determining under what conditions saturation threshold...
Climate change: The 2015 Paris Agreement thresholds and Mediterranean basin ecosystems.
Guiot, Joel; Cramer, Wolfgang
2016-10-28
The United Nations Framework Convention on Climate Change Paris Agreement of December 2015 aims to maintain the global average warming well below 2°C above the preindustrial level. In the Mediterranean basin, recent pollen-based reconstructions of climate and ecosystem variability over the past 10,000 years provide insights regarding the implications of warming thresholds for biodiversity and land-use potential. We compare scenarios of climate-driven future change in land ecosystems with reconstructed ecosystem dynamics during the past 10,000 years. Only a 1.5°C warming scenario permits ecosystems to remain within the Holocene variability. At or above 2°C of warming, climatic change will generate Mediterranean land ecosystem changes that are unmatched in the Holocene, a period characterized by recurring precipitation deficits rather than temperature anomalies. Copyright © 2016, American Association for the Advancement of Science.
Insignificant solar-terrestrial triggering of earthquakes
Love, Jeffrey J.; Thomas, Jeremy N.
2013-01-01
We examine the claim that solar-terrestrial interaction, as measured by sunspots, solar wind velocity, and geomagnetic activity, might play a role in triggering earthquakes. We count the number of earthquakes having magnitudes that exceed chosen thresholds in calendar years, months, and days, and we order these counts by the corresponding rank of annual, monthly, and daily averages of the solar-terrestrial variables. We measure the statistical significance of the difference between the earthquake-number distributions below and above the median of the solar-terrestrial averages by χ2 and Student's t tests. Across a range of earthquake magnitude thresholds, we find no consistent and statistically significant distributional differences. We also introduce time lags between the solar-terrestrial variables and the number of earthquakes, but again no statistically significant distributional difference is found. We cannot reject the null hypothesis of no solar-terrestrial triggering of earthquakes.
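A hedged sketch of the counting test: daily earthquake counts are split by whether the daily average of a solar-terrestrial variable lies below or above its median, and the two count distributions are compared with a chi-squared test; the data here are simulated under the null hypothesis of no triggering.

```python
# Hedged sketch of the counting test: daily earthquake counts are split by whether
# the daily solar-terrestrial average (e.g., sunspot number) is below or above its
# median, and the count distributions are compared with a chi-squared test. Data
# are simulated under the null hypothesis of no triggering.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_days = 5000
sunspots = rng.gamma(shape=2.0, scale=40.0, size=n_days)
quakes = rng.poisson(lam=0.3, size=n_days)            # daily counts above a magnitude threshold

above = sunspots > np.median(sunspots)
max_k = quakes.max()
table = np.array([[np.sum(quakes[~above] == k) for k in range(max_k + 1)],
                  [np.sum(quakes[above] == k) for k in range(max_k + 1)]])
table = table[:, table.sum(axis=0) > 0]               # drop empty count categories
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}  (no triggering expected: p should be large)")
```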
Saturation-state sensitivity of marine bivalve larvae to ocean acidification
NASA Astrophysics Data System (ADS)
Waldbusser, George G.; Hales, Burke; Langdon, Chris J.; Haley, Brian A.; Schrader, Paul; Brunner, Elizabeth L.; Gray, Matthew W.; Miller, Cale A.; Gimenez, Iria
2015-03-01
Ocean acidification results in co-varying inorganic carbon system variables. Of these, an explicit focus on pH and organismal acid-base regulation has failed to distinguish the mechanism of failure in highly sensitive bivalve larvae. With unique chemical manipulations of seawater we show definitively that larval shell development and growth are dependent on seawater saturation state, and not on carbon dioxide partial pressure or pH. Although other physiological processes are affected by pH, mineral saturation state thresholds will be crossed decades to centuries ahead of pH thresholds owing to nonlinear changes in the carbonate system variables as carbon dioxide is added. Our findings were repeatable for two species of bivalve larvae, could resolve discrepancies in experimental results, are consistent with a previous model of ocean acidification impacts due to rapid calcification in bivalve larvae, and suggest a fundamental ocean acidification bottleneck at early life-history for some marine keystone species.
Specific conditions of distress in the dental situation.
Hentschel, U; Allander, L; Winholt, A S
1977-01-01
The general feeling of distress in the dental situation has been studied in 60 female dental patients and correlated to the following variables: Experimentally evaluated sensitivity to pain, self-rating and the dentist's rating of sensitivity to pain, the pain-threshold value in the teeth, the need of local anesthesia, extraversion-introversion, neuroticism, and some percept-genetic psychological measures of adaptive behavior. The subjects have also answered a questionnaire for grading their distress in regard to different aspects of the treatment-situation, which were combined into eight groups using factor analysis and then correlated to the general distress. The variables having a significant relation to distress in the dental situation were: the dentist's rating of the patient's sensitivity, the need of anesthesia, four groups of treatment-components and two of the percept-genetic measures. There was also a certain relation to the pain threshold in the teeth.
Riis, R G C; Gudbergsen, H; Simonsen, O; Henriksen, M; Al-Mashkur, N; Eld, M; Petersen, K K; Kubassova, O; Bay Jensen, A C; Damm, J; Bliddal, H; Arendt-Nielsen, L; Boesen, M
2017-02-01
To investigate the association between magnetic resonance imaging (MRI), macroscopic and histological assessments of synovitis in end-stage knee osteoarthritis (KOA). Synovitis of end-stage osteoarthritic knees was assessed using non-contrast-enhanced MRI, contrast-enhanced MRI (CE-MRI) and dynamic contrast-enhanced (DCE)-MRI prior to total knee replacement (TKR) and correlated with microscopic and macroscopic assessments of synovitis obtained intraoperatively. Multiple bivariate correlations were used with a pre-specified threshold of 0.70 for significance. Also, multiple regression analyses with different subsets of MRI variables as explanatory variables and the histology score as outcome variable were performed with the intention of finding the MRI variables that best explain the variance in histological synovitis (i.e., highest R²). A stepped approach was taken, starting with basic characteristics and non-CE MRI variables (model 1), after which CE-MRI variables were added (model 2), with the final model also including DCE-MRI variables (model 3). 39 patients (56.4% women, mean age 68 years, Kellgren-Lawrence (KL) grade 4) had complete MRI and histological data. Only the DCE-MRI variable MExNvoxel (a surrogate of the volume and degree of synovitis) and the macroscopic score showed correlations above the pre-specified threshold for acceptance with histological inflammation. The maximum R²-value obtained in Model 1 was R² = 0.39. In Model 2, where the CE-MRI variables were added, the highest R² = 0.52. In Model 3, a four-variable model consisting of gender, one CE-MRI and two DCE-MRI variables yielded R² = 0.71. DCE-MRI is correlated with histological synovitis in end-stage KOA and the combination of CE and DCE-MRI may be a useful, non-invasive tool in characterising synovitis in KOA. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
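To illustrate the stepped modelling strategy, the sketch below fits nested ordinary-least-squares models with progressively richer MRI variable sets and compares their R² values; the variable names and the simulated data are placeholders, not the study data.

```python
# Illustrative sketch of the stepped modelling strategy: nested OLS models with
# progressively richer MRI variable sets, compared by R-squared. Variable names
# and the simulated data are placeholders, not the study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 39
age = rng.normal(68, 8, n)
noncontrast_score = rng.normal(5, 2, n)
ce_score = rng.normal(10, 3, n)
dce_mex = rng.normal(2, 0.8, n)
histology = 0.3 * ce_score + 1.5 * dce_mex + rng.normal(0, 1.0, n)

def fit_r2(*columns):
    X = sm.add_constant(np.column_stack(columns))
    return sm.OLS(histology, X).fit().rsquared

print(f"model 1 (basic + non-CE MRI): R2 = {fit_r2(age, noncontrast_score):.2f}")
print(f"model 2 (+ CE-MRI)          : R2 = {fit_r2(age, noncontrast_score, ce_score):.2f}")
print(f"model 3 (+ DCE-MRI)         : R2 = {fit_r2(age, noncontrast_score, ce_score, dce_mex):.2f}")
```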
NASA Astrophysics Data System (ADS)
Raúl Román Fernández, José; Rodríguez-Caballero, Emilio; Chamizo de la Piedra, Sonia; Roncero Ramos, Bea; Cantón Castilla, Yolanda
2017-04-01
Biological soil crusts (biocrusts) are spatially variable components of soil. Whereas biogeographic, climatic or soil properties drive biocrust distribution from regional to global scales, biocrust spatial distribution within the landscape is controlled by topographic forces that create specific microhabitats that promote or hinder biocrust growth. By knowing which variables control biocrust distribution and their individual effects, we can establish the abiotic thresholds that limit natural biocrust colonization in different environments, which may be very useful for designing soil restoration programmes. The objective of this study was to analyse the influence of topographic-related variables on the distribution of different types of biocrust within a semiarid catchment where cyanobacteria- and lichen-dominated biocrusts represent the most important surface components, the El Cautivo experimental area (SE Spain). To do this, natural coverage of i) bare soil, ii) vegetation, iii) cyanobacteria-dominated soil crust and iv) lichen-dominated soil crust was measured on 70 experimental plots distributed across 23 transects (three 4.5 x 4.5 m plots per transect). Following that, we used a 1 m x 1 m DEM (Digital Elevation Model) of the study site obtained from a LiDAR point cloud to calculate different topographic variables such as slope gradient, length-slope (LS) factor (a potential sediment transport index), potential incoming solar radiation, topographic wetness index (WI) and maximum flow accumulation. Canonical Correspondence Analysis was performed to infer the influence of each variable on the coverage of each class, and thresholds of biocrust colonization were identified mathematically by means of linear regression analysis describing the relationship between each factor and biocrust cover. Our results show that the spatial distribution of cyanobacteria-dominated biocrust, which shows physiological and morphological adaptations to cope with drought and UVA radiation, was mostly controlled by incoming solar radiation, being mostly located in areas with high incoming solar radiation and low slope, with a threshold at 48 degrees above which it is not found. Lichen-dominated biocrust, on the other hand, colonizes the uppermost and steepest parts of north-aspect hillslopes where incoming solar radiation and ETP are low, as a consequence of its lower capacity to survive under extreme temperatures and drought conditions. Where the soil has a higher capacity to retain run-on (higher WI), the surface is mostly covered by plants instead of lichens. Bare soil distribution is controlled by the combination of two factors, slope and solar radiation, covering the south-aspect hillslopes, where the slope gradient is high and incoming solar radiation and ETP are too high for lichen colonization.
The interplay between cooperativity and diversity in model threshold ensembles
Cervera, Javier; Manzanares, José A.; Mafe, Salvador
2014-01-01
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516
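A minimal sketch of such a threshold ensemble, assuming a Boltzmann-like single-unit response: each unit has its own threshold potential drawn from a Gaussian distribution, and the ensemble-averaged response is compared for narrow and wide threshold spreads; the response function and parameter values are illustrative, not the paper's exact model.

```python
# Minimal sketch of an ensemble of threshold units with individually different
# threshold potentials drawn from a distribution; the ensemble-averaged response
# curve is compared for narrow and wide threshold spreads. The sigmoidal unit
# response and parameter values are illustrative, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(8)

def ensemble_response(potentials, center=0.0, width=0.05, n_units=1000, kT=0.01):
    """Average response of n_units two-state units with Gaussian-spread thresholds."""
    thresholds = rng.normal(center, width, n_units)
    # Boltzmann-like single-unit response around each unit's own threshold
    response = 1.0 / (1.0 + np.exp(-(potentials[:, None] - thresholds[None, :]) / kT))
    return response.mean(axis=1)

v = np.linspace(-0.2, 0.2, 201)
narrow = ensemble_response(v, width=0.01)
wide = ensemble_response(v, width=0.08)
# The wider the threshold distribution, the shallower the ensemble-averaged curve
print(f"slope at center: narrow ~ {np.gradient(narrow, v)[100]:.1f}, wide ~ {np.gradient(wide, v)[100]:.1f}")
```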
Circuit design advances for ultra-low power sensing platforms
NASA Astrophysics Data System (ADS)
Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis
2010-04-01
This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.
Effect of variable tinted spectacle lenses on visual performance in control subjects.
Lee, Jason E; Stein, Jonathan J; Prevor, Meredith B; Seiple, William H; Holopigian, Karen; Greenstein, Vivienne C; Stenson, Susan M
2002-04-01
To evaluate quantitatively the effects of tinted spectacle lenses on visual performance in individuals without visual pathology. Twenty-five subjects were assessed by measuring contrast sensitivity with and without glare. Gray, brown, yellow, green, purple, and blue lens tints were evaluated. Measurements were repeated with each lens tint and with a clear lens, and the order was counterbalanced within and between subjects. Glare was induced with a modified brightness acuity tester. All subjects demonstrated an increase in contrast thresholds under glare conditions for all lens tints. However, purple and blue lens tints resulted in the least amount of contrast threshold increase; the yellow lens tint resulted in the largest contrast threshold increase. Purple and blue lens tints may improve contrast sensitivity in control subjects under glare conditions.
Dynamics of chromatic visual system processing differ in complexity between children and adults.
Boon, Mei Ying; Suttle, Catherine M; Henry, Bruce I; Dain, Stephen J
2009-06-30
Measures of chromatic contrast sensitivity in children are lower than those of adults. This may be related to immaturities in signal processing at or near threshold. We have found that children's VEPs in response to low contrast supra-threshold chromatic stimuli are more intra-individually variable than those recorded from adults. Here, we report on linear and nonlinear analyses of chromatic VEPs recorded from children and adults. Two measures of signal-to-noise ratio are similar between the adults and children, suggesting that relatively high noise is unlikely to account for the poor clarity of negative and positive peak components in the children's VEPs. Nonlinear analysis indicates higher complexity of adults' than children's chromatic VEPs, at levels of chromatic contrast around and well above threshold.
NASA Astrophysics Data System (ADS)
Hoss, F.; Fischbeck, P. S.
2014-10-01
This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues forecasts daily, this is the first QR application to US river gages. Archived forecasts for lead times up to six days from 2001-2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage in the last 24 and 48 h and the forecast error 24 and 48 h ago to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Combining the forecast with the other four variables results in much less favorable BSSs. Lastly, forecast performance does not depend on the size of the training dataset, but on the year, the river gage, the lead time and the event threshold being forecast. We find that each event threshold requires a separate model configuration or at least calibration.
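A hedged sketch of the post-processing idea using quantile regression from statsmodels: the observed stage is regressed on the raw forecast plus recent rise rates and recent forecast errors at several quantiles; the data are synthetic, and exceedance probabilities of a flood-stage threshold would follow by interpolating across the fitted quantiles.

```python
# Hedged sketch of the post-processing idea: quantile regression of the observed
# stage on the raw forecast plus recent rise rates and recent forecast errors.
# Data are synthetic and the predictor set only approximates the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 2000
forecast = rng.uniform(2, 12, n)                 # raw forecast stage (ft), synthetic
rise_24h = rng.normal(0, 0.5, n)                 # stage rise over the last 24 h
rise_48h = rise_24h + rng.normal(0, 0.3, n)      # stage rise over the last 48 h
err_24h = rng.normal(0, 0.4, n)                  # forecast error 24 h ago
err_48h = rng.normal(0, 0.4, n)                  # forecast error 48 h ago
observed = forecast + 0.8 * rise_24h + 0.5 * err_24h + rng.normal(0, 0.6, n)

# Quantile regression of the observed stage on forecast + auxiliary predictors
X = sm.add_constant(np.column_stack([forecast, rise_24h, rise_48h, err_24h, err_48h]))
model = sm.QuantReg(observed, X)
preds = {q: model.fit(q=q).predict(X) for q in (0.05, 0.5, 0.95)}

# Predicted 5%, 50% and 95% stage quantiles for the first three forecast cases;
# exceedance probabilities of a flood stage follow by interpolating across quantiles.
for q, p in preds.items():
    print(q, np.round(p[:3], 2))
```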
Variable intertidal temperature explains why disease endangers black abalone
Ben-Horin, Tal; Lenihan, Hunter S.; Lafferty, Kevin D.
2013-01-01
Epidemiological theory suggests that pathogens will not cause host extinctions because agents of disease should fade out when the host population is driven below a threshold density. Nevertheless, infectious diseases have threatened species with extinction on local scales by maintaining high incidence and the ability to spread efficiently even as host populations decline. Intertidal black abalone (Haliotis cracherodii), but not other abalone species, went extinct locally throughout much of southern California following the emergence of a Rickettsiales-like pathogen in the mid-1980s. The rickettsial disease, a condition known as withering syndrome (WS), and associated mortality occur at elevated water temperatures. We measured abalone body temperatures in the field and experimentally manipulated intertidal environmental conditions in the laboratory, testing the influence of mean temperature and daily temperature variability on key epizootiological processes of WS. Daily temperature variability increased the susceptibility of black abalone to infection, but disease expression occurred only at warm water temperatures and was independent of temperature variability. These results imply that high thermal variation of the marine intertidal zone allows the pathogen to readily infect black abalone, but infected individuals remain asymptomatic until water temperatures periodically exceed thresholds modulating WS. Mass mortalities can therefore occur before pathogen transmission is limited by density-dependent factors.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
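The check-splitting operation can be illustrated structurally on a base matrix: one check row is split into two checks joined by a new degree-2 variable node; the particular assignment of edges to the two halves below is an arbitrary illustrative choice, and no decoding-threshold search is performed.

```python
# Structural sketch only: splitting a check node of a protograph base matrix into
# two checks joined by a new degree-2 variable node, the operation used to derive
# lower-rate protographs. The edge split below is arbitrary and illustrative;
# no iterative decoding threshold search is performed.
import numpy as np

def split_check(base, check_idx):
    """Split row `check_idx` of a protograph base matrix (checks x variables)."""
    row = base[check_idx]
    half = np.zeros_like(row)
    for j in np.flatnonzero(row):
        half[j] = row[j] // 2                      # arbitrary split of the edges
    rest = row - half
    new_var = np.zeros((base.shape[0] + 1, 1), dtype=base.dtype)
    new_var[check_idx, 0] = 1                      # degree-2 node connects the
    new_var[-1, 0] = 1                             # original and the new check
    out = np.vstack([base, rest[None, :]])
    out[check_idx] = half
    return np.hstack([out, new_var])

proto = np.array([[1, 2, 1, 1],                    # toy high-rate protograph
                  [2, 1, 1, 1]])
lower_rate = split_check(proto, check_idx=0)
print(lower_rate)                                  # one more check and one more variable node
```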
Definition of Pluviometric Thresholds For A Real Time Flood Forecasting System In The Arno Watershed
NASA Astrophysics Data System (ADS)
Amadio, P.; Mancini, M.; Mazzetti, P.; Menduni, G.; Nativi, S.; Rabuffetti, D.; Ravazzani, G.; Rosso, R.
The pluviometric flood forecasting thresholds are an easy method that helps river flood emergency management collecting data from limited area meteorologic model or telemetric raingauges. The thresholds represent the cumulated rainfall depth which generate critic discharge for a particular section. The thresholds were calculated for different sections of Arno river and for different antecedent moisture condition using the flood event distributed hydrologic model FEST. The model inputs were syntethic hietographs with different shape and duration. The system realibility has been verified by generating 500 year syntethic rainfall for 3 important subwatersheds of the studied area. A new technique to consider spatial variability of rainfall and soil properties effects on hydrograph has been investigated. The "Geomorphologic Weights" were so calculated. The alarm system has been implemented in a dedicated software (MIMI) that gets measured and forecast rainfall data from Autorità di Bacino and defines the state of the alert of the river sections.
Rainfall thresholds for possible landslide occurrence in Italy
NASA Astrophysics Data System (ADS)
Peruccacci, Silvia; Brunetti, Maria Teresa; Gariano, Stefano Luigi; Melillo, Massimo; Rossi, Mauro; Guzzetti, Fausto
2017-08-01
The large physiographic variability and the abundance of landslide and rainfall data make Italy an ideal site to investigate variations in the rainfall conditions that can result in rainfall-induced landslides. We used landslide information obtained from multiple sources and rainfall data captured by 2228 rain gauges to build a catalogue of 2309 rainfall events with - mostly shallow - landslides in Italy between January 1996 and February 2014. For each rainfall event with landslides, we reconstructed the rainfall history that presumably caused the slope failure, and we determined the corresponding rainfall duration D (in hours) and cumulated event rainfall E (in mm). Adopting a power law threshold model, we determined cumulated event rainfall-rainfall duration (ED) thresholds, at 5% exceedance probability, and their uncertainty. We defined a new national threshold for Italy, and 26 regional thresholds for environmental subdivisions based on topography, lithology, land-use, land cover, climate, and meteorology, and we used the thresholds to study the variations of the rainfall conditions that can result in landslides in different environments, in Italy. We found that the national and the environmental thresholds cover a small part of the possible DE domain. The finding supports the use of empirical rainfall thresholds for landslide forecasting in Italy, but poses an empirical limitation to the possibility of defining thresholds for small geographical areas. We observed differences between some of the thresholds. With increasing mean annual precipitation (MAP), the thresholds become higher and steeper, indicating that more rainfall is needed to trigger landslides where the MAP is high than where it is low. This suggests that the landscape adjusts to the regional meteorological conditions. We also observed that the thresholds are higher for stronger rocks, and that forested areas require more rainfall than agricultural areas to initiate landslides. Finally, we observed that a 20% exceedance probability national threshold was capable of predicting all the rainfall-induced landslides with casualties between 1996 and 2014, and we suggest that this threshold can be used to forecast fatal rainfall-induced landslides in Italy. We expect the method proposed in this work to define and compare the thresholds to have an impact on the definition of new rainfall thresholds for possible landslide occurrence in Italy, and elsewhere.
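A simplified sketch of fitting an ED threshold at a 5% exceedance probability, under the common assumption of a power law E = αD^γ fitted in log-log space whose intercept is lowered to the 5th percentile of the residuals; the (D, E) pairs below are synthetic stand-ins for real landslide-triggering rainfall events.

```python
# Simplified sketch of fitting a cumulated rainfall-duration (ED) threshold
# E = alpha * D^gamma at a 5% exceedance probability: a power law is fitted in
# log-log space and its intercept is lowered to the 5th percentile of the
# residuals. The synthetic (D, E) pairs stand in for real landslide-triggering events.
import numpy as np

rng = np.random.default_rng(10)
duration = 10 ** rng.uniform(0.5, 2.7, 500)                           # hours
event_rain = 8.0 * duration ** 0.45 * 10 ** rng.normal(0, 0.2, 500)   # mm

logD, logE = np.log10(duration), np.log10(event_rain)
gamma, intercept = np.polyfit(logD, logE, 1)                # central power-law fit
residuals = logE - (intercept + gamma * logD)
intercept_5 = intercept + np.quantile(residuals, 0.05)      # 5% exceedance threshold

alpha_5 = 10 ** intercept_5
print(f"threshold: E = {alpha_5:.1f} * D^{gamma:.2f}  (5% of events lie below it)")
```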
Corneal Mechanical Thresholds Negatively Associate With Dry Eye and Ocular Pain Symptoms.
Spierer, Oriel; Felix, Elizabeth R; McClellan, Allison L; Parel, Jean Marie; Gonzalez, Alex; Feuer, William J; Sarantopoulos, Constantine D; Levitt, Roy C; Ehrmann, Klaus; Galor, Anat
2016-02-01
To examine associations between corneal mechanical thresholds and metrics of dry eye. This was a cross-sectional study of individuals seen in the Miami Veterans Affairs eye clinic. The evaluation consisted of questionnaires regarding dry eye symptoms and ocular pain, corneal mechanical detection and pain thresholds, and a comprehensive ocular surface examination. The main outcome measures were correlations between corneal thresholds and signs and symptoms of dry eye and ocular pain. A total of 129 subjects participated in the study (mean age 64 ± 10 years). Mechanical detection and pain thresholds on the cornea correlated with age (Spearman's ρ = 0.26, 0.23, respectively; both P < 0.05), implying decreased corneal sensitivity with age. Dry eye symptom severity scores and Neuropathic Pain Symptom Inventory (modified for the eye) scores negatively correlated with corneal detection and pain thresholds (range, r = -0.13 to -0.27, P < 0.05 for values between -0.18 and -0.27), suggesting increased corneal sensitivity in those with more severe ocular complaints. Ocular signs, on the other hand, correlated poorly and nonsignificantly with mechanical detection and pain thresholds on the cornea. A multivariable linear regression model found that both posttraumatic stress disorder (PTSD) score (β = 0.21, SE = 0.03) and corneal pain threshold (β = -0.03, SE = 0.01) were significantly associated with self-reported evoked eye pain (pain to wind, light, temperature) and explained approximately 32% of measurement variability (R = 0.57). Mechanical detection and pain thresholds measured on the cornea are correlated with dry eye symptoms and ocular pain. This suggests hypersensitivity within the corneal somatosensory pathways in patients with greater dry eye and ocular pain complaints.
Threshold dose for discrimination of nicotine via cigarette smoking.
Perkins, Kenneth A; Kunkle, Nicole; Karelitz, Joshua L; Michael, Valerie C; Donny, Eric C
2016-06-01
The lowest nicotine threshold "dose" in cigarettes discriminated from a cigarette containing virtually no nicotine may help inform the minimum dose maintaining dependence. Spectrum research cigarettes (from NIDA) differing in nicotine content were used to evaluate a procedure to determine discrimination thresholds. Dependent smokers (n = 18; 13 M, 5 F) were tested on ability to discriminate cigarettes with nicotine contents of 11, 5, 2.4, and 1.3 mg/g, one per session, from the "ultralow" cigarette with 0.4 mg/g, after having discriminated 16 mg/g from 0.4 mg/g (all had 9-10 mg "tar"). Exposure to each was limited to 4 puffs/trial. All subjects were abstinent from smoking overnight prior to each session, and the number of sessions was determined by the participant's success in discrimination behavior on >80 % of trials. Subjective perceptions and behavioral choice between cigarettes were also assessed and related to discrimination behavior. The median threshold was 11 mg/g, but the range was 2.4 to 16 mg/g, suggesting wide variability in discrimination threshold. Compared to the ultralow, puff choice was greater for the subject's threshold dose but only marginal for the subthreshold (next lowest nicotine) cigarette. Threshold and subthreshold also differed on subjective perceptions but not withdrawal relief. Under these testing conditions, threshold content for discriminating nicotine via cigarettes may be 11 mg/g or greater for most smokers, but some can discriminate nicotine contents one-half or one-quarter this amount. Further study with other procedures and cigarette exposure amounts may identify systematic differences in nicotine discrimination thresholds.
NASA Astrophysics Data System (ADS)
Nield, Joanna M.; McKenna Neuman, Cheryl; O'Brien, Patrick; Bryant, Robert G.; Wiggs, Giles F. S.
2016-12-01
Playas (or ephemeral lakes) can be significant sources of dust, but they are typically covered by salt crusts of variable mineralogy and these introduce uncertainty into dust emission predictions. Despite the importance of crust mineralogy to emission potential, little is known about (i) the effect of short-term changes in temperature and relative humidity on the erodibility of these crusts, and (ii) the influence of crust degradation and mineralogy on wind speed threshold for dust emission. Our understanding of systems where emission is not driven by impacts from saltators is particularly poor. This paper describes a wind tunnel study in which dust emission in the absence of saltating particles was measured for a suite of climatic conditions and salt crust types commonly found on Sua Pan, Botswana. The crusts were found to be non-emissive under climate conditions characteristic of dawn and early morning, as compared to hot and dry daytime conditions when the wind speed threshold for dust emission appears to be highly variable, depending upon salt crust physicochemistry. Significantly, sodium sulphate rich crusts were found to be more emissive than crusts formed from sodium chloride, while degraded versions of both crusts had a lower emission threshold than fresh, continuous crusts. The results from this study are in agreement with in-situ field measurements and confirm that dust emission from salt crusted surfaces can occur without saltation, although the vertical fluxes are orders of magnitude lower (∼10 μg/m/s) than for aeolian systems where entrainment is driven by particle impact.
Stress/strain changes and triggered seismicity at The Geysers, California
NASA Astrophysics Data System (ADS)
Gomberg, Joan; Davis, Scott
1996-01-01
The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.
The fragmentation threshold and implications for explosive eruptions
NASA Astrophysics Data System (ADS)
Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.
2003-04-01
The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Alidibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation. Pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high temperature experiments form a curve with less variability than the low temperature experiments. For this reason, we have chosen to model the high temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample. For this reason, the threshold values can be more accurately predicted by the matrix fraction multiplied by the tensile strength, divided by the porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal contents are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or amount of overpressure necessary to produce an explosive eruption.
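As a rough illustration of the relation described above, the sketch below evaluates the threshold estimate (matrix fraction multiplied by tensile strength, divided by porosity) in Python; the 140 MPa tensile strength is the value quoted in the abstract, while the function name and example inputs are hypothetical and not taken from the study.

```python
# Minimal sketch of the porosity-based fragmentation-threshold estimate
# described above. The 140 MPa tensile strength comes from the abstract;
# the function name and example inputs are illustrative assumptions.

def fragmentation_threshold(porosity, matrix_fraction=1.0, tensile_strength_mpa=140.0):
    """Estimate the fragmentation threshold (MPa) as
    matrix_fraction * tensile_strength / porosity."""
    if not 0.0 < porosity <= 1.0:
        raise ValueError("porosity must be a fraction in (0, 1]")
    return matrix_fraction * tensile_strength_mpa / porosity

# A hypothetical dome rock with 25% porosity and 80% matrix (20% phenocrysts)
print(fragmentation_threshold(porosity=0.25, matrix_fraction=0.8))  # 0.8 * 140 / 0.25 = 448.0
```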
Computer system performance measurement techniques for ARTS III computer systems.
DOT National Transportation Integrated Search
1973-12-01
Direct measurement of computer systems is of vital importance in: a) developing an intelligent grasp of the variables which affect overall performance; b) tuning the system for optimum benefit; c) determining under what conditions saturation thresholds...
Getting the message across: using ecological integrity to communicate with resource managers
Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.
2014-01-01
This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huajun; Dong, Yongqi; Cherukara, Matthew J.
Memristive devices are an emerging technology that enables both rich interdisciplinary science and novel device functionalities, such as nonvolatile memories and nanoionics-based synaptic electronics. Recent work has shown that the reproducibility and variability of the devices depend sensitively on the defect structures created during electroforming as well as their continued evolution under dynamic electric fields. However, a fundamental principle guiding the material design of defect structures is still lacking due to the difficulty in understanding dynamic defect behavior under different resistance states. Here, we unravel the existence of threshold behavior by studying model, single-crystal devices: resistive switching requires that the pristine oxygen vacancy concentration reside near a critical value. Theoretical calculations show that the threshold oxygen vacancy concentration lies at the boundary for both electronic and atomic phase transitions. Through operando, multimodal X-ray imaging, we show that field tuning of the local oxygen vacancy concentration below or above the threshold value is responsible for switching between different electrical states. These results provide a general strategy for designing functional defect structures around threshold concentrations to create dynamic, field-controlled phases for memristive devices.
Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill
2014-01-01
Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (9 per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set (p=0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013
Temporal clustering of floods in Germany: Do flood-rich and flood-poor periods exist?
NASA Astrophysics Data System (ADS)
Merz, Bruno; Nguyen, Viet Dung; Vorogushyn, Sergiy
2016-10-01
The repeated occurrence of exceptional floods within a few years, such as the Rhine floods in 1993 and 1995 and the Elbe and Danube floods in 2002 and 2013, suggests that floods in Central Europe may be organized in flood-rich and flood-poor periods. This hypothesis is studied by testing the significance of temporal clustering in flood occurrence (peak-over-threshold) time series for 68 catchments across Germany for the period 1932-2005. To assess the robustness of the results, different methods are used: Firstly, the index of dispersion, which quantifies the departure from a homogeneous Poisson process, is investigated. Further, the time variation of the flood occurrence rate is derived using a non-parametric kernel estimator, and the significance of clustering is evaluated via parametric and non-parametric tests. Although the methods give consistent overall results, the specific results differ considerably. Hence, we recommend applying different methods when investigating flood clustering. For flood estimation and risk management, it is of relevance to understand whether clustering changes with flood severity and time scale. To this end, clustering is assessed for different thresholds and time scales. It is found that the majority of catchments show temporal clustering at the 5% significance level for low thresholds and time scales of one to a few years. However, clustering decreases substantially with increasing threshold and time scale. We hypothesize that flood clustering in Germany is mainly caused by catchment memory effects along with intra- to inter-annual climate variability, and that decadal climate variability plays a minor role.
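As an illustration of the first of these tests, the following minimal Python sketch computes the index of dispersion (variance-to-mean ratio) of annual peak-over-threshold counts; the event years and function name are invented for illustration and are not code from the study.

```python
import numpy as np

def index_of_dispersion(event_years, start_year, end_year):
    """Variance-to-mean ratio of annual peak-over-threshold counts.
    Values near 1 are consistent with a homogeneous Poisson process;
    values well above 1 indicate temporal clustering."""
    years = np.arange(start_year, end_year + 1)
    counts = np.array([(np.asarray(event_years) == y).sum() for y in years])
    return counts.var(ddof=1) / counts.mean()

# Hypothetical exceedance years for one catchment
events = [1932, 1940, 1941, 1941, 1954, 1955, 1993, 1995, 2002, 2002]
print(index_of_dispersion(events, 1932, 2005))
```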
Motor control theories and their applications.
Latash, Mark L; Levin, Mindy F; Scholz, John P; Schöner, Gregor
2010-01-01
We describe several influential hypotheses in the field of motor control including the equilibrium-point (referent configuration) hypothesis, the uncontrolled manifold hypothesis, and the idea of synergies based on the principle of motor abundance. The equilibrium-point hypothesis is based on the idea of control with thresholds for activation of neuronal pools; it provides a framework for analysis of both voluntary and involuntary movements. In particular, control of a single muscle can be adequately described with changes in the threshold of motor unit recruitment during slow muscle stretch (threshold of the tonic stretch reflex). Unlike the ideas of internal models, the equilibrium-point hypothesis does not assume neural computations of mechanical variables. The uncontrolled manifold hypothesis is based on the dynamic system approach to movements; it offers a toolbox to analyze synergic changes within redundant sets of elements related to stabilization of potentially important performance variables. The referent configuration hypothesis and the principle of abundance can be naturally combined into a single coherent scheme of control of multi-element systems. A body of experimental data on healthy persons and patients with movement disorders is reviewed in support of the mentioned hypotheses. In particular, movement disorders associated with spasticity are considered as consequences of an impaired ability to shift the threshold of the tonic stretch reflex within the whole normal range. Technical details and applications of the mentioned hypotheses to studies of motor learning are described. We view the mentioned hypotheses as the most promising ones in the field of motor control, based on a solid physical and neurophysiological foundation.
Modeling Source Water Threshold Exceedances with Extreme Value Theory
NASA Astrophysics Data System (ADS)
Rajagopalan, B.; Samson, C.; Summers, R. S.
2016-12-01
Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular, levels of organic matter in surface waters which are of concern. All of these will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. In this, the threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, thus enabling the model to capture temporal nonstationarity. We apply these to model threshold exceedance of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
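A stationary, covariate-free version of this threshold-exceedance model can be sketched with scipy's Generalized Pareto Distribution; the study's nonstationary formulation, in which the GPD parameters depend on climate and land surface covariates, is not reproduced here, and the TOC record below is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
toc = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # hypothetical TOC record, mg/L
threshold = np.quantile(toc, 0.95)                   # high threshold for the POT analysis
excesses = toc[toc > threshold] - threshold

# Fit a Generalized Pareto Distribution to the excesses (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# Exceedance probability of a TOC level of interest, conditional on an exceedance
level = threshold + 2.0
p = stats.genpareto.sf(level - threshold, shape, loc=0, scale=scale)
print(shape, scale, p)
```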
Digital music exposure reliably induces temporary threshold shift in normal-hearing human subjects.
Le Prell, Colleen G; Dell, Shawna; Hensley, Brittany; Hall, James W; Campbell, Kathleen C M; Antonelli, Patrick J; Green, Glenn E; Miller, James M; Guire, Kenneth
2012-01-01
One of the challenges for evaluating new otoprotective agents for potential benefit in human populations is the availability of an established clinical paradigm with real-world relevance. These studies were explicitly designed to develop a real-world digital music exposure that reliably induces temporary threshold shift (TTS) in normal-hearing human subjects. Thirty-three subjects participated in studies that measured effects of digital music player use on hearing. Subjects selected either rock or pop music, which was then presented at 93 to 95 (n = 10), 98 to 100 (n = 11), or 100 to 102 (n = 12) dBA in-ear exposure level for a period of 4 hr. Audiograms and distortion product otoacoustic emissions (DPOAEs) were measured before and after music exposure. Postmusic tests were initiated 15 min, 1 hr 15 min, 2 hr 15 min, and 3 hr 15 min after the exposure ended. Additional tests were conducted the following day and 1 week later. Changes in thresholds after the lowest-level exposure were difficult to distinguish from test-retest variability; however, TTS was reliably detected after higher levels of sound exposure. Changes in audiometric thresholds had a "notch" configuration, with the largest changes observed at 4 kHz (mean = 6.3 ± 3.9 dB; range = 0-14 dB). Recovery was largely complete within the first 4 hr postexposure, and all subjects showed complete recovery of both thresholds and DPOAE measures when tested 1 week postexposure. These data provide insight into the variability of TTS induced by music-player use in a healthy, normal-hearing, young adult population, with music playlist, level, and duration carefully controlled. These data confirm the likelihood of temporary changes in auditory function after digital music-player use. Such data are essential for the development of a human clinical trial protocol that provides a highly powered design for evaluating novel therapeutics in human clinical trials. Care must be taken to fully inform potential subjects in future TTS studies, including protective agent evaluations, that some noise exposures have resulted in neural degeneration in animal models, even when both audiometric thresholds and DPOAE levels returned to pre-exposure values.
Elliott, Grant P
2012-07-01
Given the widespread and often dramatic influence of climate change on terrestrial ecosystems, it is increasingly common for abrupt threshold changes to occur, yet explicitly testing for climate and ecological regime shifts is lacking in climatically sensitive upper treeline ecotones. In this study, quantitative evidence based on empirical data is provided to support the key role of extrinsic, climate-induced thresholds in governing the spatial and temporal patterns of tree establishment in these high-elevation environments. Dendroecological techniques were used to reconstruct a 420-year history of regeneration dynamics within upper treeline ecotones along a latitudinal gradient (approximately 44-35 degrees N) in the Rocky Mountains. Correlation analysis was used to assess the possible influence of minimum and maximum temperature indices and cool-season (November-April) precipitation on regional age-structure data. Regime-shift analysis was used to detect thresholds in tree establishment during the entire period of record (1580-2000), temperature variables significantly correlated with establishment during the 20th century, and cool-season precipitation. Tree establishment was significantly correlated with minimum temperature during the spring (March-May) and cool season. Regime-shift analysis identified an abrupt increase in regional tree establishment in 1950 (1950-1954 age class). Coincident with this period was a shift toward reduced cool-season precipitation. The alignment of these climate conditions apparently triggered an abrupt increase in establishment that was unprecedented during the period of record. Two main findings emerge from this research that underscore the critical role of climate in governing regeneration dynamics within upper treeline ecotones. (1) Regional climate variability is capable of exceeding bioclimatic thresholds, thereby initiating synchronous and abrupt changes in the spatial and temporal patterns of tree establishment at broad regional scales. (2) The importance of climate parameters exceeding critical threshold values and triggering a regime shift in tree establishment appears to be contingent on the alignment of favorable temperature and moisture regimes. This research suggests that threshold changes in the climate system can fundamentally alter regeneration dynamics within upper treeline ecotones and, through the use of regime-shift analysis, reveals important climate-vegetation linkages.
Bryant, M; Santorelli, G; Lawlor, D A; Farrar, D; Tuffnell, D; Bhopal, R; Wright, J
2014-03-01
To describe how maternal obesity prevalence varies by established international and South Asian specific body mass index (BMI) cut-offs in women of Pakistani origin and investigate whether different BMI thresholds can help to identify women at risk of adverse pregnancy and birth outcomes. Prospective bi-ethnic birth cohort study (the Born in Bradford (BiB) cohort). Bradford, a deprived city in the North of the UK. A total of 8478 South Asian and White British pregnant women participated in the BiB cohort study. Maternal obesity prevalence; prevalence of known obesity-related adverse pregnancy outcomes: mode of birth, hypertensive disorders of pregnancy (HDP), gestational diabetes, macrosomia and pre-term births. Application of South Asian BMI cut-offs increased prevalence of obesity in Pakistani women from 18.8% (95% confidence interval (CI) 17.6-19.9) to 30.9% (95% CI 29.5-32.2). With the exception of pre-term births, there was a positive linear relationship between BMI and prevalence of adverse pregnancy and birth outcomes, across almost the whole BMI distribution. Risk of gestational diabetes and HDP increased more sharply in Pakistani women after a BMI threshold of at least 30 kg m⁻², but there was no evidence of a sharp increase in any risk factors at the new, lower thresholds suggested for use in South Asian women. BMI was a good single predictor of outcomes (area under the receiver operating characteristic curve: 0.596-0.685 for different outcomes); prediction was more discriminatory and accurate with BMI as a continuous variable than as a binary variable for any possible cut-off point. Applying the new South Asian threshold to pregnant women would markedly increase the number who were referred for monitoring and lifestyle advice. However, our results suggest that lowering the BMI threshold in South Asian women would not improve the predictive ability for identifying those who are at risk of adverse pregnancy outcomes.
Herlitz, Georg N; Arlow, Renee L; Cheung, Nora H; Coyle, Susette M; Griffel, Benjamin; Macor, Marie A; Lowry, Stephen F; Calvano, Steve E; Gale, Stephen C
2015-02-01
Human injury or infection induces systemic inflammation with characteristic neuroendocrine responses. Fluctuations in autonomic function during inflammation are reflected by beat-to-beat variation in heart rate, termed heart rate variability (HRV). In the present study, we determine threshold doses of endotoxin needed to induce observable changes in markers of systemic inflammation, investigate whether metrics of HRV exhibit a differing threshold dose from other inflammatory markers, and investigate the size of data sets required for meaningful use of multiscale entropy (MSE) analysis of HRV. Healthy human volunteers (n = 25) were randomized to receive placebo (normal saline) or endotoxin/lipopolysaccharide (LPS): 0.1, 0.25, 0.5, 1.0, or 2.0 ng/kg administered intravenously. Vital signs were recorded every 30 min for 6 h and then at 9, 12, and 24 h after LPS. Blood samples were drawn at specific time points for cytokine measurements. Heart rate variability analysis was performed using electrocardiogram epochs of 5 min. Multiscale entropy for HRV was calculated for all dose groups to scale factor 40. The lowest significant threshold dose was noted in core temperature at 0.25 ng/kg. Endogenous tumor necrosis factor α and interleukin 6 were significantly responsive at the next dosage level (0.5 ng/kg) along with elevations in circulating leukocytes and heart rate. Responses were exaggerated at higher doses (1 and 2 ng/kg). Time domain and frequency domain HRV metrics similarly suggested a threshold dose, differing from placebo at 1.0 and 2.0 ng/kg, below which no clear pattern in response was evident. By applying repeated-measures analysis of variance across scale factors, a significant decrease in MSE was seen at 1.0 and 2.0 ng/kg by 2 h after exposure to LPS. Although not statistically significant below 1.0 ng/kg, MSE unexpectedly decreased across all groups in an orderly dose-response pattern not seen in the other outcomes. By using repeated-measures analysis of variance across scale factors, MSE can detect autonomic change after LPS challenge in a group of 25 subjects using electrocardiogram epochs of only 5 min and entropy analysis to scale factor of only 40, potentially facilitating MSE's wider use as a research tool or bedside monitor. Traditional markers of inflammation generally exhibit threshold dose behavior. In contrast, MSE's apparent continuous dose-response pattern, although not statistically verifiable in this study, suggests a potential subclinical harbinger of infectious or other insult. The possible derangement of autonomic complexity prior to or independent of the cytokine surge cannot be ruled out. Future investigation should focus on confirmation of overt inflammation following observed decreases in MSE in a clinical setting.
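For readers unfamiliar with the MSE calculation referenced above, the sketch below shows the standard coarse-graining plus sample-entropy recipe in Python; the template length m = 2 and tolerance r = 0.15 × SD are common defaults rather than necessarily the parameters used in this study, and the RR-interval series is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.05):
    """Sample entropy of a 1-D series (template length m, tolerance r)."""
    x = np.asarray(x, dtype=float)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (dists <= r).sum() - len(templates)    # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=40, m=2):
    """MSE: sample entropy of coarse-grained copies of x at scales 1..max_scale."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()                                # tolerance fixed from the original series
    curve = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        curve.append(sample_entropy(coarse, m, r))
    return np.array(curve)

rr = np.random.default_rng(0).normal(0.8, 0.05, 300)  # synthetic 5-min RR-interval series (s)
print(multiscale_entropy(rr, max_scale=5))
```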
Towards developing drought impact functions to advance drought monitoring and early warning
NASA Astrophysics Data System (ADS)
Bachmair, Sophie; Stahl, Kerstin; Hannaford, Jamie; Svoboda, Mark
2015-04-01
In natural hazard analysis, damage functions (also referred to as vulnerability or susceptibility functions) relate hazard intensity to the negative effects of the hazard event, often expressed as damage ratio or monetary loss. While damage functions for floods and seismic hazards have gained considerable attention, there is little knowledge on how drought intensity translates into ecological and socioeconomic impacts. One reason for this is the multifaceted nature of drought affecting different domains of the hydrological cycle and different sectors of human activity (for example, recognizing meteorological - agricultural - hydrological - socioeconomic drought) leading to a wide range of drought impacts. Moreover, drought impacts are often non-structural and hard to quantify or monetarize (e.g. impaired navigability of streams, bans on domestic water use, increased mortality of aquatic species). Knowledge on the relationship between drought intensity and drought impacts, i.e. negative environmental, economic or social effects experienced under drought conditions, however, is vital to identify critical thresholds for drought impact occurrence. Such information may help to improve drought monitoring and early warning (M&EW), one goal of the international DrIVER project (Drought Impacts: Vulnerability thresholds in monitoring and Early-warning Research). The aim of this study is to test the feasibility of designing "drought impact functions" for case study areas in Europe (Germany and UK) and the United States to derive thresholds meaningful for drought impact occurrence; to account for the multidimensionality of drought impacts, we use the broader term "drought impact function" over "damage function". First steps towards developing empirical drought impact functions are (1) to identify meaningful indicators characterizing the hazard intensity (e.g. indicators expressing a precipitation or streamflow deficit), (2) to identify suitable variables representing impacts, damage, or loss due to drought, and (3) to test different statistical models to link drought intensity with drought impact information to derive meaningful thresholds. While the focus regarding drought impact variables lies on text-based impact reports from the European Drought Impact report Inventory (EDII) and the US Drought Impact Reporter (DIR), the information gain through exploiting other variables such as agricultural yield statistics and remotely sensed vegetation indices is explored. First results reveal interesting insights into the complex relationship between drought indicators and impacts and highlight differences among drought impact variables and geographies. Although a simple intensity threshold evoking specific drought impacts cannot be identified, developing drought impact functions helps to elucidate how drought conditions relate to ecological or socioeconomic impacts. Such knowledge may provide guidance for inferring meaningful triggers for drought M&EW and could have potential for a wide range of drought management applications (for example, building drought scenarios for testing the resilience of drought plans or water supply systems).
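As a toy illustration of step (3), the sketch below links a synthetic drought indicator to synthetic impact reports with a logistic regression and reads off the indicator value at which the probability of an impact report crosses 50%; the indicator, the impact series, and the 50% cut-off are all invented for illustration and do not represent the DrIVER analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Hypothetical monthly drought indicator (a standardized precipitation-type index)
spi = rng.normal(0.0, 1.0, 240)
# Hypothetical impact occurrence: reports become more likely as the index turns negative
impact_reported = rng.random(240) < 1 / (1 + np.exp(-(-2.0 - 2.5 * spi)))

model = LogisticRegression().fit(spi.reshape(-1, 1), impact_reported)

# "Impact function": probability of an impact report versus drought intensity,
# and the indicator value at which that probability crosses 50%
grid = np.linspace(-3, 3, 121).reshape(-1, 1)
prob = model.predict_proba(grid)[:, 1]
threshold_spi = grid[np.argmin(np.abs(prob - 0.5)), 0]
print(threshold_spi)
```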
Cohen, S L; Richardson, J; Klebez, J; Febbo, S; Tucker, D
2001-09-01
Biofeedback was used to increase forearm-muscle tension. Feedback was delivered under continuous reinforcement (CRF), variable interval (VI), fixed interval (FI), variable ratio (VR), and fixed ratio (FR) schedules of reinforcement when college students increased their muscle tension (electromyograph, EMG) above a high threshold. There were three daily sessions of feedback, and Session 3 was immediately followed by a session without feedback (extinction). The CRF schedule resulted in the highest EMG, closely followed by the FR and VR schedules, and the lowest EMG scores were produced by the FI and VI schedules. Similarly, the CRF schedule resulted in the greatest amount of time-above-threshold and the VI and FI schedules produced the lowest time-above-threshold. The highest response rates were generated by the FR schedule, followed by the VR schedule. The CRF schedule produced relatively low response rates, comparable to the rates under the VI and FI schedules. Some of the data are consistent with the partial-reinforcement-extinction effect. The present data suggest that different schedules of feedback should be considered in muscle-strengthening contexts such as during the rehabilitation of muscles following brain damage or peripheral nervous system injury.
Warner, Kelly L.; Arnold, Terri L.
2010-01-01
Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.
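A toy version of one such logistic regression model is sketched below; the predictor names mirror the kinds of variables listed above, but the data, coefficients, and the 4 mg/L threshold example are purely illustrative and not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
# Hypothetical stand-ins for the kinds of predictors described above
depth_to_open_interval = rng.uniform(5, 120, n)       # metres below land surface
mean_pct_silt = rng.uniform(10, 60, n)
fertilizer_rate = rng.uniform(0, 200, n)              # kg N / ha / yr

# Simulate whether each well exceeds 4 mg/L as N under an assumed relationship
logit = -1.0 - 0.03 * depth_to_open_interval - 0.02 * mean_pct_silt + 0.02 * fertilizer_rate
exceeds_4mgL = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([depth_to_open_interval, mean_pct_silt, fertilizer_rate])
model = LogisticRegression().fit(X, exceeds_4mgL)

# Probability that a shallow well in high-fertilizer farmland exceeds 4 mg/L as N
print(model.predict_proba([[10.0, 20.0, 180.0]])[0, 1])
```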
Polarization squeezing of light by single passage through an atomic vapor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barreiro, S.; Valente, P.; Failache, H.
We have studied relative-intensity fluctuations for a variable set of orthogonal elliptic polarization components of a linearly polarized laser beam traversing a resonant ⁸⁷Rb vapor cell. Significant polarization squeezing at the threshold level (−3 dB) required for the implementation of several continuous-variable quantum protocols was observed. The extreme simplicity of the setup, which is based on standard polarization components, makes it particularly convenient for quantum information applications.
Stoops, Janelle; Byrd, Samantha; Hasegawa, Haruki
2012-10-01
Russell bodies are intracellular aggregates of immunoglobulins. Although the mechanism of Russell body biogenesis has been extensively studied by using truncated mutant heavy chains, the importance of the variable domain sequences in this process and in immunoglobulin biosynthesis remains largely unknown. Using a panel of structurally and functionally normal human immunoglobulin Gs, we show that individual immunoglobulin G clones possess distinctive Russell body inducing propensities that can surface differently under normal and abnormal cellular conditions. Russell body inducing predisposition unique to each immunoglobulin G clone was corroborated by the intrinsic physicochemical properties encoded in the heavy chain variable domain/light chain variable domain sequence combinations that define each immunoglobulin G clone. While the sequence based intrinsic factors predispose certain immunoglobulin G clones to be more prone to induce Russell bodies, extrinsic factors such as stressful cell culture conditions also play roles in unmasking Russell body propensity from immunoglobulin G clones that are normally refractory to developing Russell bodies. By taking advantage of heterologous expression systems, we dissected the roles of individual subunit chains in Russell body formation and examined the effect of non-cognate subunit chain pair co-expression on Russell body forming propensity. The results suggest that the properties embedded in the variable domain of individual light chain clones and their compatibility with the partnering heavy chain variable domain sequences underscore the efficiency of immunoglobulin G biosynthesis, the threshold for Russell body induction, and the level of immunoglobulin G secretion. We propose that an interplay between the unique properties encoded in variable domain sequences and the state of protein homeostasis determines whether an immunoglobulin G expressing cell will develop the Russell body phenotype in a dynamic cellular setting. Copyright © 2012 Elsevier B.V. All rights reserved.
Alberton, C L; Kanitz, A C; Pinto, S S; Antunes, A H; Finatto, P; Cadore, E L; Kruel, L F M
2013-08-01
The aim of this study was to compare the cardiorespiratory variables corresponding to the anaerobic threshold (AT) between different water-based exercises using two methods of determining the AT, the heart rate deflection point and the ventilatory method, and to correlate the variables obtained with the two methods. Twenty young women performed three exercise sessions in the water. Maximal tests were performed for the water-based exercises stationary running, frontal kick, and cross-country skiing. The protocol started at a rate of 80 cycles per minute (cycles·min⁻¹) for 2 min with subsequent increments of 10 cycles·min⁻¹ every minute until exhaustion, with heart rate, oxygen uptake, and ventilation measured throughout the test. Afterwards, the two methods were used to determine the values of these variables corresponding to the AT for each of the exercises. Comparisons were made using two-way ANOVA for repeated measures with Bonferroni's post hoc test. To correlate the same variables determined by the two methods, the intra-class correlation coefficient (ICC) was used. For all the variables, no significant differences were found between the methods of determining the AT or between the three exercises. Moreover, the ICC values of each variable determined by the two methods were high and significant. The heart rate deflection point can therefore be used as a simple and practical method of determining the AT when prescribing these exercises. In addition, these cardiorespiratory parameters may be determined by performing the test with only one of the evaluated exercises, since there were no differences in the evaluated variables.
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set, spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of these observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series for each time interval over which the threshold is exceeded. The distribution of burst sizes is two-component, with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has a functional form which is exponential. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables changes between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently, how the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277. [2] Chapman, S. C., D. A. Stainforth, N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
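The burst construction described above can be sketched in a few lines of Python; the AE series below is synthetic, while the 1000 nT threshold echoes the crossover value quoted in the abstract.

```python
import numpy as np

def burst_sizes(series, threshold):
    """Sizes of bursts in a time series: a burst is a run of consecutive
    samples above `threshold`; its size is the summed excess over the run."""
    sizes, current = [], 0.0
    for value in np.asarray(series, dtype=float):
        if value > threshold:
            current += value - threshold
        elif current > 0.0:
            sizes.append(current)
            current = 0.0
    if current > 0.0:
        sizes.append(current)
    return np.array(sizes)

ae = np.abs(np.random.default_rng(3).normal(300, 400, 10_000))   # hypothetical AE index (nT)
print(burst_sizes(ae, threshold=1000).mean())
```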
Geneletti, Sara; O'Keeffe, Aidan G; Sharples, Linda D; Richardson, Sylvia; Baio, Gianluca
2015-07-10
The regression discontinuity (RD) design is a quasi-experimental design that estimates the causal effects of a treatment by exploiting naturally occurring treatment rules. It can be applied in any context where a particular treatment or intervention is administered according to a pre-specified rule linked to a continuous variable. Such thresholds are common in primary care drug prescription where the RD design can be used to estimate the causal effect of medication in the general population. Such results can then be contrasted to those obtained from randomised controlled trials (RCTs) and inform prescription policy and guidelines based on a more realistic and less expensive context. In this paper, we focus on statins, a class of cholesterol-lowering drugs; however, the methodology can be applied to many other drugs provided these are prescribed in accordance with pre-determined guidelines. Current guidelines in the UK state that statins should be prescribed to patients with 10-year cardiovascular disease risk scores in excess of 20%. If we consider patients whose risk scores are close to the 20% risk score threshold, we find that there is an element of random variation in both the risk score itself and its measurement. We can therefore consider the threshold as a randomising device that assigns statin prescription to individuals just above the threshold and withholds it from those just below. Thus, we are effectively replicating the conditions of an RCT in the area around the threshold, removing or at least mitigating confounding. We frame the RD design in the language of conditional independence, which clarifies the assumptions necessary to apply an RD design to data, and which makes the links with instrumental variables clear. We also have context-specific knowledge about the expected sizes of the effects of statin prescription and are thus able to incorporate this into Bayesian models by formulating informative priors on our causal parameters. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
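A minimal sharp-RD estimate around the 20% risk-score threshold might look like the following sketch; the data are simulated, the bandwidth and outcome are arbitrary, and the Bayesian formulation with informative priors used in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
risk = rng.uniform(0.10, 0.30, 2000)                   # 10-year CVD risk score
treated = risk >= 0.20                                 # statin prescription rule
# Simulated outcome with a true treatment effect of -1.5 at the threshold
ldl_change = -0.2 - 1.5 * treated + 2.0 * (risk - 0.20) + rng.normal(0, 0.5, risk.size)

h = 0.05                                               # bandwidth around the threshold
window = np.abs(risk - 0.20) <= h

def local_fit(mask):
    # Straight-line fit of the outcome on the centred risk score within the window
    slope, intercept = np.polyfit(risk[mask] - 0.20, ldl_change[mask], 1)
    return intercept                                   # fitted value at the threshold

rd_effect = local_fit(window & treated) - local_fit(window & ~treated)
print(rd_effect)                                       # should be close to -1.5
```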
Lead exposure potentiates predatory attack behavior in the cat.
Li, Wenjie; Han, Shenggao; Gregg, Thomas R; Kemp, Francis W; Davidow, Amy L; Louria, Donald B; Siegel, Allan; Bogden, John D
2003-07-01
Epidemiologic studies have demonstrated that environmental lead exposure is associated with aggressive behavior in children; however, numerous confounding variables limit the ability of these studies to establish a causal relationship. The study of aggressive behavior using a validated animal model was used to test the hypothesis that there is a causal relationship between lead exposure and aggression in the absence of confounding variables. We studied the effects of lead exposure on a feline model of aggression: predatory (quiet biting) attack of an anesthetized rat. Five cats were stimulated with a precisely controlled electrical current via electrodes inserted into the lateral hypothalamus. The response measure was the predatory attack threshold current (i.e., the current required to elicit an attack response on 50% of the trials). Blocks of trials were administered in which predatory attack threshold currents were measured three times a week for a total of 6-10 weeks, including before, during, and after lead exposure. Lead was incorporated into cat food "treats" at doses of 50-150 mg/kg/day. Two of the five cats received a second period of lead exposure. Blood lead concentrations were measured twice a week and were <1, 21-77, and <20 µg/dL prior to, during, and after lead exposure, respectively. The predatory attack threshold decreased significantly during initial lead exposure in three of five cats and increased after the cessation of lead exposure in four of the five cats (P<0.01). The predatory attack thresholds and blood lead concentrations for each cat were inversely correlated (r=-0.35 to -0.74). A random-effects mixed model demonstrated a significant (P=0.0019) negative association between threshold current and blood lead concentration. The data of this study demonstrate that lead exposure enhances predatory aggression in the cat and provide experimental support for a causal relationship between lead exposure and aggressive behavior in humans.
Seabirds as indicators of marine food supplies: Cairns revisited
Piatt, John F.; Harding, Ann M.A.; Shultz, Michael T.; Speckman, Suzann G.; van Pelt, Thomas I.; Drew, Gary S.; Kettle, Arthur B.
2007-01-01
In his seminal paper about using seabirds as indicators of marine food supplies, Cairns (1987, Biol Oceanogr 5:261–271) predicted that (1) parameters of seabird biology and behavior would vary in curvilinear fashion with changes in food supply, (2) the threshold of prey density over which birds responded would be different for each parameter, and (3) different seabird species would respond differently to variation in food availability depending on foraging behavior and ability to adjust time budgets. We tested these predictions using data collected at colonies of common murre Uria aalge and black-legged kittiwake Rissa tridactyla in Cook Inlet, Alaska. (1) Of 22 seabird responses fitted with linear and non-linear functions, 16 responses exhibited significant curvilinear shapes, and Akaike’s information criterion (AIC) analysis indicated that curvilinear functions provided the best-fitting model for 12 of those. (2) However, there were few differences among parameters in their threshold to prey density, presumably because most responses ultimately depend upon a single threshold for prey acquisition at sea. (3) There were similarities and some differences in how species responded to variability in prey density. Both murres and kittiwakes minimized variability (CV < 15%) in their own body condition and growth of chicks in the face of high annual variability (CV = 69%) in local prey density. Whereas kittiwake breeding success (CV = 63%, r² = 0.89) reflected prey variability, murre breeding success did not (CV = 29%, r² < 0.00). It appears that murres were able to buffer breeding success by reallocating discretionary ‘loafing’ time to foraging effort in response (r² = 0.64) to declining prey density. Kittiwakes had little or no discretionary time, so fledging success was a more direct function of local prey density. Implications of these results for using ‘seabirds as indicators’ are discussed.
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic), the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and due to PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum-likelihood optimization that does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize the weights of both distributions; however, it may suffer from poor optimization convergence. For some patients, MLE-based HF values differed significantly from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
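As a simplified illustration of the surrogate-free approach, the sketch below fits a two-component Gaussian mixture to synthetic voxel uptake values and reads the hypoxic fraction off the weight of the high-uptake component; the study's actual probability distributions, imaging model, and optimization details may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical voxel uptake values: a normoxic background plus a hypoxic tail
normoxic = rng.normal(1.0, 0.15, 4000)
hypoxic = rng.normal(1.6, 0.25, 1000)
uptake = np.concatenate([normoxic, hypoxic]).reshape(-1, 1)

# Maximum-likelihood fit of a two-component mixture to the voxel values
gmm = GaussianMixture(n_components=2, random_state=0).fit(uptake)
hypoxic_component = int(np.argmax(gmm.means_.ravel()))

# Hypoxic fraction = mixture weight of the high-uptake component
print(gmm.weights_[hypoxic_component])   # expect roughly 0.2 for this synthetic example
```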
The interplay between cooperativity and diversity in model threshold ensembles.
Cervera, Javier; Manzanares, José A; Mafe, Salvador
2014-10-06
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
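A minimal illustration of the diversity side of this picture (without the cooperative coupling studied in the paper) is sketched below: units with normally distributed threshold potentials produce a smooth ensemble-averaged response whose steepness is set by the width of the threshold distribution. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n_units = 1000
spread = 10.0                                          # width of the threshold distribution (mV)
thresholds = rng.normal(0.0, spread, n_units)          # individually different threshold potentials

potentials = np.linspace(-40, 40, 201)
# Each (non-cooperative) unit switches on when the potential exceeds its threshold;
# the ensemble-averaged response is the fraction of units that are on.
ensemble_response = (potentials[:, None] > thresholds[None, :]).mean(axis=1)
print(ensemble_response[::50])
```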
Critical dynamics on a large human Open Connectome network
NASA Astrophysics Data System (ADS)
Ódor, Géza
2016-12-01
Extended numerical simulations of threshold models have been performed on a human brain network with N = 836 733 connected nodes available from the Open Connectome Project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological or interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directedness, as well as the consequence of inhibitory connections. Nonuniversal power-law avalanche size and time distributions have been found with exponents agreeing with the values obtained in electrode experiments of the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.
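A stripped-down variable-threshold model on a random directed graph is sketched below for orientation; it uses a deterministic parallel update and a synthetic network rather than the Open Connectome graph, and it omits the inhibitory links and avalanche statistics analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
adj = (rng.random((n, n)) < 0.01).astype(float)        # synthetic random directed graph
adj[np.diag_indices(n)] = 0.0                          # no self-loops

k_in = np.maximum(adj.sum(axis=1), 1.0)                # in-degree (floored at 1 to avoid /0)
thresholds = rng.uniform(0.1, 0.4, n)                  # individually different ("variable") thresholds

active = (rng.random(n) < 0.05).astype(float)          # small random seed of activity
for _ in range(50):
    drive = adj @ active / k_in                        # fraction of active in-neighbours
    active = (drive >= thresholds).astype(float)       # a node fires if its drive exceeds its threshold
print(active.mean())                                   # final fraction of active nodes
```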
On-chip optical phase locking of single growth monolithically integrated Slotted Fabry Perot lasers.
Morrissey, P E; Cotter, W; Goulding, D; Kelleher, B; Osborne, S; Yang, H; O'Callaghan, J; Roycroft, B; Corbett, B; Peters, F H
2013-07-15
This work investigates the optical phase locking performance of Slotted Fabry Perot (SFP) lasers and develops an integrated variable phase locked system on chip for the first time to our knowledge using these lasers. Stable phase locking is demonstrated between two SFP lasers coupled on chip via a variable gain waveguide section. The two lasers are biased differently, one just above the threshold current of the device with the other at three times this value. The coupling between the lasers can be controlled using the variable gain section which can act as a variable optical attenuator or amplifier depending on bias. Using this, the width of the stable phase locking region on chip is shown to be variable.
Mirlohi, Susan; Dietrich, Andrea M; Duncan, Susan E
2011-08-01
Humans interact with their environment through the five senses, but little is known about population variability in the ability to assess contaminants. Sensory thresholds and biochemical indicators of metallic flavor perception in humans were evaluated for ferrous (Fe(2+)) iron in drinking water; subjects aged 19-84 years participated. Metallic flavor thresholds for individuals and subpopulations based on age were determined. Oral lipid oxidation and oral pH were measured in saliva as potential biochemical indicators. Individual thresholds were 0.007-14.14 mg/L Fe(2+) and the overall population threshold was 0.17 mg/L Fe(2+) in reagent water. Average thresholds for individuals younger and older than 50 years of age (grouped by the daily recommended nutritional guidelines for iron intake) were significantly different (p = 0.013); the population thresholds for each group were 0.045 mg/L Fe(2+) and 0.498 mg/L Fe(2+), respectively. Many subjects >50 and a few subjects <50 years were insensitive to metallic flavor. There was no correlation between age, oral lipid oxidation, and oral pH. Standardized olfactory assessment found that poor sensitivity to Fe(2+) corresponded with conditions of mild, moderate, and total anosmia. The findings demonstrate an age-dependent sensitivity to iron, indicating that as people age they become less sensitive to metallic flavor.
Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry
Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.
2015-01-01
Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed, suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and showed evidence of biasing effects from assumptions about expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the February 12, 2013 announced nuclear test.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability used to separate potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that address it do so only when discussing the predictive performance of the models and rely on subjective methods to choose the threshold. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other criteria were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. The method automatically identifies thresholds for crash prediction by minimizing the cross-entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after applying the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
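One plausible reading of the minimum cross-entropy criterion described above is sketched in Python below: for each candidate cut-off, the continuous crash probabilities are binarized and the cross-entropy between the continuous and binarized datasets is computed, and the cut-off with the smallest value is returned. The probability data, the candidate grid and the exact form of the criterion are assumptions for illustration, not the study's implementation.

import numpy as np

def min_cross_entropy_threshold(probs, candidates=None, eps=1e-9):
    """Return the cut-off minimising the cross-entropy between the continuous
    crash probabilities and their binarized (warning / no-warning) version.
    A generic reading of the criterion, not the paper's exact formulation."""
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
    if candidates is None:
        candidates = np.linspace(0.01, 0.99, 99)
    best_t, best_h = None, np.inf
    for t in candidates:
        b = (p >= t).astype(float)                       # binarized dataset for this cut-off
        h = -np.mean(b * np.log(p) + (1.0 - b) * np.log(1.0 - p))
        if h < best_h:
            best_t, best_h = t, h
    return best_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    risk = rng.beta(2.0, 8.0, size=5000)                 # synthetic, skewed crash-risk scores
    print("optimal threshold:", round(min_cross_entropy_threshold(risk), 3))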
X-ray tomography using the full complex index of refraction.
Nielsen, M S; Lauridsen, T; Thomsen, M; Jensen, T H; Bech, M; Christensen, L B; Olsen, E V; Hviid, M; Feidenhans'l, R; Pfeiffer, F
2012-10-07
We report on x-ray tomography using the full complex index of refraction recorded with a grating-based x-ray phase-contrast setup. Combining simultaneous absorption and phase-contrast information, the distribution of the full complex index of refraction is determined and depicted in a bivariate graph. A simple multivariable threshold segmentation can then be applied, offering higher accuracy than single-variable threshold segmentation as well as new possibilities for partial volume analysis and edge detection; it is particularly beneficial for low-contrast systems. In this paper, the concept is demonstrated with experimental results.
Schmuziger, Nicolas; Probst, Rudolf; Smurzynski, Jacek
2004-04-01
The purposes of the study were: (1) To evaluate the intrasession test-retest reliability of pure-tone thresholds measured in the 0.5-16 kHz frequency range for a group of otologically healthy subjects using Sennheiser HDA 200 circumaural and Etymotic Research ER-2 insert earphones and (2) to compare the data with existing criteria of significant threshold shifts related to ototoxicity and noise-induced hearing loss. Auditory thresholds in the frequency range from 0.5 to 6 kHz and in the extended high-frequency range from 8 to 16 kHz were measured in one ear of 138 otologically healthy subjects (77 women, 61 men; mean age, 24.4 yr; range, 12-51 yr) using HDA 200 and ER-2 earphones. For each subject, measurements of thresholds were obtained twice for both transducers during the same test session. For analysis, the extended high-frequency range from 8 to 16 kHz was subdivided into 8 to 12.5 and 14 to 16 kHz ranges. Data for each frequency and frequency range were analyzed separately. There were no significant differences in repeatability for the two transducer types for all frequency ranges. The intrasession variability increased slightly, but significantly, as frequency increased with the greatest amount of variability in the 14 to 16 kHz range. Analyzing each individual frequency, variability was increased particularly at 16 kHz. At each individual frequency and for both transducer types, intrasession test-retest repeatability from 0.5 to 6 kHz and 8 to 16 kHz was within 10 dB for >99% and >94% of measurements, respectively. The results indicated a false-positive rate of <3% in reference to the criteria for cochleotoxicity for both transducer types. In reference to the Occupational Safety and Health Administration Standard Threshold Shift criteria for noise-induced hazards, the results showed a minor false-positive rate of <1% for the HDA 200. Repeatability was similar for both transducer types. Intrasession test-retest repeatability from 0.5 to 12.5 kHz at each individual frequency including the frequency range susceptible to noise-induced hearing loss was excellent for both transducers. Repeatability was slightly, but significantly poorer in the frequency range from 14 to 16 kHz compared with the frequency ranges from 0.5 to 6 or 8 to 12.5 kHz. Measurements in the extended high-frequency range from 8 to 14 kHz, but not up to 16 kHz, may be recommended for monitoring purposes.
A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.
Mihalaş, Stefan; Niebur, Ernst
2009-03-01
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
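A minimal Euler-integration sketch of such a model is given below in Python: a leaky integrate-and-fire neuron with an adapting (variable) threshold and a single spike-induced current, where crossing the threshold triggers simple update rules on all three state variables. The specific equations, update rules and parameter values are illustrative and simplified, not the full published model.

import numpy as np

def glif_spikes(i_ext, dt=1e-3, C=1.0, G=0.05, E_L=-70.0,
                a=0.005, b=0.01, theta_inf=-50.0,
                k1=0.2, R1=0.0, A1=5.0,
                V_reset=-70.0, theta_reset=-50.0):
    """Euler integration of a leaky integrate-and-fire neuron with an adapting
    (variable) threshold and one spike-induced current.  On a threshold
    crossing, simple update rules reset the potential, refresh the induced
    current and keep the threshold from falling below its reset value.
    Parameter values and units are illustrative only."""
    V, theta, I1 = E_L, theta_inf, 0.0
    spike_times = []
    for step, I_e in enumerate(i_ext):
        I1 += dt * (-k1 * I1)                                    # induced current decays
        V += dt * (I_e + I1 - G * (V - E_L)) / C                 # membrane potential
        theta += dt * (a * (V - E_L) - b * (theta - theta_inf))  # threshold adapts to V
        if V >= theta:                                           # spike: apply update rules
            spike_times.append(step * dt)
            I1 = R1 * I1 + A1
            V = V_reset
            theta = max(theta_reset, theta)
    return spike_times

if __name__ == "__main__":
    drive = np.full(100000, 2.0)      # constant external drive (arbitrary units)
    spikes = glif_spikes(drive)
    print("spike count:", len(spikes),
          " first spike at t =", round(spikes[0], 2) if spikes else None)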
Probing the Electrode–Neuron Interface With Focused Cochlear Implant Stimulation
Bierer, Julie Arenberg
2010-01-01
Cochlear implants are highly successful neural prostheses for persons with severe or profound hearing loss who gain little benefit from hearing aid amplification. Although implants are capable of providing important spectral and temporal cues for speech perception, performance on speech tests is variable across listeners. Psychophysical measures obtained from individual implant subjects can also be highly variable across implant channels. This review discusses evidence that such variability reflects deviations in the electrode–neuron interface, which refers to an implant channel's ability to effectively stimulate the auditory nerve. It is proposed that focused electrical stimulation is ideally suited to assess channel-to-channel irregularities in the electrode–neuron interface. In implant listeners, it is demonstrated that channels with relatively high thresholds, as measured with the tripolar configuration, exhibit broader psychophysical tuning curves and smaller dynamic ranges than channels with relatively low thresholds. Broader tuning implies that frequency-specific information intended for one population of neurons in the cochlea may activate more distant neurons, and a compressed dynamic range could make it more difficult to resolve intensity-based information, particularly in the presence of competing noise. Degradation of both types of cues would negatively affect speech perception. PMID:20724356
Araújo Oliveira Ferreira, Dyna Mara; Costa, Yuri Martins; de Quevedo, Henrique Müller; Bonjardim, Leonardo Rigoldi; Rodrigues Conti, Paulo César
2018-05-15
To assess the modulatory effects of experimental psychological stress on the somatosensory evaluation of myofascial temporomandibular disorder (TMD) patients. A total of 20 women with myofascial TMD and 20 age-matched healthy women were assessed by means of a standardized battery of quantitative sensory testing. Cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical pain threshold (MPT), wind-up ratio (WUR), and pressure pain threshold (PPT) were performed on the facial skin overlying the masseter muscle. The variables were measured in three sessions: before (baseline) and immediately after the Paced Auditory Serial Addition Task (PASAT) (stress) and then after a washout period of 20 to 30 minutes (poststress). Mixed analysis of variance (ANOVA) was applied to the data, and the significance level was set at P = .050. A significant main effect of the experimental session on all thermal tests was found (ANOVA: F > 4.10, P < .017), where detection tests presented an increase in thresholds in the poststress session compared to baseline (CDT, P = .012; WDT, P = .040) and pain thresholds were reduced in the stress (CPT, P < .001; HPT, P = .001) and poststress sessions (CPT, P = .005; HPT, P = .006) compared to baseline. In addition, a significant main effect of the study group on all mechanical tests (MPT, WUR, and PPT) was found (ANOVA: F > 4.65, P < .037), where TMD patients were more sensitive than healthy volunteers. Acute mental stress conditioning can modulate thermal sensitivity of the skin overlying the masseter in myofascial TMD patients and healthy volunteers. Therefore, psychological stress should be considered in order to perform an unbiased somatosensory assessment of TMD patients.
Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin
2013-01-01
Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
A novel approach to estimation of the time to biomarker threshold: applications to HIV.
Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc
2016-11-01
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
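A simple way to see what such an estimate involves is the Monte Carlo sketch below (Python): given a subject-specific linear trajectory (fixed plus random effects) and a residual standard deviation, it simulates noisy measurements at scheduled visits and averages the time at which two consecutive values first fall below the threshold. The trajectory form, visit schedule and square-root-scale values are illustrative assumptions; the paper itself derives a closed-form expression rather than simulating.

import numpy as np

def expected_time_to_threshold(beta0, beta1, b0, b1, sigma, threshold,
                               visit_times, n_sim=20000, seed=0):
    """Monte Carlo estimate of the expected time at which two consecutive
    noisy biomarker measurements fall below `threshold` for one subject whose
    true trajectory is (beta0 + b0) + (beta1 + b1) * t with residual SD sigma.
    Illustrative alternative to the paper's closed-form expression."""
    rng = np.random.default_rng(seed)
    t = np.asarray(visit_times, dtype=float)
    mean = (beta0 + b0) + (beta1 + b1) * t
    times = []
    for _ in range(n_sim):
        y = mean + rng.normal(0.0, sigma, size=t.size)   # measurement error added
        below = y < threshold
        hits = np.where(below[1:] & below[:-1])[0]       # pairs of consecutive low values
        if hits.size:
            times.append(t[hits[0] + 1])                 # time of the second low measurement
    return float(np.mean(times)) if times else float("nan")

if __name__ == "__main__":
    visits = np.arange(0.0, 10.5, 0.5)                   # twice-yearly visits over 10 years
    # square-root-scale CD4 example: intercept 25, slope -1.5/yr, threshold sqrt(250) ~ 15.8
    est = expected_time_to_threshold(25.0, -1.5, 0.0, 0.0, 2.0, 15.8, visits)
    print("expected time to threshold (years):", round(est, 2))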
Keefer, Patricia; Kidwell, Kelley; Lengyel, Candice; Warrier, Kavita; Wagner, Deborah
2017-01-01
Voluntary medication error reporting is an imperfect resource used to improve the quality of medication administration. It requires judgment by front-line staff to determine how to report enough to identify opportunities to improve patient safety without jeopardizing that safety by creating a culture of "report fatigue." This study aims to provide information on how medication errors are interpreted and on the variability between subgroups of caregivers in the hospital setting. Survey participants included nurses, physicians (trainees and graduates), patients/families, and pharmacists across a large academic health system, including an attached free-standing pediatric hospital. Demographics and survey responses were collected and analyzed using Fisher's exact test with SAS v9.3. Statistically significant variability existed between the four groups for a majority of the questions. This included all cases designated as administration errors and many, but not all, cases of prescribing events. Commentary provided in the free-text portion of the survey was sub-analyzed and found to be associated with medication allergy reporting and a lack of education surrounding report characteristics. There is significant variability in the threshold to report specific medication errors in the hospital setting. More work needs to be done to further improve education surrounding error reporting in hospitals for all noted subgroups. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Lewis-Evans, Ben; De Waard, Dick; Brookhuis, Karel A
2010-11-01
Subjective impressions of task difficulty, risk, effort, and comfort are key variables of several theories of driver behaviour. A point of difference between many of these theories is not only the importance of these variables, but also whether they are continuously present and monitored or only experienced by individuals at certain critical points in the driving task. Both a threshold relationship and evidence of constant monitoring of risk and task difficulty have been found for speed choice. In light of these conflicting findings, this study seeks to examine a different part of the driving task, the choice of time headway. Participants (N=40, aged 19 to 30) drove in a simulator behind a vehicle travelling at 50 km/h at set time headways ranging from 0.5 seconds to 4.0 seconds. After each drive, ratings of task difficulty, risk, comfort, and effort were collected. In addition, participants were asked to drive at the time headway they preferred. In order to assess familiarity, participants also drove on both the left and right hand sides of the road, and the role of driving experience was also examined. The results show support for a threshold awareness of task difficulty, risk, effort, and comfort in relation to time headway. Participants' ratings of these variables tended to be low or nil at large time headways but began to increase noticeably around the 2.0 second mark. Feelings of task difficulty, risk, and effort were also found to be highly correlated with each other. No effect of driving experience or side of the road was found. 2010 Elsevier Ltd. All rights reserved.
Webb, Nicholas P.; Herrick, Jeffrey E.; Duniway, Michael C.
2014-01-01
Accelerated soil erosion occurs when anthropogenic processes modify soil, vegetation or climatic conditions causing erosion rates at a location to exceed their natural variability. Identifying where and when accelerated erosion occurs is a critical first step toward its effective management. Here we explore how erosion assessments structured in the context of ecological sites (a land classification based on soils, landscape setting and ecological potential) and their vegetation states (plant assemblages that may change due to management) can inform systems for reducing accelerated soil erosion in rangelands. We evaluated aeolian horizontal sediment flux and fluvial sediment erosion rates for five ecological sites in southern New Mexico, USA, using monitoring data and rangeland-specific wind and water erosion models. Across the ecological sites, plots in shrub-encroached and shrub-dominated vegetation states were consistently susceptible to aeolian sediment flux and fluvial sediment erosion. Both processes were found to be highly variable for grassland and grass-succulent states across the ecological sites at the plot scale (0.25 ha). We identify vegetation thresholds that define cover levels below which rapid (exponential) increases in aeolian sediment flux and fluvial sediment erosion occur across the ecological sites and vegetation states. Aeolian sediment flux and fluvial erosion in the study area can be effectively controlled when bare ground cover is kept low and the proportion of canopy gaps greater than 100 cm in length is less than ~35%. Land use and management activities that alter cover levels such that they cross thresholds, and/or drive vegetation state changes, may increase the susceptibility of areas to erosion. Land use impacts that are constrained within the range of natural variability should not result in accelerated soil erosion. Evaluating land condition against the erosion thresholds identified here will enable identification of areas susceptible to accelerated soil erosion and the development of practical management solutions.
Cortical surface-based threshold-free cluster enhancement and cortexwise mediation.
Lett, Tristram A; Waller, Lea; Tost, Heike; Veer, Ilya M; Nazeri, Arash; Erk, Susanne; Brandl, Eva J; Charlet, Katrin; Beck, Anne; Vollstädt-Klein, Sabine; Jorde, Anne; Kiefer, Falk; Heinz, Andreas; Meyer-Lindenberg, Andreas; Chakravarty, M Mallar; Walter, Henrik
2017-06-01
Threshold-free cluster enhancement (TFCE) is a sensitive means to incorporate spatial neighborhood information in neuroimaging studies without using arbitrary thresholds. The majority of methods have applied TFCE to voxelwise data. The need to understand the relationship among multiple variables and imaging modalities has become critical. We propose a new method of applying TFCE to vertexwise statistical images as well as cortexwise (either voxel- or vertexwise) mediation analysis. Here we present TFCE_mediation, a toolbox that can be used for cortexwise multiple regression analysis with TFCE, and additionally cortexwise mediation using TFCE. The toolbox is open source and publicly available (https://github.com/trislett/TFCE_mediation). We validated TFCE_mediation in healthy controls from two independent multimodal neuroimaging samples (N = 199 and N = 183). We found a consistent structure-function relationship between surface area and the first independent component (IC1) of the N-back task, that white matter fractional anisotropy is strongly associated with IC1 N-back, and that our voxel-based results are essentially identical to FSL randomise using TFCE (all P(FWE) < 0.05). Using cortexwise mediation, we showed that the relationship between white matter FA and IC1 N-back is mediated by surface area in the right superior frontal cortex (P(FWE) < 0.05). We also demonstrated that the same mediation model is present using vertexwise mediation (P(FWE) < 0.05). In conclusion, cortexwise analysis with TFCE provides an effective analysis of multimodal neuroimaging data. Furthermore, cortexwise mediation analysis may identify or explain a mechanism that underlies an observed relationship among a predictor, intermediary, and dependent variables in which one of these variables is assessed at a whole-brain scale. Hum Brain Mapp 38:2795-2807, 2017. © 2017 Wiley Periodicals, Inc.
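For readers unfamiliar with the enhancement step itself, the Python sketch below computes the TFCE score for a one-dimensional statistic map: each suprathreshold point accumulates (cluster extent)^E * height^H * dh over all heights. The 1-D setting, the default exponents E = 0.5 and H = 2, and the step size dh are illustrative; this is not the TFCE_mediation implementation, which operates on cortical surfaces and volumes.

import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-free cluster enhancement of a 1-D statistic map: for every
    height h, each suprathreshold point accumulates (cluster extent)**E *
    h**H * dh.  E, H and dh follow common voxelwise defaults; a 1-D
    illustration, not the TFCE_mediation surface implementation."""
    stat = np.asarray(stat, dtype=float)
    out = np.zeros_like(stat)
    top = stat.max()
    if top <= 0:
        return out
    for h in np.arange(dh, top + dh, dh):
        above = stat >= h
        idx = 0
        while idx < above.size:                    # walk over contiguous suprathreshold runs
            if above[idx]:
                start = idx
                while idx < above.size and above[idx]:
                    idx += 1
                extent = idx - start
                out[start:idx] += (extent ** E) * (h ** H) * dh
            else:
                idx += 1
    return out

if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 200)
    rng = np.random.default_rng(0)
    stat = 3.0 * np.exp(-(x - 5.0) ** 2) + 0.3 * rng.standard_normal(200)
    enhanced = tfce_1d(stat)
    print("peak raw stat:", round(float(stat.max()), 2),
          " peak TFCE score:", round(float(enhanced.max()), 2))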
Physical function interfering with pain and symptoms in fibromyalgia patients.
Assumpção, A; Sauer, J F; Mango, P C; Pascual Marques, A
2010-01-01
The aim of this study was to assess the relationship between variables of physical assessment - muscular strength, flexibility and dynamic balance - and pain, pain threshold, and fibromyalgia (FM) symptoms. Our sample consisted of 55 women aged 30 to 55 years (mean 46.5, standard deviation [SD] 6.6), with a mean body mass index (BMI) of 28.7 (3.8), diagnosed with FM according to the American College of Rheumatology criteria. Pain intensity was measured using a visual analogue scale (VAS) and pain threshold (PT) using Fisher's dolorimeter. FM symptoms were assessed by the Fibromyalgia Impact Questionnaire (FIQ); flexibility by the third finger to floor test (3FF); the muscular strength index (MSI) by the maximum voluntary isometric contraction at flexion and extension of the right knee and elbow using a force transducer; and dynamic balance by the timed up and go (TUG) test and the functional reach test (FRT). Data were analysed using Pearson's correlation as well as simple and multivariate regression tests, with a significance level of 5%. PT and FIQ were weakly but significantly correlated with the TUG, MSI and 3FF, as was VAS with the TUG and MSI (p<0.05). VAS, PT and FIQ were not correlated with the FRT. Simple regression suggests that, alone, TUG, FRT, MSI and 3FF are weak predictors of VAS, PT and FIQ. For the VAS, the best predictive model includes TUG and MSI, explaining 12.6% of pain variability. For PT and total symptoms, as obtained by the FIQ, the most predictive model includes 3FF and MSI, which account for 30% and 21% of the variability, respectively. Muscular strength, flexibility and balance are associated with pain, pain threshold, and symptoms in FM patients.
Dideriksen, Jakob L.; Negro, Francesco; Enoka, Roger M.
2012-01-01
Motoneurons receive synaptic inputs from tens of thousands of connections that cause membrane potential to fluctuate continuously (synaptic noise), which introduces variability in discharge times of action potentials. We hypothesized that the influence of synaptic noise on force steadiness during voluntary contractions is limited to low muscle forces. The hypothesis was examined with an analytical description of transduction of motor unit spike trains into muscle force, a computational model of motor unit recruitment and rate coding, and experimental analysis of interspike interval variability during steady contractions with the abductor digiti minimi muscle. Simulations varied contraction force, level of synaptic noise, size of motor unit population, recruitment range, twitch contraction times, and level of motor unit short-term synchronization. Consistent with the analytical derivations, simulations and experimental data showed that force variability at target forces above a threshold was primarily due to low-frequency oscillations in neural drive, whereas the influence of synaptic noise was almost completely attenuated by two low-pass filters, one related to convolution of motoneuron spike trains with motor unit twitches (temporal summation) and the other attributable to summation of single motor unit forces (spatial summation). The threshold force above which synaptic noise ceased to influence force steadiness depended on recruitment range, size of motor unit population, and muscle contractile properties. This threshold was low (<10% of maximal force) for typical values of these parameters. Results indicate that motor unit recruitment and muscle properties of a typical muscle are tuned to limit the influence of synaptic noise on force steadiness to low forces and that the inability to produce a constant force during stronger contractions is mainly attributable to the common low-frequency oscillations in motoneuron discharge rates. PMID:22423000
A robotic test of proprioception within the hemiparetic arm post-stroke.
Simo, Lucia; Botzer, Lior; Ghez, Claude; Scheidt, Robert A
2014-04-30
Proprioception plays important roles in planning and control of limb posture and movement. The impact of proprioceptive deficits on motor function post-stroke has been difficult to elucidate due to limitations in current tests of arm proprioception. Common clinical tests only provide ordinal assessment of proprioceptive integrity (eg. intact, impaired or absent). We introduce a standardized, quantitative method for evaluating proprioception within the arm on a continuous, ratio scale. We demonstrate the approach, which is based on signal detection theory of sensory psychophysics, in two tasks used to characterize motor function after stroke. Hemiparetic stroke survivors and neurologically intact participants attempted to detect displacement- or force-perturbations robotically applied to their arm in a two-interval, two-alternative forced-choice test. A logistic psychometric function parameterized detection of limb perturbations. The shape of this function is determined by two parameters: one corresponds to a signal detection threshold and the other to variability of responses about that threshold. These two parameters define a space in which proprioceptive sensation post-stroke can be compared to that of neurologically-intact people. We used an auditory tone discrimination task to control for potential comprehension, attention and memory deficits. All but one stroke survivor demonstrated competence in performing two-alternative discrimination in the auditory training test. For the remaining stroke survivors, those with clinically identified proprioceptive deficits in the hemiparetic arm or hand had higher detection thresholds and exhibited greater response variability than individuals without proprioceptive deficits. We then identified a normative parameter space determined by the threshold and response variability data collected from neurologically intact participants. By plotting displacement detection performance within this normative space, stroke survivors with and without intact proprioception could be discriminated on a continuous scale that was sensitive to small performance variations, e.g. practice effects across days. The proposed method uses robotic perturbations similar to those used in ongoing studies of motor function post-stroke. The approach is sensitive to small changes in the proprioceptive detection of hand motions. We expect this new robotic assessment will empower future studies to characterize how proprioceptive deficits compromise limb posture and movement control in stroke survivors.
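The two-parameter fit described above can be sketched as follows in Python: a maximum-likelihood fit of the logistic psychometric function P(correct | x) = 0.5 + 0.5 / (1 + exp(-(x - alpha)/beta)) to two-alternative forced-choice responses, where alpha plays the role of the detection threshold and beta the response variability. The exact parameterization, optimizer and simulated data are assumptions for illustration, not the authors' analysis code.

import numpy as np
from scipy.optimize import minimize

def fit_psychometric(perturbation, correct):
    """Maximum-likelihood fit of a two-parameter logistic psychometric function
    for a two-interval, two-alternative forced-choice task:
        P(correct | x) = 0.5 + 0.5 / (1 + exp(-(x - alpha) / beta)),
    where alpha acts as the detection threshold and beta as the response
    variability.  Illustrative, not the authors' exact parameterisation."""
    x = np.asarray(perturbation, dtype=float)
    y = np.asarray(correct, dtype=float)

    def neg_log_lik(params):
        alpha, log_beta = params
        p = 0.5 + 0.5 / (1.0 + np.exp(-(x - alpha) / np.exp(log_beta)))
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    res = minimize(neg_log_lik, x0=np.array([np.median(x), 0.0]), method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))             # (threshold, variability)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sizes = rng.uniform(0.0, 20.0, 300)                   # perturbation magnitudes (arbitrary units)
    p_true = 0.5 + 0.5 / (1.0 + np.exp(-(sizes - 8.0) / 2.0))
    responses = rng.random(300) < p_true                  # simulated correct/incorrect choices
    alpha_hat, beta_hat = fit_psychometric(sizes, responses)
    print(f"estimated threshold = {alpha_hat:.2f}, response variability = {beta_hat:.2f}")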
Threshold thickness for applying diffusion equation in thin tissue optical imaging
NASA Astrophysics Data System (ADS)
Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong
2014-08-01
We investigated the suitability of the semi-infinite model of the diffusion equation when using diffuse optical imaging (DOI) to image thin tissues with double boundaries. Both diffusion approximation and Monte Carlo methods were applied to simulate light propagation in the thin tissue model with variable optical parameters and tissue thicknesses. A threshold value of the tissue thickness was defined as the minimum thickness at which the semi-infinite model exhibits the same reflected intensity as the double-boundary model, and this threshold was generated as the final result. In contrast to our initial hypothesis that all optical properties would affect the threshold thickness, our results show that the absorption coefficient is the dominant parameter and the others are negligible. The threshold thickness decreases from 1 cm to 4 mm as the absorption coefficient grows from 0.01 mm-1 to 0.2 mm-1. A look-up curve was derived to guide the selection of the appropriate model during the optical diagnosis of thin tissue cancers. These results are useful in guiding the development of endoscopic DOI for esophageal, cervical and colorectal cancers, among others.
Breast imaging with ultrasound tomography: update on a comparative study with MR
NASA Astrophysics Data System (ADS)
Ranger, Bryan; Littrup, Peter; Duric, Neb; Li, Cuiping; Schmidt, Steven; Rama, Olsi; Bey-Knight, Lisa
2011-03-01
The objective of this study is to present imaging parameters and display thresholds of an ultrasound tomography (UST) prototype in order to demonstrate visualization of overall breast anatomy and lesions analogous to magnetic resonance (MR). Thirty-six women were imaged with MR and our UST prototype. The UST scan generated sound speed, attenuation, and reflection images, which were subjected to variable thresholds and then fused into a single UST image. Qualitative and quantitative comparisons of MR and UST images were utilized to identify anatomical similarities and mass characteristics. Overall, UST demonstrated the ability to visualize and characterize breast tissues in a manner comparable to MR without the use of IV contrast. For optimal visualization, fused images utilized thresholds of 1.46+/-0.1 km/s for sound speed to represent architectural features of the breast including parenchyma. An arithmetic combination of images using the logical .AND. and .OR. operators, along with thresholds of 1.52+/-0.03 km/s for sound speed and 0.16+/-0.04 dB/cm for attenuation, allowed for mass detection and characterization similar to MR.
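A toy version of the threshold-and-fuse step is sketched in Python below: pixels whose sound speed falls inside the parenchymal window are labelled as breast architecture, while pixels exceeding the sound-speed AND attenuation cut-offs (OR a markedly elevated sound speed alone) are labelled as mass-like. The threshold values follow the abstract, but the fusion rule, the synthetic data and the omission of the reflection channel are simplifications, not the prototype's actual rendering pipeline.

import numpy as np

def fuse_ust(sound_speed, attenuation,
             ss_parenchyma=(1.36, 1.56), ss_mass=1.52, att_mass=0.16):
    """Label a breast ultrasound-tomography slice from sound speed (km/s) and
    attenuation (dB/cm): label 1 = parenchyma-like tissue (sound speed inside
    a window), label 2 = mass-like tissue (sound speed AND attenuation above
    their cut-offs, OR a markedly elevated sound speed alone).  Threshold
    values follow the abstract; the fusion rule is a simplification."""
    ss = np.asarray(sound_speed, dtype=float)
    att = np.asarray(attenuation, dtype=float)
    fused = np.zeros(ss.shape, dtype=np.uint8)
    parenchyma = (ss >= ss_parenchyma[0]) & (ss <= ss_parenchyma[1])
    mass = ((ss >= ss_mass) & (att >= att_mass)) | (ss >= ss_mass + 0.03)
    fused[parenchyma] = 1
    fused[mass] = 2                                       # mass label overrides parenchyma
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    ss = rng.normal(1.46, 0.05, (64, 64))
    att = rng.normal(0.08, 0.03, (64, 64))
    ss[20:30, 20:30], att[20:30, 20:30] = 1.55, 0.20      # synthetic mass-like region
    labels = fuse_ust(ss, att)
    print("mass-like pixels detected:", int((labels == 2).sum()))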
Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.
2014-01-01
The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490
Validity of the Talk Test for exercise prescription after myocardial revascularization.
Zanettini, Renzo; Centeleghe, Paola; Franzelli, Cristina; Mori, Ileana; Benna, Stefania; Penati, Chiara; Sorlini, Nadia
2013-04-01
For exercise prescription, rating of perceived exertion is the subjective tool most frequently used in addition to methods based on percentage of peak exercise variables. The aim of this study was the validation of a subjective method widely called the Talk Test (TT) for optimization of training intensity in patients with recent myocardial revascularization. Fifty patients with recent myocardial revascularization (17 by coronary artery bypass grafting and 33 by percutaneous coronary intervention) were enrolled in a cardiac rehabilitation programme. Each patient underwent three repetitions of the TT during three different exercise sessions to evaluate the within-patient and between-operators reliability in assessing the workload (WL) at TT thresholds. These parameters were then compared with the data of a final cardiopulmonary exercise testing, and the WL range between the individual aerobic threshold (AeT) and anaerobic threshold (AnT) was considered as the optimal training zone. The within-patient and between-operators reliability in assessing TT thresholds were satisfactory. No significant differences were found between patients' and physiotherapists' evaluations of WL at different TT thresholds. WL at Last TT+ was between AeT and AnT in 88% of patients and slightly
Sesay, Musa; Robin, Georges; Tauzin-Fin, Patrick; Sacko, Oumar; Gimbert, Edouard; Vignes, Jean-Rodolphe; Liguoro, Dominique; Nouette-Gaulain, Karine
2015-04-01
The autonomic nervous system is influenced by many stimuli including pain. Heart rate variability (HRV) is an indirect marker of the autonomic nervous system. Because of a paucity of data, this study sought to determine the optimal thresholds of HRV above which patients are in pain after minor spinal surgery (MSS). Secondly, we evaluated the correlation between HRV and the numeric rating scale (NRS). Following institutional review board approval, patients who underwent MSS were assessed in the postanesthesia care unit after extubation. A laptop containing the HRV software was connected to the ECG monitor. The low-frequency band (LF: 0.04 to 0.15 Hz) denoted both sympathetic and parasympathetic activities, whereas the high-frequency band (HF: 0.15 to 0.4 Hz) represented parasympathetic activity. LF/HF was the sympathovagal balance. Pain was quantified by the NRS ranging from 0 (no pain) to 10 (worst imaginable pain). Simultaneously, HRV parameters were noted. Optimal thresholds were calculated using receiver operating characteristic curves with NRS>3 as cutoff. The correlation between HRV and NRS was assessed using the Spearman rank test. We included 120 patients (64 men and 56 women), mean age 51±14 years. The optimal pain threshold values were 298 ms for LF and 3.12 for LF/HF, with no significant change in HF. NRS was correlated with LF (r=0.29, P<0.005) and LF/HF (r=0.31, P<0.001) but not with HF (r=0.09, NS). This study suggests that, after MSS, values of LF > 298 ms and LF/HF > 3.1 denote acute pain (NRS>3). These HRV parameters are significantly correlated with NRS.
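The receiver-operating-characteristic step can be illustrated with the short Python sketch below, which scans candidate cut-offs on an HRV index and keeps the one maximizing Youden's J (sensitivity + specificity - 1) with NRS > 3 as the reference state. The synthetic LF/HF data and the choice of Youden's index as the optimality criterion are assumptions for illustration; the study does not state which ROC operating-point rule was used.

import numpy as np

def optimal_threshold_youden(values, in_pain):
    """Cut-off on an HRV index (e.g. LF power or the LF/HF ratio) maximising
    Youden's J = sensitivity + specificity - 1, with NRS > 3 as the reference
    state.  A generic ROC sketch; the operating-point rule is an assumption."""
    v = np.asarray(values, dtype=float)
    y = np.asarray(in_pain, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(v):
        pred = v >= t                                     # classified as "in pain"
        sens = pred[y].mean() if y.any() else 0.0
        spec = (~pred[~y]).mean() if (~y).any() else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    nrs = rng.integers(0, 9, 120)                         # simulated pain scores
    lf_hf = 2.0 + 0.4 * nrs + rng.normal(0.0, 0.8, 120)   # synthetic LF/HF rising with pain
    t, j = optimal_threshold_youden(lf_hf, nrs > 3)
    print(f"optimal LF/HF cut-off = {t:.2f} (Youden J = {j:.2f})")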
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity
Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.
2014-01-01
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired for older compared to younger listeners with similar hearing and/or amounts of hearing loss. PMID:25009458
Bardsley, P A; Bentley, S; Hall, H S; Singh, S J; Evans, D H; Morgan, M D
1993-01-01
BACKGROUND--Incremental threshold loading (ITL) is a test of inspiratory muscle performance which is usually performed by breathing through a weighted inspiratory plunger, the load on the inspiratory muscles being increased by externally adding weights to the intake valve. This is not a true threshold device and may be inaccurate. This method was compared with a true threshold device consisting of a solenoid valve which only opens to supply air at a predetermined negative mouth pressure. METHODS--Six naive, normal subjects (three men and three women) aged 22-24 years underwent three tests using each system. The inspiratory loads were increased every minute by equivalent amounts, -10 cm H2O with the solenoid valve and by 50 g with the weighted plunger, until the subjects could not inspire or sustain inspiration for a full minute. Six experienced subjects (four men and two women) aged 23-41 years were subsequently randomised to perform ITL with the solenoid valve, twice with the breathing pattern fixed and twice free. RESULTS--The solenoid valve generated a more accurate mouth pressure response and was less variable at higher loads than the weighted plunger. The work performed (expressed as the pressure-time product) was less with the solenoid valve but was more reproducible. ITL with the solenoid valve was not influenced by controlling the breathing pattern of the subjects. CONCLUSIONS--The solenoid valve has several features that make it superior to the weighted plunger as a device for ITL. It generates a more accurate mouth pressure response which is less variable at higher loads. Increases in load are smoother and quicker to introduce. ITL with the solenoid valve is not influenced by varying breathing patterns and does not require any external regulation. PMID:8511732
John, Dinesh; Morton, Alvin; Arguello, Diego; Lyden, Kate; Bassett, David
2018-04-15
(1) Background: This study compared manually-counted treadmill walking steps from the hip-worn DigiwalkerSW200 and OmronHJ720ITC, and hip and wrist-worn ActiGraph GT3X+ and GT9X; determined brand-specific acceleration amplitude (g) and/or frequency (Hz) step-detection thresholds; and quantified key features of the acceleration signal during walking. (2) Methods: Twenty participants (Age: 26.7 ± 4.9 years) performed treadmill walking between 0.89-to-1.79 m/s (2-4 mph) while wearing a hip-worn DigiwalkerSW200, OmronHJ720ITC, GT3X+ and GT9X, and a wrist-worn GT3X+ and GT9X. A DigiwalkerSW200 and OmronHJ720ITC underwent shaker testing to determine device-specific frequency and amplitude step-detection thresholds. Simulated signal testing was used to determine thresholds for the ActiGraph step algorithm. Steps during human testing were compared using bias and confidence intervals. (3) Results: The OmronHJ720ITC was most accurate during treadmill walking. Hip and wrist-worn ActiGraph outputs were significantly different from the criterion. The DigiwalkerSW200 records steps for movements with a total acceleration of ≥1.21 g. The OmronHJ720ITC detects a step when movement has an acceleration ≥0.10 g with a dominant frequency of ≥1 Hz. The step-threshold for the ActiLife algorithm is variable based on signal frequency. Acceleration signals at the hip and wrist have distinctive patterns during treadmill walking. (4) Conclusions: Three common research-grade physical activity monitors employ different step-detection strategies, which causes variability in step output.
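The amplitude-plus-frequency rule attributed to the OmronHJ720ITC above can be illustrated with the Python sketch below: the acceleration trace is processed in short windows, and a window contributes steps only when its oscillation amplitude is at least 0.10 g and its dominant frequency is at least 1 Hz, the step count being dominant frequency times window length. The windowing scheme, sampling rate and synthetic signal are assumptions; this is a schematic reconstruction, not any vendor's firmware.

import numpy as np

def count_steps(accel_g, fs=50.0, amp_thresh=0.10, freq_thresh=1.0, window_s=2.0):
    """Count walking steps from a total-acceleration trace (in g): a window
    contributes steps only if its oscillation amplitude is at least
    `amp_thresh` g AND its dominant frequency is at least `freq_thresh` Hz,
    the step count in a qualifying window being dominant frequency times
    window length.  A schematic reconstruction, not any vendor's firmware."""
    a = np.asarray(accel_g, dtype=float)
    a = a - a.mean()                                      # remove the gravity offset
    n_win = int(window_s * fs)
    steps = 0.0
    for start in range(0, a.size - n_win + 1, n_win):
        seg = a[start:start + n_win]
        amplitude = 0.5 * (seg.max() - seg.min())
        spectrum = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
        dom_freq = freqs[1:][np.argmax(spectrum[1:])]     # ignore the DC bin
        if amplitude >= amp_thresh and dom_freq >= freq_thresh:
            steps += dom_freq * window_s
    return int(round(steps))

if __name__ == "__main__":
    fs = 50.0
    t = np.arange(0.0, 60.0, 1.0 / fs)                    # one minute at 50 Hz
    walk = 0.25 * np.sin(2.0 * np.pi * 2.0 * t)           # 2 steps per second of walking
    print("estimated steps:", count_steps(walk, fs))       # expect about 120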
Do Atmospheric Rivers explain the extreme precipitation events over East Asia?
NASA Astrophysics Data System (ADS)
Dairaku, K.; Nayak, S.
2017-12-01
Extreme precipitation events are of serious concern due to their damaging societal impacts over the last few decades. Climate indices are therefore widely used to identify and quantify variability and changes in particular aspects of the climate system, especially extremes. In our study, we focus on a few climate indices of annual precipitation extremes for the period 1979-2013 over East Asia to provide straightforward information and interpretation of certain aspects of extreme precipitation events over the region. To do so, we first discuss different percentiles of precipitation and the maximum length of wet spell under different thresholds from a regional climate model (NRAMS) simulation at 20 km. Results indicate that the 99th percentile of precipitation events corresponds to about 80 mm/d over a few regions of East Asia during 1979-2013, and the maximum length of wet spell with a minimum of 20 mm precipitation corresponds to about 10 days (Figure 1). We then linked the extreme precipitation events with the intense moisture transport events associated with atmospheric rivers (ARs). The ARs are identified by computing the vertically integrated horizontal water vapor transport (IVT) between 1000 hPa and 300 hPa, requiring IVT ≥ 250 kg/m/s and a minimum length of 2000 km. With this threshold and condition (set by previous research), our results indicate that some extreme precipitation events are associated with ARs over East Asia, while others are not associated with any AR. Similarly, some ARs are associated with extreme precipitation events, while others are not associated with any event. Since ARs are sensitive to the threshold and condition depending on the region, we will analyze the characteristics of ARs (frequency, duration, and annual variability) with different thresholds and discuss their relationship with extreme precipitation events over East Asia.
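The IVT threshold step can be written out as in the Python sketch below: the water-vapour transport is integrated over pressure between 1000 and 300 hPa, its magnitude is compared with the 250 kg/m/s criterion, and the resulting mask is what a subsequent 2000 km length check would operate on. The grid shape, synthetic fields and simple trapezoidal integration are illustrative assumptions, not the study's analysis code.

import numpy as np

G = 9.81  # m s-2

def _vertical_integral(field, p):
    """Trapezoidal integral of `field` over pressure (axis 0, Pa)."""
    dp = np.diff(p)
    mid = 0.5 * (field[1:] + field[:-1])
    return np.tensordot(dp, mid, axes=(0, 0))

def ivt_magnitude(q, u, v, p_levels):
    """Vertically integrated water-vapour transport (kg m-1 s-1) from specific
    humidity q (kg/kg) and winds u, v (m/s) on pressure levels p_levels (Pa),
    here spanning 1000-300 hPa.  Arrays are (level, lat, lon); schematic only."""
    order = np.argsort(p_levels)
    p = np.asarray(p_levels, dtype=float)[order]
    qu = _vertical_integral((q * u)[order], p) / G
    qv = _vertical_integral((q * v)[order], p) / G
    return np.hypot(qu, qv)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    levels = np.array([1000., 925., 850., 700., 500., 400., 300.]) * 100.0  # Pa
    shape = (levels.size, 40, 80)
    q = np.clip(rng.normal(5e-3, 2e-3, shape), 0.0, None)
    u, v = rng.normal(10.0, 5.0, shape), rng.normal(2.0, 5.0, shape)
    ivt = ivt_magnitude(q, u, v, levels)
    ar_cells = ivt >= 250.0          # the 2000 km length check would follow on this mask
    print("grid cells over the 250 kg/m/s AR threshold:", int(ar_cells.sum()), "of", ivt.size)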
Smits, Cas; Merkus, Paul; Festen, Joost M.; Goverts, S. Theo
2017-01-01
Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability compared with other cognitive factors on speech recognition of 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and to a lesser extent poor lexical-access ability limit speech-recognition ability in listeners with a CI. PMID:29205095
Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.
Louthain, James A; Schmidt, Jason D
2010-04-26
Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly, adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off-keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BER's between 10(-5) to 10(-3), consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest Rytov number scenario.
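A toy demonstration of the mechanism, the combination of transmitter averaging and a slowly adapting decision threshold, is sketched in Python below: log-normal fading from two transmitters is averaged, on-off-keyed bits are detected against either a fixed mid-level threshold or a threshold set halfway between windowed estimates of the 'on' and 'off' levels, and the two bit-error rates are compared. The channel statistics, SNR and update rule are illustrative assumptions, and the sketch is not expected to reproduce the dB gains reported above.

import numpy as np

def ook_ber(n_bits=50000, n_tx=2, rytov_var=0.2, snr_db=12.0,
            window=200, update_every=50, seed=6):
    """Bit-error rates for on-off-keyed free-space optical reception through
    scintillation: a fixed mid-level threshold versus an adaptive threshold
    set halfway between windowed estimates of the 'on' and 'off' levels.
    Log-normal fading is averaged over `n_tx` transmitters; channel and
    receiver parameters are illustrative."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    sigma2 = rytov_var
    # independent log-normal fading per transmitter, unit-mean irradiance
    fades = rng.lognormal(-sigma2 / 2.0, np.sqrt(sigma2), (n_tx, n_bits))
    rx = bits * fades.mean(axis=0) + rng.normal(0.0, 10 ** (-snr_db / 20.0), n_bits)

    err_fixed = np.mean((rx > 0.5) != bits)               # fixed mid-level threshold

    errors, thresh = 0, 0.5
    for i in range(n_bits):                               # adaptive, slowly updated threshold
        errors += (rx[i] > thresh) != bits[i]
        if i >= window and i % update_every == 0:
            recent = rx[i - window:i]
            on, off = recent[recent > thresh], recent[recent <= thresh]
            if on.size and off.size:
                thresh = 0.5 * (on.mean() + off.mean())
    return float(err_fixed), errors / n_bits

if __name__ == "__main__":
    fixed, adaptive = ook_ber()
    print(f"BER, fixed threshold: {fixed:.4f}   BER, adaptive threshold: {adaptive:.4f}")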
Excitation-based and informational masking of a tonal signal in a four-tone masker.
Leibold, Lori J; Hitchens, Jack J; Buss, Emily; Neff, Donna L
2010-04-01
This study examined contributions of peripheral excitation and informational masking to the variability in masking effectiveness observed across samples of multi-tonal maskers. Detection thresholds were measured for a 1000-Hz signal presented simultaneously with each of 25, four-tone masker samples. Using a two-interval, forced-choice adaptive task, thresholds were measured with each sample fixed throughout trial blocks for ten listeners. Average thresholds differed by as much as 26 dB across samples. An excitation-based model of partial loudness [Moore, B. C. J. et al. (1997). J. Audio Eng. Soc. 45, 224-237] was used to predict thresholds. These predictions accounted for a significant portion of variance in the data of several listeners, but no relation between the model and data was observed for many listeners. Moreover, substantial individual differences, on the order of 41 dB, were observed for some maskers. The largest individual differences were found for maskers predicted to produce minimal excitation-based masking. In subsequent conditions, one of five maskers was randomly presented in each interval. The difference in performance for samples with low versus high predicted thresholds was reduced in random compared to fixed conditions. These findings are consistent with a trading relation whereby informational masking is largest for conditions in which excitation-based masking is smallest.
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient which is an equivalence to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm based on two sets of physically realisable system input-output data details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when local minimum is present as the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and a gas turbine engine real system for model identification, respectively. Finally, the graphical validation of threshold on a two-dimensional plot is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua D.; Hartse, Hans
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
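To make the screening idea concrete, the sketch below shows one generic form of a multichannel correlation detector: each channel's template is correlated against its continuous trace, the single-channel correlograms are averaged, and windows whose stacked correlation exceeds a detection threshold are flagged. The threshold value and the plain averaging are illustrative assumptions, not the detection statistic or magnitude calibration used in the study.

```python
import numpy as np

def normalized_xcorr(template, trace):
    """Sliding normalized cross-correlation of a template against a longer trace."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(trace) - n + 1)
    for i in range(len(out)):
        w = trace[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        out[i] = np.dot(t, w) / n
    return out

def multichannel_detector(templates, traces, threshold=0.6):
    """Stack per-channel correlograms and flag candidate detections.

    templates, traces : lists of 1-D arrays, one pair per array channel,
    all sampled at the same rate; traces (and templates) must share a
    common length so the correlograms can be averaged sample by sample.
    """
    stack = np.mean([normalized_xcorr(t, d) for t, d in zip(templates, traces)],
                    axis=0)
    return stack, np.flatnonzero(stack >= threshold)
```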
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
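The event metrics lend themselves to a compact implementation. The sketch below computes frequency, duration, magnitude, and an area-style metric (degree-hours above threshold) from an hourly temperature record; the exact metric definitions are not spelled out in the abstract, so these are reasonable assumptions.

```python
import numpy as np

def event_metrics(temps, threshold):
    """Summarize exceedance events in an hourly temperature series.

    temps     : 1-D array of hourly water temperatures [deg C].
    threshold : exceedance threshold, e.g. 16, 18, 20 or 22 deg C.
    """
    temps = np.asarray(temps, dtype=float)
    above = temps > threshold

    # locate starts and ends of contiguous runs above the threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above.size and above[0]:
        starts = np.r_[0, starts]
    if above.size and above[-1]:
        ends = np.r_[ends, above.size]

    durations = ends - starts                                                # hours
    areas = np.array([np.sum(temps[s:e] - threshold) for s, e in zip(starts, ends)])
    magnitudes = np.array([np.max(temps[s:e] - threshold) for s, e in zip(starts, ends)])
    return {"frequency": len(starts),       # number of exceedance events
            "duration_h": durations,
            "area_degh": areas,             # degree-hours above threshold
            "magnitude_deg": magnitudes}    # peak exceedance per event
```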
Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.
Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C
2017-06-01
An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an imaging analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested for calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and that improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
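A rough outline of this kind of per-particle thresholding is given below. It is not the published FIJI plugin: it assumes bright particles on a dark background, a permissive first-pass threshold to find candidate objects, and an illustrative per-particle level set as a fixed fraction of each object's maximum gray value.

```python
import numpy as np
from scipy import ndimage

def variable_threshold_segment(img, global_thresh, fraction=0.5):
    """Re-threshold each candidate particle at a level tied to its own maximum.

    img           : 2-D grayscale image (float), bright particles on dark background.
    global_thresh : permissive first-pass threshold used to find candidates.
    fraction      : per-particle threshold as a fraction of that particle's
                    maximum gray value (illustrative choice).
    """
    candidates, n = ndimage.label(img > global_thresh)
    refined = np.zeros_like(candidates)
    for lab in range(1, n + 1):
        region = candidates == lab
        local_level = fraction * img[region].max()
        # high-contrast particles get tighter boundaries; translucent ones,
        # whose local level falls below the global threshold, are kept intact
        refined[region & (img >= local_level)] = lab
    return refined
```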
Physical characterization of intertidal estuarine plant habitats over time may reveal distribution-limiting thresholds. Temperature data from loggers embedded in sediment in transects crossing Zostera marina and Z. japonica habitats in lower Yaquina Bay, Oregon display signific...
Amarillo, Yimy; Mato, Germán; Nadal, Marcela S
2015-01-01
Thalamocortical neurons are involved in the generation and maintenance of brain rhythms associated with global functional states. The repetitive burst firing of TC neurons at delta frequencies (1-4 Hz) has been linked to the oscillations recorded during deep sleep and during episodes of absence seizures. To get insight into the biophysical properties that are the basis for intrinsic delta oscillations in these neurons, we performed a bifurcation analysis of a minimal conductance-based thalamocortical neuron model including only the IT channel and the sodium and potassium leak channels. This analysis unveils the dynamics of repetitive burst firing of TC neurons, and describes how the interplay between the amplifying variable mT and the recovering variable hT of the calcium channel IT is sufficient to generate low threshold oscillations in the delta band. We also explored the role of the hyperpolarization activated cationic current Ih in this reduced model and determine that, albeit not required, Ih amplifies and stabilizes the oscillation.
Pulp Sensitivity: Influence of Sex, Psychosocial Variables, COMT Gene, and Chronic Facial Pain.
Mladenovic, Irena; Krunic, Jelena; Supic, Gordana; Kozomara, Ruzica; Bokonjic, Dejan; Stojanovic, Nikola; Magic, Zvonko
2018-05-01
The purpose of this study was to evaluate the associations of variability in pulp sensitivity with sex, psychosocial variables, the gene that encodes for the enzyme catechol-O-methyltransferase (COMT), and chronic painful conditions (temporomandibular disorders [TMDs]). The study was composed of 97 subjects (68 women and 29 men aged 20-44 years). The electric (electric pulp tester) and cold (refrigerant spray) stimuli were performed on mandibular lateral incisors. The results were expressed as pain threshold values for electric pulp stimulation (0-80 units) and as pain intensity scores (visual numeric scale from 0-10) for cold stimulation. The Research Diagnostic Criteria for TMD were used to assess TMD, depression, and somatization. DNA extracted from peripheral blood was genotyped for 3 COMT polymorphisms (rs4680, rs6269, and rs165774) using the real-time TaqMan method. Multivariate linear regression was used to investigate the joint effect of the predictor variables (clinical and genetic) on pulp sensitivity (dependent variables). Threshold responses to electric stimuli were related to female sex (P < .01) and the homozygous GG genotype for the rs165774 polymorphism (P < .05). Pain intensity to cold stimuli was higher in TMD patients (P < .01) and tended to be higher in women. Multivariate linear regression identified sex and the rs165774 COMT polymorphism as the determinants of electric pain sensitivity, whereas TMD accounts for the variability in the cold response. Our findings indicate that sex/a COMT gene variant and TMD as a chronic painful condition may contribute to individual variation in electric and cold pulp sensitivity, respectively. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Do we need a threshold conception of competence?
den Hartogh, Govert
2016-03-01
On the standard view we assess a person's competence by considering her relevant abilities without reference to the actual decision she is about to make. If she is deemed to satisfy certain threshold conditions of competence, it is still an open question whether her decision could ever be overruled on account of its harmful consequences for her ('hard paternalism'). In practice, however, one normally uses a variable, risk dependent conception of competence, which really means that in considering whether or not to respect a person's decision-making authority we weigh her decision on several relevant dimensions at the same time: its harmful consequences, its importance in terms of the person's own relevant values, the infringement of her autonomy involved in overruling it, and her decision-making abilities. I argue that we should openly recognize the multi-dimensional nature of this judgment. This implies rejecting both the threshold conception of competence and the categorical distinction between hard and soft paternalism.
Climate and floods still govern California levee breaks
Florsheim, J.L.; Dettinger, M.D.
2007-01-01
Even in heavily engineered river systems, climate still governs flood variability and thus still drives many levee breaks and geomorphic changes. We assemble a 155-year record of levee breaks for a major California river system to find that breaks occurred in 25% of years during the 20th Century. A relation between levee breaks and river discharge is present that sets a discharge threshold above which most levee breaks occurred. That threshold corresponds to small floods with recurrence intervals of ∼2-3 years. Statistical analysis illustrates that levee breaks and peak discharges cycle (broadly) on a 12-15 year time scale, in time with warm-wet storm patterns in California, but more slowly or more quickly than ENSO and PDO climate phenomena, respectively. Notably, these variations and thresholds persist through the 20th Century, suggesting that historical flood-control effects have not reduced the occurrence or frequency of levee breaks. Copyright 2007 by the American Geophysical Union.
The effect of variably tinted spectacle lenses on visual performance in cataract subjects.
Naidu, Srilata; Lee, Jason E; Holopigian, Karen; Seiple, William H; Greenstein, Vivienne C; Stenson, Susan M
2003-01-01
A body of clinical and laboratory evidence suggests that tinted spectacle lenses may have an effect on visual performance. The aim of this study was to quantify the effects of spectacle lens tint on the visual performance of 25 subjects with cataracts. Cataracts were scored based on best-corrected acuity and by comparison with the Lens Opacity Classification System (LOCS III) plates. Visual performance was assessed by measuring contrast sensitivity with and without glare (Morphonome software version 4.0). The effect of gray, brown, yellow, green and purple tinting was evaluated. All subjects demonstrated an increase in contrast thresholds under glare conditions regardless of lens tint. However, brown and yellow lens tints resulted in the least amount of contrast threshold increase. Gray lens tint resulted in the largest contrast threshold increase. Individuals with lenticular changes may benefit from brown or yellow spectacle lenses under glare conditions.
An Algorithm to Automate Yeast Segmentation and Tracking
Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.
2013-01-01
Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
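The multi-threshold idea can be illustrated with a short consensus-segmentation sketch: segment the image at a set of thresholds and keep pixels that are foreground in a majority of the individual segmentations. This is a generic stand-in only; it omits the yeast-specific priors (immobility, growth rate) and the tracking step described in the abstract.

```python
import numpy as np
from scipy import ndimage

def consensus_segmentation(img, thresholds, min_votes=None):
    """Combine segmentations obtained at several thresholds by majority vote.

    img        : 2-D grayscale image (float); bright cells on a dark background assumed.
    thresholds : iterable of threshold values to apply.
    min_votes  : minimum number of thresholds at which a pixel must be
                 foreground (defaults to a simple majority).
    """
    thresholds = list(thresholds)
    if min_votes is None:
        min_votes = len(thresholds) // 2 + 1
    votes = sum((img > t).astype(int) for t in thresholds)
    mask = votes >= min_votes
    labels, n_cells = ndimage.label(mask)   # label the consensus objects
    return labels, n_cells
```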
SU-F-J-32: Do We Need KV Imaging During CBCT Based Patient Set-Up for Lung Radiation Therapy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Zhou, J; Prado, K
Purpose: To evaluate the role of 2D kilovoltage (kV) imaging to complement cone beam CT (CBCT) imaging in a shift threshold based image guided radiation therapy (IGRT) strategy for conventional lung radiotherapy. Methods: A retrospective study was conducted by analyzing IGRT couch shift trends for 15 patients that received lung radiation therapy to evaluate the benefit of performing orthogonal kV imaging prior to CBCT imaging. Herein, a shift threshold based IGRT protocol was applied, which would mandate additional CBCT verification if the applied patient shifts exceeded 3 mm to avoid intraobserver variability in CBCT registration and to confirm table shifts. For each patient, two IGRT strategies: kV + CBCT and CBCT alone, were compared and the recorded patient shifts were categorized into whether additional CBCT acquisition would have been mandated or not. The effectiveness of either strategy was gauged by the likelihood of needing additional CBCT imaging for accurate patient set-up. Results: The use of CBCT alone was 6 times more likely to require an additional CBCT than kV + CBCT, for a 3 mm shift threshold (88% vs 14%). The likelihood of additional CBCT verification generally increased with lower shift thresholds, and was significantly lower when kV + CBCT was used (7% with 5 mm shift threshold, 36% with 2 mm threshold), than with CBCT alone (61% with 5 mm shift threshold, 97% with 2 mm threshold). With CBCT alone, treatment time increased by 2.2 min and dose increased by 1.9 cGy per fraction on average due to additional CBCT with a 3 mm shift threshold. Conclusion: The benefit of kV imaging to screen for gross misalignments led to more accurate CBCT based patient localization compared with using CBCT alone. The subsequently reduced need for additional CBCT verification will minimize treatment time and result in less overall patient imaging dose.
Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-02-01
To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (range = 45-78) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to using audiometric testing, the authors also used such evaluation measures as the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
Accumulate-Repeat-Accumulate-Accumulate-Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved for rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, their threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high rate codes close to code rate 1 can be obtained with thresholds that stay close to the channel capacity thresholds uniformly. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high speed decoder implementation.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
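The flattening effect of classical exposure error is easy to reproduce numerically. The short Monte Carlo sketch below (all values invented for illustration) builds a threshold-shaped truth, adds random error to the exposure, and prints the binned dose-response; with enough error the apparent curve rises smoothly from the lowest exposures, mimicking a linear no-threshold shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_response(x, x0=5.0, slope=2.0):
    """Threshold-shaped truth: no effect below x0, linear increase above it."""
    return slope * np.clip(x - x0, 0.0, None)

n = 20000
exposure = rng.uniform(0, 10, n)
outcome = true_response(exposure) + rng.normal(0, 1, n)

bins = np.linspace(0, 10, 11)
for sigma in (0.0, 2.0):                         # SD of classical measurement error
    observed = exposure + rng.normal(0, sigma, n)
    idx = np.digitize(observed, bins)
    means = [outcome[idx == k].mean() for k in range(1, len(bins))]
    print(f"error SD={sigma}: binned dose-response {np.round(means, 2)}")
```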
Estimating sensitivity and specificity for technology assessment based on observer studies.
Nishikawa, Robert M; Pesce, Lorenzo L
2013-07-01
The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data from a question that directly asked the radiologists their biopsy recommendations, which was used as the "truth," because it is the actual recall decision, thus it is their subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates using thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
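The thresholding procedure itself is straightforward, as in the sketch below: sweep a cutoff over the rating scores and compute sensitivity and specificity against the recommendation-based truth at each cutoff. The scores and labels shown are invented for illustration.

```python
import numpy as np

def sens_spec_by_threshold(ratings, truth):
    """Sensitivity and specificity obtained by thresholding ROC rating data.

    ratings : confidence scores (higher = more suspicious of malignancy).
    truth   : 1 for cases the reader recommended for biopsy, 0 otherwise.
    """
    ratings = np.asarray(ratings, dtype=float)
    truth = np.asarray(truth, dtype=bool)
    rows = []
    for t in np.unique(ratings):
        call = ratings >= t
        rows.append((t, np.mean(call[truth]), np.mean(~call[~truth])))
    return rows   # (threshold, sensitivity, specificity) per cutoff

# illustrative data only, not the study's ratings
for t, se, sp in sens_spec_by_threshold([10, 35, 60, 80, 90, 20, 55, 75],
                                        [0, 0, 1, 1, 1, 0, 0, 1]):
    print(f"threshold {t:>4.0f}: sensitivity {se:.2f}, specificity {sp:.2f}")
```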
McGee, John Christopher; Wilson, Eric; Barela, Haley; Blum, Sharon
2017-03-01
Air Liaison Officer Aptitude Assessment (AAA) attrition is often associated with a lack of candidate physical preparation. The Functional Movement Screen, Tactical Fitness Assessment, and fitness metrics were collected (n = 29 candidates) to determine what physical factors could predict a candidate's success in completing AAA. Between-group comparisons were made between candidates completing AAA versus those who did not (p < 0.05). Upper 50% thresholds were established for all variables with R² < 0.8 and the data were converted to a binary form (0 = did not attain threshold, 1 = attained threshold). Odds-ratios, pre/post-test probabilities and positive likelihood ratios were computed and logistic regression applied to explain model variance. The following variables provided the most predictive value for AAA completion: Pull-ups (p = 0.01), Sit-ups (p = 0.002), Relative Powerball Toss (p = 0.017), and Pull-ups × Sit-ups interaction (p = 0.016). Minimum recommended guidelines for AAA screening are Pull-ups (10 maximum), Sit-ups (76/2 minutes), and a Relative Powerball Toss of 0.6980 ft × lb/BW. Associated benefits could be higher graduation rates and cost savings from reduced temporary duty and possible injury care for nonselected candidates. Recommended guidelines should be validated in future class cycles. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Hürlimann, Marcel; Abancó, Clàudia; Moya, Jose; Berenguer, Marc
2015-04-01
Empirical rainfall thresholds are a widespread technique in debris-flow hazard assessment and can be established by statistical analysis of historic data. Typically, data from one or several rain gauges located near the affected catchment are used to define the triggering conditions. However, this procedure has been demonstrated not to be accurate enough due to the spatial variability of convective rainstorms. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, 28 torrential flows (debris flows and debris floods) have occurred and rainfall data of 25 of them are available with a 5-minute recording frequency ("event rainfalls"). Another 142 rainfalls that did not trigger events ("no event rainfalls") were also collected and analysed. The goal of this work was threefold: a) characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential events and others that did not; b) define and test Intensity-Duration (ID) thresholds using rainfall data measured inside the catchment; c) estimate the uncertainty derived from the use of rain gauges located outside the catchment based on the spatial correlation depicted by radar rainfall maps. The results of the statistical analysis showed that the parameters that best distinguish between the two populations of rainfalls are the rainfall intensities, the mean rainfall and the total precipitation. On the other hand, the storm duration and the antecedent rainfall are not significantly different between "event rainfalls" and "no event rainfalls". Four different ID rainfall thresholds were derived based on the dataset of the first 5 years and tested using the 2014 dataset. The results of the test indicated that the threshold corresponding to the 90th percentile showed the best performance. Weather radar data was used to analyse the spatial variability of the triggering rainfalls. The analysis indicates that rain gauges outside the catchment may be considered useful or not to describe the rainfall depending on the type of rainfall. For widespread rainfalls, more distant rain gauges can give a reliable measurement, because the spatial correlation decreases slowly with the distance between the rain gauge and the debris-flow initiation area. In contrast, local storm cells show higher space-time variability and, therefore, representative rainfall measurements are obtained only by the closest rain gauges. In conclusion, the definition of rainfall thresholds is a delicate task. When the rainfall records come from gauges that are outside the catchment under consideration, the data should be carefully analysed and crosschecked with radar data (especially for small convective cells).
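One common recipe for an ID threshold of the form I = a·D^b is to fit a line to the triggering events in log-log space and then shift its intercept to a chosen residual percentile. The sketch below follows that recipe; mapping the abstract's "90th percentile" threshold onto a specific residual percentile is an assumption, as are the variable names.

```python
import numpy as np

def id_threshold(durations_h, intensities_mmh, percentile=10):
    """Fit an Intensity-Duration threshold I = a * D**b to triggering rainfalls.

    durations_h     : storm durations of the "event rainfalls" [h].
    intensities_mmh : corresponding mean intensities [mm/h].
    percentile      : residual percentile for the intercept; percentile=10
                      leaves roughly 90% of triggering events above the curve.
    """
    logD = np.log10(np.asarray(durations_h, dtype=float))
    logI = np.log10(np.asarray(intensities_mmh, dtype=float))
    b, a0 = np.polyfit(logD, logI, 1)              # central regression line
    resid = logI - (a0 + b * logD)
    a = 10 ** (a0 + np.percentile(resid, percentile))
    return a, b                                     # threshold curve I = a * D**b
```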
Differential effects of two virtual reality interventions: distraction versus pain control.
Loreto-Quijada, Desirée; Gutiérrez-Maldonado, José; Nieto, Rubén; Gutiérrez-Martínez, Olga; Ferrer-García, Marta; Saldaña, Carmina; Fusté-Escolano, Adela; Liutsko, Liudmila
2014-06-01
There is evidence that virtual reality (VR) pain distraction is effective at improving pain-related outcomes. However, more research is needed to investigate VR environments with other pain-related goals. The main aim of this study was to compare the differential effects of two VR environments on a set of pain-related and cognitive variables during a cold pressor experiment. One of these environments aimed to distract attention away from pain (VRD), whereas the other was designed to enhance pain control (VRC). Participants were 77 psychology students, who were randomly assigned to one of the following three conditions during the cold pressor experiment: (a) VRD, (b) VRC, or (c) Non-VR (control condition). Data were collected regarding both pain-related variables (intensity, tolerance, threshold, time perception, and pain sensitivity range) and cognitive variables (self-efficacy and catastrophizing). Results showed that in comparison with the control condition, the VRC intervention significantly increased pain tolerance, the pain sensitivity range, and the degree of time underestimation. It also increased self-efficacy in tolerating pain and led to a reduction in reported helplessness. The VRD intervention significantly increased the pain threshold and pain tolerance in comparison with the control condition, but it did not affect any of the cognitive variables. Overall, the intervention designed to enhance control seems to have a greater effect on the cognitive variables assessed. Although these results need to be replicated in further studies, the findings suggest that the VRC intervention has considerable potential in terms of increasing self-efficacy and modifying the negative thoughts that commonly accompany pain problems.
Soil texture and climatic conditions for biocrust growth limitation: a meta-analysis
NASA Astrophysics Data System (ADS)
Fischer, Thomas; Subbotina, Mariia
2015-04-01
Along with afforestation, attempts have been made to combat desertification by managing soil crusts, and it has been reported that recovery rates of biocrusts are dependent on many factors, including the type, severity, and extent of disturbance; structure of the vascular plant community; conditions of adjoining substrates; availability of inoculation material; and climate during and after disturbance (Belnap & Eldridge 2001). Because biological soil crusts are known to be more stable on and to prefer fine substrates (Belnap 2001), the question arises as to how successful crust management practices can be applied to coarser soil. In previous studies we observed similar crust biomasses on finer soils under arid and on coarser soils under temperate conditions. We hypothesized that the higher water holding capacity of finer substrates would favor crust development, and that the amount of silt and clay in the substrate that is required for enhanced crust development would vary with changes in climatic conditions. In a global meta-analysis, climatic and soil texture threshold values promoting BSC growth were derived. While examining literature sources, it became evident that the number of studies that could be incorporated into this meta-analysis was inversely related to the number of common environmental parameters they share. We selected annual mean precipitation, mean temperature and the amount of silt and clay as driving variables for crust growth. The response variable was the "relative crust biomass", computed per literature source as the ratio of each individual crust biomass value to the maximum value reported in that study. We distinguished lichen, green algal, cyanobacterial and moss crusts. To quantify threshold conditions at which crust biomass responded to differences in texture and climate, we (I) determined correlations between bioclimatic variables, (II) calculated linear models to determine the effect of typical climatic variables together with soil clay content, with study site as a random effect, and (III) identified threshold values of texture and climatic effects using a regression tree. Three mean annual temperature classes for texture-dependent BSC growth limitation were identified: (1) <9 °C with a threshold value of 25% silt and clay (limited growth on coarser soils), (2) 9-19 °C, where texture had no influence on relative crust biomass, and (3) >19 °C on soils with <4% or >17% silt and clay. Because biocrust development is limited under certain climatic and soil texture conditions, we suggest considering soil texture for biocrust rehabilitation purposes and in biogeochemical modeling of cryptogamic ground covers. References: Belnap, J. & Eldridge, D. 2001. Disturbance and Recovery of Biological Soil Crusts. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Belnap, J. 2001. Biological Soil Crusts and Wind Erosion. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Fischer, T., Subbotina, M. 2014. Climatic and soil texture threshold values for cryptogamic cover development: a meta analysis. Biologia 69/11: 1520-1530.
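Step (III) above can be illustrated with a small regression-tree fit. The data generated below are synthetic, merely shaped to echo the reported temperature and texture thresholds, and the tree settings are arbitrary; the point is only to show how the split points recovered by a regression tree serve as threshold estimates.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
temp = rng.uniform(0, 30, 300)            # mean annual temperature [deg C]
silt_clay = rng.uniform(0, 60, 300)       # silt + clay content [%]

# synthetic relative crust biomass, loosely echoing the reported classes
biomass = np.where(temp < 9, np.where(silt_clay > 25, 0.8, 0.3),
          np.where(temp < 19, 0.7,
          np.where((silt_clay < 4) | (silt_clay > 17), 0.6, 0.2)))
biomass = np.clip(biomass + rng.normal(0, 0.05, 300), 0, 1)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20)
tree.fit(np.column_stack([temp, silt_clay]), biomass)
print(export_text(tree, feature_names=["MAT_degC", "silt_clay_pct"]))
```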
Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2014-02-01
To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.
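The ROC step can be summarized in a few lines: sweep candidate BMI-percentile cutoffs and keep the one maximizing Youden's J (sensitivity + specificity - 1). The sketch below uses plain numpy; sklearn.metrics.roc_curve would give equivalent coordinates. Variable names are assumptions.

```python
import numpy as np

def youden_optimal_threshold(bmi_percentile, has_mets):
    """Cutoff maximizing sensitivity + specificity - 1 (Youden's J).

    bmi_percentile : BMI percentile per adolescent.
    has_mets       : 1 if metabolic syndrome is present, 0 otherwise.
    """
    scores = np.asarray(bmi_percentile, dtype=float)
    labels = np.asarray(has_mets, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        j = np.mean(pred[labels]) + np.mean(~pred[~labels]) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```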
Spatial and temporal variability in forest-atmosphere CO2 exchange
D.Y. Hollinger; J. Aber; B. Dail; E.A. Davidson; S.M. Goltz; et al.
2004-01-01
Seven years of carbon dioxide flux measurements indicate that a ∼90-year-old spruce-dominated forest in Maine, USA, has been sequestering 174 ± 46 g C m-2 yr-1 (mean ± 1 standard deviation, nocturnal friction velocity (u*) threshold >0.25 m s-1...
Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia
2018-07-14
In this paper, the correlation coefficients between nodes in states are used as dynamic variables, and we construct SIR epidemic dynamic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Considering the clustering coefficient of the network, we analytically investigate the existence and the local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of diseases. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean field equations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Barbieri, Marco; Drummond, Michael; Willke, Richard; Chancellor, Jeremy; Jolain, Bruno; Towse, Adrian
2005-01-01
It has long been suggested that, whereas the results of clinical studies of pharmaceuticals are generalizable from one jurisdiction to another, the results of economic evaluations are location dependent. There has been, however, little study of the causes of variation, whether differences in study results among countries are systematic, or whether they are important for decision making. A literature search was conducted to identify economic evaluations of pharmaceuticals conducted in two or more European countries. The studies identified were then classified by methodological type and analyzed to assess their level of variability and to identify the main causes of variation. Assessments were also made of the extent to which differences in study results among countries were systematic and whether they would lead to a different decision, assuming a range of values of the threshold willingness-to-pay for a life-year or quality-adjusted life-year (QALY). In total 46 intercountry drug comparisons were identified, 29 in multicountry studies and 17 in comparable single country studies that were considered to be sufficiently similar in terms of methodology. The type of study (i.e., trial-based or modeling study) had some impact on variability, but the most important factor was the extent of variation across countries in effectiveness, resource use or unit costs, allowed by the researcher's chosen methodology. There were few systematic differences in study results among countries, so a decision maker in country B, on seeing a recent economic evaluation of a new drug in country A, would have little basis on which to predict whether the drug, if evaluated, would be more or less cost-effective in his or her country. Given the extent of variation in cost-effectiveness estimates among countries, the importance of this for decision making depends on decision makers' thresholds in willingness-to-pay for a QALY or life-year. If a cost-effectiveness threshold (i.e., willingness-to-pay) for a life-year or QALY of $50,000 were assumed, the same conclusion regarding cost-effectiveness would be reached in most cases. This review shows that cost-effectiveness results for pharmaceuticals vary from country to country in Western Europe and that these variations are not systematic. In addition, constraints imposed by analysts may reduce apparent variability in the estimates. The lessons for inferring generalizability are not straightforward, although the implications of variation for decision making depend critically on the cost-effectiveness thresholds applying in Western Europe.
Le Prell, C. G.; Dell, S.; Hensley, B.; Hall, J. W.; Campbell, K. C. M.; Antonelli, P. J.; Green, G. E.; Miller, J. M.; Guire, K.
2012-01-01
Objectives One of the challenges for evaluating new otoprotective agents for potential benefit in human populations is availability of an established clinical paradigm with real world relevance. These studies were explicitly designed to develop a real-world digital music exposure that reliably induces temporary threshold shift (TTS) in normal hearing human subjects. Design Thirty-three subjects participated in studies that measured effects of digital music player use on hearing. Subjects selected either rock or pop music, which was then presented at 93–95 (n=10), 98–100 (n=11), or 100–102 (n=12) dBA in-ear exposure level for a period of four hours. Audiograms and distortion product otoacoustic emissions (DPOAEs) were measured prior to and after music exposure. Post-music tests were initiated 15 min, 1 hr 15 min, 2 hr 15 min, and 3 hr 15 min after the exposure ended. Additional tests were conducted the following day and one week later. Results Changes in thresholds after the lowest level exposure were difficult to distinguish from test-retest variability; however, TTS was reliably detected after higher levels of sound exposure. Changes in audiometric thresholds had a “notch” configuration, with the largest changes observed at 4 kHz (mean=6.3±3.9dB; range=0–13 dB). Recovery was largely complete within the first 4 hours post-exposure, and all subjects showed complete recovery of both thresholds and DPOAE measures when tested 1-week post-exposure. Conclusions These data provide insight into the variability of TTS induced by music player use in a healthy, normal-hearing, young adult population, with music playlist, level, and duration carefully controlled. These data confirm the likelihood of temporary changes in auditory function following digital music player use. Such data are essential for the development of a human clinical trial protocol that provides a highly powered design for evaluating novel therapeutics in human clinical trials. Care must be taken to fully inform potential subjects in future TTS studies, including protective agent evaluations, that some noise exposures have resulted in neural degeneration in animal models, even when both audiometric thresholds and DPOAE levels returned to pre-exposure values. PMID:22885407
NASA Astrophysics Data System (ADS)
Chefranov, Sergey; Chefranov, Alexander
2016-04-01
Linear hydrodynamic stability theory for the Hagen-Poiseuille (HP) flow yields an infinitely large threshold Reynolds number, Re. This contradiction with observational data is usually bypassed by assuming that the HP flow instability is of hard type, possible only for sufficiently high-amplitude disturbances, so that HP flow disturbance evolution must be treated by nonlinear hydrodynamic stability theory. The same applies to the plane Couette (PC) flow. For the plane Poiseuille (PP) flow, linear theory disagrees with experiment only quantitatively, defining a threshold Reynolds number Re = 5772 (S. A. Orszag, 1971) that nevertheless exceeds the observed value, Re = 1080 (S. J. Davies, C. M. White, 1928), more than five-fold. In the present work, we show that the linear stability theory conclusions of stability at any Reynolds number for the HP and PC flows, and the evidently too-high threshold Reynolds number estimate for the PP flow, are related to the traditional use of a disturbance representation that assumes the longitudinal variable (along the flow direction) can be separated from the other spatial variables. We show that if this traditional form is abandoned, linear instability for the HP and PC flows is obtained at finite Reynolds numbers (Re > 704 for the HP flow and Re > 139 for the PC flow). We also bring the linear stability theory conclusion for the PP flow into agreement with the experimental data, obtaining an estimate of the minimal threshold Reynolds number of Re = 1040, and the minimal threshold Reynolds number estimate for PC agrees with the experimental data of S. Bottin et al., 1997, where the laminar PC flow stability threshold is Re = 150. A rogue-wave excitation mechanism in oppositely directed currents due to the PC flow linear instability is discussed. Results of the new linear hydrodynamic stability theory for the HP, PP, and PC flows are published in the following papers: 1. S. G. Chefranov, A. G. Chefranov, JETP, v. 119, No. 2, 331, 2014; 2. S. G. Chefranov, A. G. Chefranov, Doklady Physics, vol. 60, No. 7, 327-332, 2015; 3. S. G. Chefranov, A. G. Chefranov, arXiv:1509.08910v1 [physics.flu-dyn], 29 Sep 2015 (accepted to JETP).
Abejón, David; Rueda, Pablo; del Saz, Javier; Arango, Sara; Monzón, Eva; Gilsanz, Fernando
2015-04-01
Neurostimulation is the process and technology derived from the application of electricity with different parameters to activate or inhibit nerve pathways. Pulse width (Pw) is the duration of each electrical impulse and, along with amplitude (I), determines the total energy charge of the stimulation. The aim of the study was to test Pw values to find the most adequate pulse widths in rechargeable systems to obtain the largest coverage of the painful area, the most comfortable paresthesia, and the greatest patient satisfaction. A study of the parameters was performed, varying Pw while maintaining a fixed frequency at 50 Hz. Data on perception threshold (Tp), discomfort threshold (Td), and therapeutic threshold (Tt) were recorded, applying 14 increasing Pw values ranging from 50 µsec to 1000 µsec. Lastly, the behavior of the therapeutic range (TR), the coverage of the painful area, the subjective patient perception of paresthesia, and the degree of patient satisfaction were assessed. The findings after analyzing the different thresholds were as follows: When varying the Pw, the differences obtained at each threshold (Tp, Tt, and Td) were statistically significant (p < 0.05). The differences among the resulting Tp values and among the resulting Tt values were statistically significant when varying Pw from 50 up to 600 µsec (p < 0.05). For Pw levels 600 µsec and up, no differences were observed in these thresholds. In the case of Td, significant differences existed as Pw increased from 50 to 700 µsec (p ≤ 0.05). The coverage increased in a statistically significant way (p < 0.05) from Pw values of 50 µsec to 300 µsec. Good or very good subjective perception was shown at about Pw 300 µsec. The patient paresthesia coverage was introduced as an extra variable in the chronaxie-rheobase curve, allowing the adjustment of Pw values for optimal programming. The coverage of the patient against the current chronaxie-rheobase formula will be represented on three axes; an extra axis (z) will appear, multiplying each combination of Pw value and amplitude by the percentage of coverage corresponding to those values. Using this new comparison of chronaxie-rheobase curve vs. coverage, maximum Pw values will be obtained that differ from those obtained by classic methods. © 2014 International Neuromodulation Society.
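For readers unfamiliar with the chronaxie-rheobase curve, the classical Weiss strength-duration relation I(Pw) = rheobase × (1 + chronaxie/Pw) is sketched below, together with the coverage-weighted third axis proposed in the abstract. All numerical values are invented for illustration and do not come from the study.

```python
import numpy as np

def threshold_current(pw_us, rheobase_ma, chronaxie_us):
    """Weiss strength-duration relation: I(Pw) = rheobase * (1 + chronaxie / Pw)."""
    pw_us = np.asarray(pw_us, dtype=float)
    return rheobase_ma * (1.0 + chronaxie_us / pw_us)

pw = np.array([50, 100, 200, 300, 450, 600, 800, 1000])                 # pulse widths [µs]
coverage = np.array([0.30, 0.45, 0.60, 0.75, 0.78, 0.80, 0.80, 0.80])   # assumed coverage
i_t = threshold_current(pw, rheobase_ma=2.0, chronaxie_us=250.0)        # assumed parameters

# third axis proposed in the abstract: weight each (Pw, amplitude) pair by coverage
weighted = i_t * coverage
for row in zip(pw, i_t.round(2), coverage, weighted.round(2)):
    print(row)
```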
NASA Astrophysics Data System (ADS)
Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.
2014-07-01
Entanglement, one of the most intriguing aspects of quantum mechanics, manifests itself in different features of quantum states. For this reason, different criteria can be used to verify entanglement. In this paper we review some of the entanglement criteria cast for continuous-variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states. Finally, we present the experimental measurement of a particular realization of the Bell operator over continuous-variable entangled states produced by a sub-threshold type-II optical parametric oscillator (OPO).
Moring, J. Bruce
2009-01-01
In 2001, the U.S. Geological Survey National Water Quality Assessment Program began a series of studies in the contiguous United States to examine the effects of urbanization on the chemical, physical, and biological characteristics of streams. Small streams in the Texas Blackland Prairie level III ecoregion in and near the Dallas-Fort Worth metropolitan area were the focus of one of the studies. The principal objectives of the study, based on data collected in 2003-04 from 28 subbasins of the Trinity River Basin, were to (1) define a gradient of urbanization for small Blackland Prairie streams in the Trinity River Basin on the basis of a range of urban intensity indexes (UIIs) calculated using land-use/land-cover, infrastructure, and socioeconomic characteristics; (2) assess the relation between this gradient of urbanization and the chemical, physical, and biological characteristics of these streams; and (3) evaluate the type of relation (that is, linear or nonlinear, and whether there was a threshold response) of the chemical, physical, and biological characteristics of these streams to the gradient of urbanization. Of 94 water-chemistry variables and one measure of potential toxicity from a bioassay, the concentrations of two pesticides (diazinon and simazine) and one measure of potential toxicity (P450RGS assay) from compounds sequestered in semipermeable membrane devices were significantly positively correlated with the UII. No threshold responses to the UII for diazinon and simazine concentrations were observed over the entire range of the UII scores. The linear correlation for diazinon with the UII was significant, but the linear correlation for simazine with the UII was not. No statistically significant relations between the UII and concentrations of suspended sediment, total nitrogen, total phosphorus, or any major ions were indicated. Eleven of 59 physical variables from streamflow were significantly correlated with the UII. Temperature was not significantly correlated with the UII, and none of the physical habitat measurements were significantly correlated with the UII. Seven physical variables categorized as streamflow flashiness metrics were significantly positively correlated with the UII, two of which showed a linear but not a threshold response to the UII. Four flow-duration metrics were significantly negatively correlated with the UII, of which two showed a linear response to the UII, one showed a threshold response, and one showed neither. None of the fish metrics were significantly correlated with the UII in the Blackland Prairie streams. Two qualitative multi-habitat benthic macroinvertebrate metrics, predator richness and percentage filterer-collector richness, were significantly correlated with the UII; predator richness was negatively correlated with the UII, and percentage filterer-collector richness was positively correlated with the UII. No threshold response to the UII was observed for either metric, but both showed a significant linear response to the UII. Three richest targeted habitat (RTH) benthic macroinvertebrate metrics, Margalef's richness, predator richness, and omnivore richness, were significantly negatively correlated with the UII. Margalef's richness was the only RTH metric that indicated a threshold response to the UII. The majority of unique taxa collected in the periphytic algae samples were diatoms.
Six RTH periphytic algae metrics were correlated with the UII and five of the six showed no notable threshold response to the UII; but all five showed significant linear responses to the UII. Only the metric OT_VL_DP, which indicates the presence of algae that are tolerant of low dissolved oxygen conditions, showed a threshold response to the UII. Six depositional target habitat periphytic algae metrics were correlated with the UII, five of which showed no threshold response to the UII; three of the five showed significant linear responses to the UII, one showed a borderline significant
The validity of activity monitors for measuring sleep in elite athletes.
Sargent, Charli; Lastella, Michele; Halson, Shona L; Roach, Gregory D
2016-10-01
There is a growing interest in monitoring the sleep of elite athletes. Polysomnography is considered the gold standard for measuring sleep, however this technique is impractical if the aim is to collect data simultaneously with multiple athletes over consecutive nights. Activity monitors may be a suitable alternative for monitoring sleep, but these devices have not been validated against polysomnography in a population of elite athletes. Participants (n=16) were endurance-trained cyclists participating in a 6-week training camp. A total of 122 nights of sleep were recorded with polysomnography and activity monitors simultaneously. Agreement, sensitivity, and specificity were calculated from epoch-for-epoch comparisons of polysomnography and activity monitor data. Sleep variables derived from polysomnography and activity monitors were compared using paired t-tests. Activity monitor data were analysed using low, medium, and high sleep-wake thresholds. Epoch-for-epoch comparisons showed good agreement between activity monitors and polysomnography for each sleep-wake threshold (81-90%). Activity monitors were sensitive to sleep (81-92%), but specificity differed depending on the threshold applied (67-82%). Activity monitors underestimated sleep duration (18-90min) and overestimated wake duration (4-77min) depending on the threshold applied. Applying the correct sleep-wake threshold is important when using activity monitors to measure the sleep of elite athletes. For example, the default sleep-wake threshold (>40 activity counts=wake) underestimates sleep duration by ∼50min and overestimates wake duration by ∼40min. In contrast, sleep-wake thresholds that have a high sensitivity to sleep (>80 activity counts=wake) yield the best combination of agreement, sensitivity, and specificity. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
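A minimal sketch of the epoch-for-epoch comparison is given below: epochs whose activity counts exceed the chosen sleep-wake threshold are scored as wake, and agreement, sensitivity (to sleep), and specificity (to wake) are computed against the polysomnography scoring. The function and variable names are assumptions.

```python
import numpy as np

def epoch_agreement(activity_counts, psg_sleep, wake_threshold=40):
    """Epoch-for-epoch comparison of an activity monitor against polysomnography.

    activity_counts : counts per epoch from the activity monitor.
    psg_sleep       : True where polysomnography scored the epoch as sleep.
    wake_threshold  : counts above this value are scored as wake
                      (40 is the default setting discussed in the abstract).
    """
    counts = np.asarray(activity_counts, dtype=float)
    psg_sleep = np.asarray(psg_sleep, dtype=bool)
    acti_sleep = counts <= wake_threshold

    agreement = np.mean(acti_sleep == psg_sleep)
    sensitivity = np.mean(acti_sleep[psg_sleep])      # sleep scored as sleep
    specificity = np.mean(~acti_sleep[~psg_sleep])    # wake scored as wake
    return agreement, sensitivity, specificity
```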
Bayes, Adam; Graham, Rebecca K; Parker, Gordon B; McCraw, Stacey
2018-06-01
Recent research indicates that borderline personality disorder (BPD) can be diagnostically differentiated from the bipolar disorders. However, no studies have attempted to differentiate participants with sub-threshold bipolar disorder or SubT BP (where hypomanic episodes last less than 4 days) from those with a BPD. In this study, participants were assigned a SubT BP, bipolar II disorder (BP II) or BPD diagnosis based on clinical assessment and DSM-IV criteria. Participants completed self-report measures and undertook a clinical interview which collected socio-demographic information, a mood history, family history, developmental history, treatment information, and assessed cognitive, emotional and behavioural functioning. Both bipolar groups, whether SubT BP or BP II, differed to the BPD group on a number of key variables (i.e. developmental trauma, depression correlates, borderline personality scores, self-harm and suicide attempts), and compared to each other, returned similar scores on nearly all key variables. Borderline risk scores resulted in comparable classification rates of 0.74 (for BPD vs BP II) and 0.82 (for BPD vs sub-threshold BP II). Study findings indicate that both SubT BP and BP II disorder can be differentiated from BPD on a set of refined clinical variables with comparable accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
Second ventilatory threshold from heart-rate variability: valid when the upper body is involved?
Mourot, Laurent; Fabre, Nicolas; Savoldelli, Aldo; Schena, Federico
2014-07-01
To determine the most accurate method based on spectral analysis of heart-rate variability (SA-HRV) during an incremental and continuous maximal test involving the upper body, the authors tested 4 different methods to obtain the heart rate (HR) at the second ventilatory threshold (VT(2)). Sixteen ski mountaineers (mean ± SD; age 25 ± 3 y, height 177 ± 8 cm, mass 69 ± 10 kg) performed a roller-ski test on a treadmill. Respiratory variables and HR were continuously recorded, and the 4 SA-HRV methods were compared with the gas-exchange method through Bland and Altman analyses. The best method was the one based on a time-varying spectral analysis with high frequency ranging from 0.15 Hz to a cutoff point relative to the individual's respiratory sinus arrhythmia. The HR values were significantly correlated (r² = .903), with a mean HR difference with the respiratory method of 0.1 ± 3.0 beats/min and narrow limits of agreement (around -6/+6 beats/min). The 3 other methods led to larger errors and wider limits of agreement (mean differences up to 5 beats/min and limits around -23/+20 beats/min). It is possible to accurately determine VT(2) with an HR monitor during an incremental test involving the upper body if the appropriate HRV method is used.
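For readers unfamiliar with Bland-Altman analysis, the bias and limits of agreement used to compare each SA-HRV method against the gas-exchange reference can be computed as below. A minimal sketch (Python; the paired HR values are invented for illustration):

import numpy as np

# Hypothetical HR at VT2 (beats/min) for the same athletes, from the gas-exchange
# reference method and from one SA-HRV method.
hr_gas = np.array([172., 168., 175., 181., 166., 170., 178., 174.])
hr_hrv = np.array([171., 169., 176., 180., 168., 169., 177., 175.])

diff = hr_hrv - hr_gas
bias = diff.mean()                           # mean difference between methods
sd   = diff.std(ddof=1)                      # SD of the differences
loa  = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.1f} beats/min, limits of agreement = {loa[0]:.1f} / {loa[1]:.1f}")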
Nutrients are a leading cause of impairments in the United States, and as a result tools are needed to identify drivers of nutrients and response variables (such as chlorophyll a), nutrient sources, and identify causes of exceedances of water quality thresholds. This presentatio...
Zhu, Mingping; Chen, Aiqing
2017-01-01
This study aimed to compare within-subject blood pressure (BP) variabilities from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the above thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP and reduced CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
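The envelope-based estimation described above can be sketched as follows (Python). The pulse-peak data and the fourth-order polynomial are illustrative assumptions; the 0.5/0.7/1.0 amplitude ratios are the thresholds quoted in the abstract:

import numpy as np

# Hypothetical oscillometric pulse peaks: cuff pressure (mmHg) and pulse amplitude (a.u.)
cuff_p = np.array([160, 150, 140, 130, 120, 110, 100,  90,  80,  70,  60])
amp    = np.array([0.2, 0.4, 0.7, 1.1, 1.6, 2.0, 1.8, 1.4, 1.0, 0.6, 0.3])

# Fit a polynomial envelope to the pulse peaks and normalise to its maximum.
coeffs = np.polyfit(cuff_p, amp, deg=4)
grid   = np.linspace(cuff_p.min(), cuff_p.max(), 1000)
env_n  = np.polyval(coeffs, grid)
env_n  = env_n / env_n.max()

map_p = grid[env_n.argmax()]                          # MAP: cuff pressure at the envelope peak
sbp_p = grid[(grid > map_p) & (env_n >= 0.5)].max()   # SBP: 0.5 amplitude ratio above MAP
dbp_p = grid[(grid < map_p) & (env_n >= 0.7)].min()   # DBP: 0.7 amplitude ratio below MAP

print(f"SBP ~ {sbp_p:.0f}, MAP ~ {map_p:.0f}, DBP ~ {dbp_p:.0f} mmHg")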
NASA Astrophysics Data System (ADS)
York, A.; Frey, K. E.; Das, S. B.
2017-12-01
The seasonal and interannual variability in outlet glacier terminus position is an important indicator of overall glacier health and the net effects of ice-ocean-atmosphere interactions. However, challenges arise in determining a primary driver of glacier change, as the magnitude of retreat observed at the terminus is controlled not only by atmospheric and oceanic temperatures, but also physical constraints unique to each glacier (e.g., ice mélange buttressing and underlying bedrock/bathymetry) which often lead to a non-linear response to climate. For example, previous studies have shown varying magnitudes of terminus retreat over the last 40 years at glaciers in West Greenland, despite exposure to similar atmospheric forcings. Satellite imagery can provide the necessary spatially- and temporally-extensive resource for monitoring glacier terminus behavior. Here, we constructed a time series of 18 glacier termini digitized from over 1200 all-season Landsat images between 1985 and 2015 within Disko and Uummannaq Bays, West Greenland. We calculated change points in the annual maximum terminus retreat of the glaciers using a bootstrapping algorithm within a change point detection software. We interpolated the average monthly retreat of each terminus in order to calculate the average seasonal amplitude of each year. We found the 11 glaciers in Uummannaq Bay retreated an average of -1.26 ± 1.36 km, while the seven glaciers in Disko Bay averaged -1.13 ± 0.82 km. The majority of glaciers retreated, yet we see no latitudinal trend in magnitude of retreat on either a seasonal or long-term scale. We observe change points in the annual maximum retreat of four glacier termini in Uummannaq Bay and one in Disko Bay which are generally coincident with increased summer sea surface temperatures. In some cases, we observed smaller interannual variability in the average seasonal amplitude of years leading up to a critical threshold, followed by an increase in seasonal variability in the year prior and throughout the regime shift, until returning to a similar range of variability observed prior to the shift. As such, our findings may provide a method to predict an approaching change point at glacier termini which have not yet crossed a critical threshold through observations of increases in seasonal amplitude variability.
Recknagel, Friedrich; Orr, Philip T; Cao, Hongqing
2014-01-01
Seven-day-ahead forecasting models of Cylindrospermopsis raciborskii in three warm-monomictic and mesotrophic reservoirs in south-east Queensland have been developed by means of water quality data from 1999 to 2010 and the hybrid evolutionary algorithm HEA. Resulting models using all measured variables as inputs as well as models using electronically measurable variables only as inputs accurately forecasted the timing of overgrowth of C. raciborskii and matched the high and low magnitudes of observed bloom events well, with 0.45 ≤ r² ≤ 0.61 and 0.4 ≤ r² ≤ 0.57, respectively. The models also revealed relationships and thresholds triggering bloom events that provide valuable information on synergism between water quality conditions and population dynamics of C. raciborskii. Best-performing models based on using all measured variables as inputs indicated electrical conductivity (EC) within the range of 206-280 mS m⁻¹ as threshold above which fast growth and high abundances of C. raciborskii have been observed for the three lakes. Best models based on electronically measurable variables for the Lakes Wivenhoe and Somerset indicated a water temperature (WT) range of 25.5-32.7°C within which fast growth and high abundances of C. raciborskii can be expected. By contrast the model for Lake Samsonvale highlighted a turbidity (TURB) level of 4.8 NTU as indicator for mass developments of C. raciborskii. Experiments with online measured water quality data of the Lake Wivenhoe from 2007 to 2010 resulted in predictive models with 0.61 ≤ r² ≤ 0.65, whereby again similar levels of EC and WT have been discovered as thresholds for outgrowth of C. raciborskii. The highest validity of r² = 0.75 for an in situ data-based model has been achieved after considering time lags for EC by 7 days and dissolved oxygen by 1 day. These time lags have been discovered by a systematic screening of all possible combinations of time lags between 0 and 10 days for all electronically measurable variables. The so-developed model performs seven-day-ahead forecasts and is currently implemented and tested for early warning of C. raciborskii blooms in the Wivenhoe reservoir. Copyright © 2013 Elsevier B.V. All rights reserved.
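The systematic screening of time lags mentioned above can be illustrated with a simple surrogate. A hedged sketch (Python; synthetic daily data and an ordinary least-squares model stand in for the HEA-evolved models):

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_days = 400
ec  = rng.normal(250, 20, n_days)        # electrical conductivity (mS/m), synthetic
do_ = rng.normal(8, 1, n_days)           # dissolved oxygen (mg/L), synthetic
# Synthetic "abundance" that actually depends on EC lagged 7 days and DO lagged 1 day.
abundance = 0.05 * np.roll(ec, 7) - 0.8 * np.roll(do_, 1) + rng.normal(0, 1, n_days)

def r2_for_lags(lag_ec, lag_do):
    """Fit a linear surrogate model using the lagged drivers and return r^2."""
    lo = max(lag_ec, lag_do)
    X = np.column_stack([ec[lo - lag_ec:n_days - lag_ec],
                         do_[lo - lag_do:n_days - lag_do],
                         np.ones(n_days - lo)])
    y = abundance[lo:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Screen all lag combinations between 0 and 10 days and keep the best one.
best = max(product(range(11), range(11)), key=lambda lags: r2_for_lags(*lags))
print("best (EC lag, DO lag):", best, "r2 =", round(r2_for_lags(*best), 2))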
Rexhepaj, Elton; Brennan, Donal J; Holloway, Peter; Kay, Elaine W; McCann, Amanda H; Landberg, Goran; Duffy, Michael J; Jirstrom, Karin; Gallagher, William M
2008-01-01
Manual interpretation of immunohistochemistry (IHC) is a subjective, time-consuming and variable process, with an inherent intra-observer and inter-observer variability. Automated image analysis approaches offer the possibility of developing rapid, uniform indicators of IHC staining. In the present article we describe the development of a novel approach for automatically quantifying oestrogen receptor (ER) and progesterone receptor (PR) protein expression assessed by IHC in primary breast cancer. Two cohorts of breast cancer patients (n = 743) were used in the study. Digital images of breast cancer tissue microarrays were captured using the Aperio ScanScope XT slide scanner (Aperio Technologies, Vista, CA, USA). Image analysis algorithms were developed using MatLab 7 (MathWorks, Apple Hill Drive, MA, USA). A fully automated nuclear algorithm was developed to discriminate tumour from normal tissue and to quantify ER and PR expression in both cohorts. Random forest clustering was employed to identify optimum thresholds for survival analysis. The accuracy of the nuclear algorithm was initially confirmed by a histopathologist, who validated the output in 18 representative images. In these 18 samples, an excellent correlation was evident between the results obtained by manual and automated analysis (Spearman's rho = 0.9, P < 0.001). Optimum thresholds for survival analysis were identified using random forest clustering. This revealed 7% positive tumour cells as the optimum threshold for the ER and 5% positive tumour cells for the PR. Moreover, a 7% cutoff level for the ER predicted a better response to tamoxifen than the currently used 10% threshold. Finally, linear regression was employed to demonstrate a more homogeneous pattern of expression for the ER (R = 0.860) than for the PR (R = 0.681). In summary, we present data on the automated quantification of the ER and the PR in 743 primary breast tumours using a novel unsupervised image analysis algorithm. This novel approach provides a useful tool for the quantification of biomarkers on tissue specimens, as well as for objective identification of appropriate cutoff thresholds for biomarker positivity. It also offers the potential to identify proteins with a homogeneous pattern of expression.
NASA Astrophysics Data System (ADS)
Simpson, G. L.
2015-12-01
Studying threshold responses to environmental change is often made difficult due to the paucity of monitoring data prior to and during change. Progress has been made via theoretical models of regime shifts or experimental manipulation but natural, real world, examples of threshold change are limited and in many cases inconclusive. Lake sediments provide the potential to examine abrupt ecological change by directly observing how species, communities, and biogeochemical proxies responded to environmental perturbation or recorded ecosystem change. These records are not problem-free; age uncertainties, uneven and variable temporal resolution, and time-consuming taxonomic work all act to limit the scope and scale of the data or complicate its analysis. Here I use two annually laminated records, (1) Kassjön, a seasonally anoxic mesotrophic lake in N Sweden, and (2) Baldeggersee, a nutrient-rich, hardwater lake on the central Swiss Plateau, to investigate lake ecosystem responses to abrupt environmental change using ideal paleoecological time series. Rapid cooling 2.2 kyr ago in northern Sweden significantly perturbed the diatom community of Kassjön. Wavelet analysis showed that this climatic shift also fundamentally altered patterns of variance in diatom abundances, suppressing cyclicity in species composition that required several hundred years to reestablish. Multivariate wavelet analysis of the record showed marked switching between synchronous and asynchronous species dynamics in response to rapid climatic cooling and subsequent warming. Baldeggersee has experienced a long history of eutrophication and the diatom record has been used as a classic illustration of a regime shift in response to nutrient loading. Time series analysis of the record identified some evidence of a threshold-like response in the diatoms. A stochastic volatility model identified increasing variance in composition prior to the threshold, as predicted from theory, and a switch from compensatory to synchronous species dynamics, concomitant with eutrophication, was observed. These results document in high resolution how two aquatic systems reacted to abrupt change and demonstrate that under ideal conditions sediments can preserve valuable evidence of rapid ecological change.
Productivity responses of desert vegetation to precipitation patterns across a rainfall gradient.
Li, Fang; Zhao, Wenzhi; Liu, Hu
2015-03-01
The influences of previous-year precipitation and episodic rainfall events on dryland plants and communities are poorly quantified in the temperate desert region of Northwest China. To evaluate the thresholds and lags in the response of aboveground net primary productivity (ANPP) to variability in rainfall pulses and seasonal precipitation along the precipitation-productivity gradient in three desert ecosystems with different precipitation regimes, we collected precipitation data from 2000 to 2012 in Shandan (SD), Linze (LZ) and Jiuquan (JQ) in northwestern China. Further, we extracted the corresponding MODIS Normalized Difference Vegetation Index (NDVI, a proxy for ANPP) datasets at 250 m spatial resolution. We then evaluated the different desert ecosystems' responses using statistical analysis, and a threshold-delay model (TDM). TDM is an integrative framework for analysis of plant growth, precipitation thresholds, and plant functional type strategies that capture the nonlinear nature of plant responses to rainfall pulses. Our results showed that: (1) the growing season NDVI-INT (INT stands for time-integrated) was largely correlated with the warm season (spring/summer) at our mildly-arid desert ecosystem (SD). The arid ecosystem (LZ) exhibited a different response, and the growing season NDVI-INT depended highly on the previous year's fall/winter precipitation and ANPP. At the extremely arid site (JQ), the variability of growing season NDVI-INT was equally correlated with the cool- and warm-season precipitation; (2) some parameters of threshold-delay differed among the three sites: while the response of NDVI to rainfall pulses began at about 5 mm for all the sites, the maximum thresholds in SD, LZ, and JQ were about 55, 35 and 30 mm, respectively, increasing with an increase in mean annual precipitation. By and large, more previous year's fall/winter precipitation, and large rainfall events, significantly enhanced the growth of desert vegetation, and desert ecosystems should be much more adaptive under likely future scenarios of increasing fall/winter precipitation and large rainfall events. These results highlight the inherent complexity in predicting how desert ecosystems will respond to future fluctuations in precipitation.
Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea
Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert
2017-01-01
Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both model 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHI-REM (anatomy term), fraction of events that were hypopnea (arousal term), the ratio of AHI-REM and AHI-NREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that has more association with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedence probabilities helped determine a threshold probability for each model, chosen such that the correct number of exceedences and nonexceedences was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedence probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard and a beach advisory or closing may need to be issued; computed exceedence probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
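The threshold-probability selection step can be sketched as a small search that maximizes correct exceedance and nonexceedance calls. A minimal illustration (Python; the predicted probabilities and observations are invented, and this simple accuracy criterion is an assumption rather than the report's exact rule):

import numpy as np

# Hypothetical model output: predicted probability that E. coli exceeds 235 col/100 mL,
# paired with whether the standard was actually exceeded on that day.
p_exceed = np.array([0.05, 0.10, 0.22, 0.35, 0.41, 0.48, 0.60, 0.72, 0.85, 0.91])
observed = np.array([0,    0,    0,    1,    0,    1,    1,    1,    1,    1   ])

best_t, best_correct = None, -1
for t in np.arange(0.05, 0.96, 0.01):
    predicted = (p_exceed >= t).astype(int)
    correct = np.sum(predicted == observed)    # exceedances plus nonexceedances called correctly
    if correct > best_correct:
        best_t, best_correct = t, correct

print(f"threshold probability ~ {best_t:.2f}, correct calls = {best_correct}/{len(observed)}")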
Georges, Arthur
1989-11-01
Mean daily temperature in natural nests of freshwater turtles with temperature-dependent sex determination is known to be a poor predictor of hatchling sex ratios when nest temperatures fluctuate. To account for this, a model was developed on the assumption that females will emerge from eggs when more than half of embryonic development occurs above the threshold temperature for sex determination rather than from eggs that spend more than half their time above the threshold. The model is consistent with previously published data and in particular explains the phenomenon whereby the mean temperature that best distinguishes between male and female nests decreases with increasing variability in nest temperature. The model, if verified by controlled experiments, has important implications for our understanding of temperature-dependent sex determination in natural nests. Both mean nest temperature and "hours spent above the threshold" will be poor predictors of hatchling sex ratios. Studies designed to investigate latitudinal trends and inter-specific differences in the threshold temperature will need to consider latitudinal and inter-specific variation in the magnitude of diel fluctuations in nest temperature, and variation in factors influencing the magnitude of those fluctuations, such as nest depth. Furthermore, any factor that modifies the relationship between developmental rate and temperature can be expected to influence hatchling sex ratios in natural nests, especially when nest temperatures are close to the threshold.
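The model's central idea, weighting time above the threshold by developmental rate rather than simply counting hours, can be shown numerically. A hedged sketch (Python; the diel temperature trace, the 28 degree pivotal temperature, and the linear rate curve are illustrative assumptions):

import numpy as np

pivotal = 28.0                                       # threshold temperature (deg C), illustrative
hours = np.arange(24 * 60)                           # hourly temperatures over a 60-day incubation
temp = 27.0 + 4.0 * np.sin(2 * np.pi * hours / 24)   # diel fluctuation around a 27 C mean

# Assume developmental rate increases with temperature (a simple linear stand-in).
rate = np.clip(temp - 20.0, 0.0, None)

time_above  = np.mean(temp > pivotal)                      # fraction of TIME above the threshold
devel_above = rate[temp > pivotal].sum() / rate.sum()      # fraction of DEVELOPMENT above the threshold

# Because development is faster at warmer temperatures, the developmental fraction exceeds
# the time fraction, so a nest can produce females even when the mean temperature and the
# hours spent above the threshold would suggest otherwise.
print(round(time_above, 2), round(devel_above, 2))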
2009-01-01
Background: Airports represent a complex source type of increasing importance contributing to air toxics risks. Comprehensive atmospheric dispersion models are beyond the scope of many applications, so it would be valuable to rapidly but accurately characterize the risk-relevant exposure implications of emissions at an airport. Methods: In this study, we apply a high resolution atmospheric dispersion model (AERMOD) to 32 airports across the United States, focusing on benzene, 1,3-butadiene, and benzo[a]pyrene. We estimate the emission rates required at these airports to exceed a 10⁻⁶ lifetime cancer risk for the maximally exposed individual (emission thresholds) and estimate the total population risk at these emission rates. Results: The emission thresholds vary by two orders of magnitude across airports, with variability predicted by proximity of populations to the airport and mixing height (R² = 0.74–0.75 across pollutants). At these emission thresholds, the population risk within 50 km of the airport varies by two orders of magnitude across airports, driven by substantial heterogeneity in total population exposure per unit emissions that is related to population density and uncorrelated with emission thresholds. Conclusion: Our findings indicate that site characteristics can be used to accurately predict maximum individual risk and total population risk at a given level of emissions, but that optimizing on one endpoint will be non-optimal for the other. PMID:19426510
Normative behavioral thresholds for short tone-bursts.
Beattie, R C; Rochverger, I
2001-10-01
Although tone-bursts have been commonly used in auditory brainstem response (ABR) evaluations for many years, national standards describing normal calibration values have not been established. This study was designed to gather normative threshold data to establish a physical reference for tone-burst stimuli that can be reproduced across clinics and laboratories. More specifically, we obtained norms for 3-msec tone-bursts presented at two repetition rates (9.3/sec and 39/sec), two gating functions (Trapezoid and Blackman), and four frequencies (500, 1000, 2000, and 4000 Hz). Our results are specified using three physical references: dB peak sound pressure level, dB peak-to-peak equivalent sound pressure level, and dB SPL (fast meter response, rate = 50 stimuli/sec). These data are offered for consideration when calibrating ABR equipment. The 39/sec stimulus rate yielded tone-burst thresholds that were approximately 3 dB lower than the 9.3/sec rate. The improvement in threshold with increasing stimulus rate may reflect the ability of the auditory system to integrate energy that occurs within a time interval of 200 to 500 msec (temporal integration). The Trapezoid gating function yielded thresholds that averaged 1.4 dB lower than the Blackman function. Although these differences are small and of little clinical importance, the cumulative effects of several instrument and/or procedural variables may yield clinically important differences.
NASA Astrophysics Data System (ADS)
Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.
2014-12-01
Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds in distributions of variables such as daily temperature or precipitation. We develop a method [1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, to specifically address the challenges presented by 'heavy-tailed' variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining both which quantiles and geographical locations show the greatest change and also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data [2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results identify regionally consistent patterns which, dependent on location, show systematic increase in precipitation on the wettest days, shifts in precipitation patterns to less moderate days and more heavy days, and drying across all days, which is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013 Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013 Environ. Res. Lett. 8, 034031 [2] Haylock et al. 2008 J. Geophys. Res (Atmospheres), 113, D20119
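Comparing a chosen quantile of daily precipitation between two periods, and checking its robustness against sampling variability, can be sketched as follows (Python; random gamma-distributed data replace the E-OBS series, and the bootstrap here is only a stand-in for the paper's full deconstruction):

import numpy as np

rng = np.random.default_rng(1)
# Synthetic wet-day precipitation (mm/day) for two 30-year periods at one grid cell.
period_1 = rng.gamma(shape=0.8, scale=6.0, size=30 * 120)   # e.g. an early period
period_2 = rng.gamma(shape=0.8, scale=7.0, size=30 * 120)   # e.g. a later, heavier-tailed period

for q in [0.5, 0.9, 0.95, 0.99]:
    q1, q2 = np.quantile(period_1, q), np.quantile(period_2, q)
    print(f"q={q:.2f}: {q1:6.1f} -> {q2:6.1f} mm/day  (change {q2 - q1:+.1f})")

# A simple bootstrap gives a feel for whether the change at a quantile is robust
# relative to sampling variability (one ingredient of the assessment described above).
q = 0.95
boot = [np.quantile(rng.choice(period_2, period_2.size), q) -
        np.quantile(rng.choice(period_1, period_1.size), q) for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95th-percentile change: bootstrap 95% interval [{lo:.1f}, {hi:.1f}] mm/day")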
Immobilization thresholds of electrofishing relative to fish size
Dolan, C.R.; Miranda, L.E.
2003-01-01
Fish size and electrical waveforms have frequently been associated with variation in electrofishing effectiveness. Under controlled laboratory conditions, we measured the electrical power required by five electrical waveforms to immobilize eight fish species of diverse sizes and shapes. Fish size was indexed by total body length, surface area, volume, and weight; shape was indexed by the ratio of body length to body depth. Our objectives were to identify immobilization thresholds, elucidate the descriptors of fish size that were best associated with those immobilization thresholds, and determine whether the vulnerability of a species relative to other species remained constant across electrical treatments. The results confirmed that fish size is a key variable controlling the immobilization threshold and further suggested that the size descriptor best related to immobilization is fish volume. The peak power needed to immobilize fish decreased rapidly with increasing fish volume in small fish but decreased slowly for fish larger than 75-100 cm³. Furthermore, when we controlled for size and shape, different waveforms did not favor particular species, possibly because of the overwhelming effect of body size. Many of the immobilization inconsistencies previously attributed to species might simply represent the effect of disparities in body size.
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
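A regression discontinuity estimate near the threshold can be sketched with a local linear fit on either side of the cutoff (Python; the CD4 and mortality data are simulated, and a linear probability model replaces the survival analysis used in the study):

import numpy as np

rng = np.random.default_rng(2)
n = 5000
cd4 = rng.uniform(50, 350, n)                      # first CD4 count (cells/uL)
eligible = (cd4 < 200).astype(float)               # treatment assigned below the threshold
# Simulated mortality risk: declines smoothly with CD4, with a protective jump for eligibility.
p_death = 0.25 - 0.0004 * cd4 - 0.08 * eligible
death = rng.binomial(1, np.clip(p_death, 0, 1))

# Local linear regression on each side of the cutoff, within a bandwidth of +/- 50 cells/uL.
bw, cut = 50, 200
win = np.abs(cd4 - cut) < bw
X = np.column_stack([np.ones(win.sum()),
                     eligible[win],                 # jump at the threshold (the causal estimate)
                     cd4[win] - cut,                # running variable, centred at the cutoff
                     (cd4[win] - cut) * eligible[win]])   # allows different slopes on each side
beta, *_ = np.linalg.lstsq(X, death[win], rcond=None)
print(f"estimated discontinuity in mortality risk at CD4=200: {beta[1]:+.3f}")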
Temporary threshold shift after impulse-noise during video game play: laboratory data.
Spankovich, C; Griffiths, S K; Lobariñas, E; Morgenstein, K E; de la Calle, S; Ledon, V; Guercio, D; Le Prell, C G
2014-03-01
Prevention of temporary threshold shift (TTS) after laboratory-based exposure to pure-tones, broadband noise, and narrowband noise signals has been achieved, but prevention of TTS under these experimental conditions may not accurately reflect protection against hearing loss following impulse noise. This study used a controlled laboratory-based TTS paradigm that incorporated impulsive stimuli into the exposure protocol; development of this model could provide a novel platform for assessing proposed therapeutics. Participants played a video game that delivered gunfire-like sound through headphones as part of a target practice game. Effects were measured using audiometric threshold evaluations and distortion product otoacoustic emissions (DPOAEs). The sound level and number of impulses presented were sequentially increased throughout the study. Participants were normal-hearing students at the University of Florida who provided written informed consent prior to participation. TTS was not reliably induced by any of the exposure conditions assessed here. However, there was significant individual variability, and a subset of subjects showed TTS under some exposure conditions. A subset of participants demonstrated reliable threshold shifts under some conditions. Additional experiments are needed to better understand and optimize stimulus parameters that influence TTS after simulated impulse noise.
Anaerobic threshold determination through ventilatory and electromyographic parameters.
Gassi, E R; Bankoff, A D P
2010-01-01
The aim of the present study was to compare alterations in the electromyographic (EMG) signal with the ventilatory threshold (VT). Eight men, amateur cyclists and triathletes (25.25 +/- 6.96 years), took part in the study; they exercised on a mechanical cycle ergometer at a cadence of 80 RPM, with the intensity increased by 25 W/min until exhaustion. The VT was determined by a non-linear increase in VE/VO2 without any increase in VE/VCO2 and was compared with the intensity corresponding to the break point in the amplitude of the EMG signal during the incremental exercise. The parameters used for the EMG fatigue threshold (FT) and the ventilatory threshold (VT) were power, time, absolute and relative VO2, ventilation (VE), heart rate (HR) and the subjective perception of effort. The results showed no difference in any of the selected variables at the intensities corresponding to the VT and the EMG-FT of the vastus lateralis and rectus femoris muscles. The parameters used in the comparison between the electromyographic and ventilatory indicators were load, time, VO2 (absolute and relative to body mass), ventilation (VE), heart rate (HR) and the subjective perception of effort (SPE).
Temporary threshold shift after impulse-noise during video game play: Laboratory data
Spankovich, C.; Griffiths, S. K.; Lobariñas, E.; Morgenstein, K.E.; de la Calle, S.; Ledon, V.; Guercio, D.; Le Prell, C.G.
2015-01-01
Objective Prevention of temporary threshold shift (TTS) after laboratory-based exposure to pure-tones, broadband noise, and narrow band noise signals has been achieved, but prevention of TTS under these experimental conditions may not accurately reflect protection against hearing loss following impulse noise. This study used a controlled laboratory-based TTS paradigm that incorporated impulsive stimuli into the exposure protocol; development of this model could provide a novel platform for assessing proposed therapeutics. Design Participants played a video game that delivered gunfire-like sound through headphones as part of a target practice game. Effects were measured using audiometric threshold evaluations and distortion product otoacoustic emissions (DPOAEs). The sound level and number of impulses presented were sequentially increased throughout the study. Study sample Participants were normal-hearing students at the University of Florida who provided written informed consent prior to participation. Results TTS was not reliably induced by any of the exposure conditions assessed here. However, there was significant individual variability, and a subset of subjects showed TTS under some exposure conditions. Conclusions A subset of participants demonstrated reliable threshold shifts under some conditions. Additional experiments are needed to better understand and optimize stimulus parameters that influence TTS after simulated impulse noise. PMID:24564694
The influence of thresholds on the risk assessment of carcinogens in food.
Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian
2009-08-01
The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low as Reasonably Achievable) principle, however ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment. However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer lies on a critical pathway that is thresholded.
When Is a Sprint a Sprint? A Review of the Analysis of Team-Sport Athlete Activity Profile
Sweeting, Alice J.; Cormack, Stuart J.; Morgan, Stuart; Aughey, Robert J.
2017-01-01
The external load of a team-sport athlete can be measured by tracking technologies, including global positioning systems (GPS), local positioning systems (LPS), and vision-based systems. These technologies allow for the calculation of displacement, velocity and acceleration during a match or training session. The accurate quantification of these variables is critical so that meaningful changes in team-sport athlete external load can be detected. High-velocity running, including sprinting, may be important for specific team-sport match activities, including evading an opponent or creating a shot on goal. Maximal accelerations are energetically demanding and frequently occur from a low velocity during team-sport matches. Despite extensive research, conjecture exists regarding the thresholds by which to classify the high velocity and acceleration activity of a team-sport athlete. There is currently no consensus on the definition of a sprint or acceleration effort, even within a single sport. The aim of this narrative review was to examine the varying velocity and acceleration thresholds reported in athlete activity profiling. The purposes of this review were therefore to (1) identify the various thresholds used to classify high-velocity or -intensity running plus accelerations; (2) examine the impact of individualized thresholds on reported team-sport activity profile; (3) evaluate the use of thresholds for court-based team-sports; and (4) discuss potential areas for future research. The presentation of velocity thresholds as a single value, with equivocal qualitative descriptors, is confusing when data lie between two thresholds. In Australian football, sprint efforts have been defined as activity >4.00 or >4.17 m·s⁻¹. Acceleration thresholds differ across the literature, with >1.11, 2.78, 3.00, and 4.00 m·s⁻² utilized across a number of sports. It is difficult to compare literature on field-based sports due to inconsistencies in velocity and acceleration thresholds, even within a single sport. Velocity and acceleration thresholds have been determined from physical capacity tests. Limited research exists on the classification of velocity and acceleration data by female team-sport athletes. Alternatively, data mining techniques may be used to report team-sport athlete external load, without the requirement of arbitrary or physiologically defined thresholds. PMID:28676767
Kuhtz-Buschbeck, Johann P; Andresen, Wiebke; Göbel, Stephan; Gilster, René; Stick, Carsten
2010-06-01
About four decades ago, Perl and collaborators were the first to unambiguously identify specifically nociceptive neurons in the periphery. In their classic work, they recorded action potentials from single C-fibers of a cutaneous nerve in cats while applying carefully graded stimuli to the skin (Bessou P, Perl ER. Response of cutaneous sensory units with unmyelinated fibers to noxious stimuli. J Neurophysiol 32: 1025-1043, 1969). They discovered polymodal nociceptors, which responded to mechanical, thermal, and chemical stimuli in the noxious range, and differentiated them from low-threshold thermoreceptors. Their classic findings form the basis of the present method that undergraduate medical students experience during laboratory exercises of sensory physiology, namely, quantitative testing of the thermal detection and pain thresholds. This diagnostic method examines the function of thin afferent nerve fibers. We collected data from nearly 300 students that showed that 1) women are more sensitive to thermal detection and thermal pain at the thenar than men, 2) habituation shifts thermal pain thresholds during repetitive testing, 3) the cold pain threshold is rather variable and lower when tested after heat pain than in the reverse case (order effect), and 4) ratings of pain intensity on a visual analog scale are correlated with the threshold temperature for heat pain but not for cold pain. Median group results could be reproduced in a retest. Quantitative sensory testing of thermal thresholds is feasible and instructive in the setting of a laboratory exercise and is appreciated by the students as a relevant and interesting technique.
Developmental trends in infant temporal processing speed.
Saint, Sarah E; Hammond, Billy R; O'Brien, Kevin J; Frick, Janet E
2017-09-01
Processing speed, which can be measured behaviorally in various sensory domains, has been shown to be a strong marker of central nervous system health and functioning in adults. Visual temporal processing speed (measured via critical flicker fusion [CFF] thresholds) represents the maximum speed at which the visual system can detect changes. Previous studies of infant CFF development have been limited and inconsistent. The present study sought to characterize the development of CFF thresholds in the first year of life using a larger sample than previous studies and a repeated measures design (in Experiment 2) to control for individual differences. Experiment 1 (n=44 infants and n=24 adults) used a cross-sectional design aimed at examining age-related changes that exist in CFF thresholds across infants during the first year of life. Adult data were collected to give context to infant CFF thresholds obtained under our specific stimulus conditions. Experiment 2 (N=28) used a repeated-measures design to characterize the developmental trajectory of infant CFF thresholds between three and six months of age, based on the results of Experiment 1. Our results reveal a general increase in CFF from three to four and one-half months of age, with a high degree of variability within each age group. Infant CFF thresholds at 4.5 months of age were not significantly different from the adult average, though a regression analysis of the data from Experiment 2 predicted that infants would reach the adult average closer to 6 months of age. Developmental and clinical implications of these data are discussed. Published by Elsevier Ltd.
Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.
2015-12-01
Rainfall-induced landslides are one of the most frequent hazards on sloping terrain. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. ASTER GDEM is used as the basis for topographical characterization of slope and buffer analysis. Slope angle threshold assessment at the 50th, 75th, 95th, 98th, and 99th percentiles is tested locally. Further analysis of each threshold in relation to other parameters is investigated in a logistic regression environment for the continental U.S. It is determined that thresholds below the 95th percentile underestimate slope angles. The best regression fit is achieved when utilizing the 99th-percentile slope angle. This model predicts the highest number of cases correctly at 87.0% accuracy. A one-unit rise in the 99th-percentile slope angle increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters like precipitation and other proxies like soil moisture into the model will further improve accuracy.
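The final modeling step, a logistic regression on the 99th-percentile slope angle, can be sketched as below (Python; the training data are synthetic and scikit-learn is an assumed stand-in for the authors' software):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
slope99 = rng.uniform(0, 45, n)              # 99th-percentile slope angle (degrees) in a buffer
elevation = rng.uniform(0, 3000, n)          # elevation (m), a weaker predictor here
# Synthetic landslide occurrence, more likely on steeper 99th-percentile slopes.
logit = -4.0 + 0.11 * slope99 + 0.0002 * elevation
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([slope99, elevation])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)   # large C: effectively unpenalized

b_slope = model.coef_[0][0]
print(f"odds ratio per 1-degree rise in 99th-percentile slope: {np.exp(b_slope):.2f}")
# An odds ratio of ~1.12 would correspond to roughly a 12% increase in landslide odds per
# unit rise, in the spirit of the 11.8% figure reported above (the numbers here are synthetic).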
Mechanical sensibility in free and island flaps of the foot.
Rautio, J; Kekoni, J; Hämäläinen, H; Härmä, M; Asko-Seljavaara, S
1989-04-01
Mechanical sensibility in 20 free skin flaps and four dorsalis pedis island flaps, used for the reconstruction of foot defects, was analyzed with conventional clinical methods and by determining sensibility thresholds to vibration frequencies of 20, 80, and 240 Hz. To eliminate inter-individual variability, a score was calculated for each frequency by dividing the thresholds determined for each flap by the values obtained from the corresponding area on the uninjured foot. The soft tissue stability of the reconstruction was assessed. Patients were divided into three groups according to the scores. In the group of flaps with the best sensibility, the threshold increases were low at all frequencies. In the group with intermediate sensibility, the relative threshold increases were greater, the higher the frequency. In the group with the poorest sensibility, no thresholds were obtained with 240 Hz frequency and the threshold increases were very high at all frequencies. Sensibility was not related to the length of follow-up time, nor to the type or size of the flap. However, flap sensibility was closely associated with that of the recipient area, where sensibility was usually inferior to that of normal skin. The island flaps generally had better sensibility than the free flaps. There was a good correspondence between the levels of sensibility determined by clinical and quantitative methods. The quantitative data on the level of sensibility obtained with the psychophysical method were found to be reliable and free from observer bias, and are therefore recommended for future studies. The degree of sensibility may have contributed to, but was not essential for, good soft-tissue stability of the reconstruction.
Xiao, Jianpeng; Liu, Tao; Lin, Hualiang; Zhu, Guanghu; Zeng, Weilin; Li, Xing; Zhang, Bing; Song, Tie; Deng, Aiping; Zhang, Meng; Zhong, Haojie; Lin, Shao; Rutherford, Shannon; Meng, Xiaojing; Zhang, Yonghui; Ma, Wenjun
2018-05-15
To investigate the periodicity of dengue and the relationship between weather variables, El Niño Southern Oscillation (ENSO) and dengue incidence in Guangdong Province, China. Guangdong monthly dengue incidence and weather data and El Niño index information for 1988 to 2015 were collected. Wavelet analysis was used to investigate the periodicity of dengue, and the coherence and time-lag phases between dengue and weather variables and ENSO. The Generalized Additive Model (GAM) approach was further employed to explore the dose-response relationship of those variables on dengue. Finally, random forest analysis was applied to measure the relative importance of the climate predictors. Dengue in Guangdong has a dominant annual periodicity over the period 1988-2015. Mean minimum temperature, total precipitation, and mean relative humidity are positively related to dengue incidence at lags of 2, 3, and 4 months, respectively. ENSO in the previous 12 months may have driven the dengue epidemics in 1995, 2002, 2006 and 2010 in Guangdong. GAM analysis indicates an approximate linear association for the temperature-dengue relationship, an approximate logarithmic curve for the humidity-dengue relationship, and an inverted U-shape association for the precipitation-dengue (the threshold of precipitation is 348 mm per month) and ENSO-dengue relationships (the threshold of ENSO index is 0.6°C). The monthly mean minimum temperature in the previous two months was identified as the most important climate variable associated with dengue epidemics in Guangdong Province. Our study suggests weather factors and ENSO are important predictors of dengue incidence. These findings provide useful evidence for early warning systems to help to respond to the global expansion of dengue fever. Copyright © 2017 Elsevier B.V. All rights reserved.
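A simple way to see how lagged weather variables relate to dengue incidence is to screen correlations across candidate lags. A hedged sketch (Python; the monthly series are simulated and Spearman correlation stands in for the wavelet-coherence and GAM analyses described above):

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_months = 336                                        # e.g. 28 years of monthly data
t = np.arange(n_months)
tmin = 18 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n_months)   # monthly mean Tmin
# Synthetic dengue counts that respond to Tmin two months earlier.
dengue = np.exp(0.15 * np.roll(tmin, 2)) + rng.normal(0, 0.5, n_months)

for lag in range(5):
    # Correlate incidence at month t with Tmin at month t - lag.
    rho, p = spearmanr(tmin[:n_months - lag], dengue[lag:])
    print(f"lag {lag} months: Spearman rho = {rho:+.2f} (p = {p:.3g})")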
Fried, Peter J.; Jannati, Ali; Davila-Pérez, Paula; Pascual-Leone, Alvaro
2017-01-01
Background: Transcranial magnetic stimulation (TMS) can be used to assess neurophysiology and the mechanisms of cortical brain plasticity in humans in vivo. As the use of these measures in specific populations (e.g., Alzheimer’s disease; AD) increases, it is critical to understand their reproducibility (i.e., test–retest reliability) in the populations of interest. Objective: Reproducibility of TMS measures was evaluated in older adults, including healthy, AD, and Type-2 diabetes mellitus (T2DM) groups. Methods: Participants received two identical neurophysiological assessments within a year including motor thresholds, baseline motor evoked potentials (MEPs), short- and long-interval intracortical inhibition (SICI, LICI) and intracortical facilitation (ICF), and MEP changes following intermittent theta-burst stimulation (iTBS). Cronbach’s α coefficients were calculated to assess reproducibility. Multiple linear regression analyses were used to investigate factors related to intraindividual variability. Results: Reproducibility was highest for motor thresholds, followed by baseline MEPs, SICI and LICI, and was lowest for ICF and iTBS aftereffects. The AD group tended to show higher reproducibility than T2DM or controls. Intraindividual variability of baseline MEPs was related to age and variability of RMT, while the intraindividual variability in post-iTBS measures was related to baseline MEP variability, intervisit duration, and Brain-derived neurotrophic factor (BDNF) polymorphism. Conclusion: Increased reproducibility in AD may reflect pathophysiological declines in the efficacy of neuroplastic mechanisms. Reproducibility of iTBS aftereffects can be improved by keeping baseline MEPs consistent, controlling for BDNF genotype, and waiting at least a week between visits. Significance: These findings provide the first direct assessment of reproducibility of TMS measures in older clinical populations. Reproducibility coefficients may be used to adjust effect- and sample size calculations for future studies. PMID:28871222
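The reproducibility coefficient for a two-visit design can be computed directly from its variance formula. A minimal sketch (Python; the motor-threshold values are invented, and Cronbach's alpha is shown as one common choice of coefficient):

import numpy as np

# Hypothetical resting motor threshold (% maximum stimulator output) at visit 1 and visit 2.
visit1 = np.array([42., 55., 61., 48., 39., 58., 50., 45., 63., 52.])
visit2 = np.array([44., 53., 60., 50., 41., 57., 52., 44., 61., 54.])

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x k_items) matrix; here k = 2 visits."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(np.column_stack([visit1, visit2]))
print(f"Cronbach's alpha across visits: {alpha:.2f}")   # higher = more reproducible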
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kakarala, Bharat, E-mail: bkakara1@jhmi.edu, E-mail: bharat.kakarala@gmail.com; Frangakis, Constantine E., E-mail: cfrangak@jhsph.edu; Rodriguez, Ron, E-mail: rodriguezr32@uthscsa.edu
Purpose: Cryoablation of renal tumors is assumed to have a higher risk of hemorrhagic complications compared to other ablative modalities. Our purpose was to establish the exact risk and to identify hemorrhagic risk factors. Materials and Methods: This IRB-approved, 7-year prospective study included 261 renal cryoablations. Procedures were under conscious sedation and CT guidance. Pre- and postablation CT was obtained, and hemorrhagic complications were CTCAE tabulated. Age, gender, tumor size, histology, and number of probes were tested based on averages or proportions using their exact permutation distribution. "High-risk" subgroups (those exceeding the thresholds of all variables) were tested for each variable alone, and for all combinations of variable threshold values. We compared the subgroup with the best PPV using one variable, with the subgroup with the best PPV using all variables (McNemar test). Results: The hemorrhagic complication rate was 3.5 %. Four patients required transfusions, two required emergent angiograms, one required both a transfusion and angiogram, and two required bladder irrigation for outlet obstruction. Perirenal space hemorrhage was more clinically significant than elsewhere. Univariate risks were tumor size >2 cm, number of probes >2, and malignant histology (P = 0.005, 0.002, and 0.033, respectively). Multivariate analysis showed that patients >55 years with malignant tumors >2 cm requiring 2 or more probes yielded the highest PPV (7.5 %). Conclusions: Although older patients (>55 years old) with larger (>2 cm), malignant tumors have an increased risk of hemorrhagic complications, the low PPV does not support the routine use of embolization. Percutaneous cryoablation has a 3.5 % risk of significant hemorrhage, similar to that reported for other types of renal ablative modalities.
Changes In The Heating Degree-days In Norway Due To Global Warming
NASA Astrophysics Data System (ADS)
Skaugen, T. E.; Tveito, O. E.; Hanssen-Bauer, I.
A continuous spatial representation of temperature improves the possibility to produce maps of temperature-dependent variables. A temperature scenario for the period 2021-2050 is obtained for Norway from the Max-Planck-Institute AOGCM ECHAM4/OPYC3 (GSDIO integration). This is done by an 'empirical downscaling method', which involves the use of empirical links between large-scale fields and local variables to deduce estimates of the local variables. The analysis is obtained at forty-six sites in Norway. Spatial representation of the anomalies of temperature in the scenario period compared to the normal period (1961-1990) is obtained with the use of spatial interpolation in a GIS. The temperature scenario indicates that we will have a warmer climate in Norway in the future, especially during the winter season. The heating degree-days (HDD) is defined as the accumulated Celsius degrees between the daily mean temperature and a threshold temperature. For Scandinavian countries, this threshold temperature is 17 degrees Celsius. The HDD is found to be a good estimate of accumulated cold. It is therefore a useful index for heating energy consumption within the heating season, and thus for power production planning. As a consequence of the increasing temperatures, the length of the heating season and the HDD within this season will decrease in Norway in the future. The heating season and the HDD are calculated at grid level with the use of a GIS. The spatial representation of the heating season and the HDD can then easily be plotted. Local information on the variables being analysed can be extracted from the spatial grid in a GIS, and the variables are prepared for further spatial analysis. They may also be used as input to decision-making systems.
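The HDD calculation defined above is a simple accumulation of temperature deficits below the 17 degree Celsius threshold. A hedged sketch (Python; the daily mean temperatures are invented, and this version counts every day below the threshold rather than applying formal heating-season start and end rules):

import numpy as np

threshold = 17.0                                   # Scandinavian HDD threshold (deg C)
rng = np.random.default_rng(5)
# Hypothetical daily mean temperatures for one year at one grid cell or station.
day = np.arange(365)
t_mean = 5.0 + 12.0 * np.sin(2 * np.pi * (day - 105) / 365) + rng.normal(0, 3, 365)

# Heating degree-days: accumulate (threshold - T) over days with T below the threshold.
deficit = np.clip(threshold - t_mean, 0.0, None)
hdd = deficit.sum()
heating_days = int((deficit > 0).sum())

print(f"HDD = {hdd:.0f} degree-days over {heating_days} heating days")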
Ammann, Claudia; Lindquist, Martin A; Celnik, Pablo A
It is well known that transcranial direct current stimulation (tDCS) is capable of modulating corticomotor excitability. However, a source of growing concern has been the observed inter- and intra-individual variability of tDCS-responses. Recent studies have assessed whether individuals respond in a predictable manner across repeated sessions of anodal tDCS (atDCS). The findings of these investigations have been inconsistent, and their methods have some limitations (i.e. lack of sham condition or testing only one tDCS intensity). To study inter- and intra-individual variability of atDCS effects at two different intensities on primary motor cortex (M1) excitability. Twelve subjects participated in a crossover study testing 7-min atDCS over M1 in three separate conditions (2 mA, 1 mA, sham), each repeated three times, separated by 48 h. Motor evoked potentials were recorded before and after stimulation (up to 30 min). Time of testing was maintained consistent within participants. To estimate the reliability of tDCS effects across sessions, we calculated the Intra-class Correlation Coefficient (ICC). AtDCS at 2 mA, but not 1 mA, significantly increased cortical excitability at the group level in all sessions. The overall ICC revealed fair to high reliability of tDCS effects for multiple sessions. Given that the distribution of responses showed important variability in the sham condition, we established a Sham Variability-Based Threshold to classify responses and to track individual changes across sessions. Using this threshold, a consistent intra-individual response pattern was then observed only for the 2 mA condition. 2 mA anodal tDCS results in consistent intra- and inter-individual increases of M1 excitability. Copyright © 2017 Elsevier Inc. All rights reserved.
Interflow dynamics on a low relief forested hillslope: Lots of fill, little spill
Du, Enhao; Rhett Jackson, C.; Klaus, Julian; ...
2016-01-27
In this paper, we evaluated the occurrence of perching and interflow over and within a sandy clay loam argillic horizon within first-order, low-relief, forested catchments at the Savannah River Site (SRS) in the Upper Coastal Plain of South Carolina. We measured soil hydraulic properties, depths to the argillic layer, soil moisture, shallow groundwater behavior, interflow interception trench flows, and streamflow over a 4-year period to explore the nature and variability of soil hydraulic characteristics, the argillic “topography”, and their influence on interflow generation. Perching occurred frequently within and above the restricting argillic horizons during our monitoring period, but interflow was infrequent due to microtopographic relief and associated depression storage on the argillic layer surface. High percolation rates through the argillic horizon, particularly through soil anomalies, also reduced the importance of interflow. Interflow generation was highly variable across eleven segments of a 121 m interception trench. Hillslopes were largely disconnected from stream behavior during storms. Hillslope processes were consistent with the fill-and-spill hypothesis and featured a sequence of distinct thresholds: vertical wetting front propagation to the argillic layer; saturation of the argillic followed by local perching; filling of argillic layer depressions; and finally connectivity of depressions leading to interflow generation. Lastly, analysis of trench flow data indicated a cumulative rainfall threshold of 60 mm to generate interflow, a value at the high end of the range of thresholds reported elsewhere.
Biodiversity response to natural gradients of multiple stressors on continental margins
Sperling, Erik A.; Frieder, Christina A.; Levin, Lisa A.
2016-01-01
Sharp increases in atmospheric CO2 are resulting in ocean warming, acidification and deoxygenation that threaten marine organisms on continental margins and their ecological functions and resulting ecosystem services. The relative influence of these stressors on biodiversity remains unclear, as well as the threshold levels for change and when secondary stressors become important. One strategy to interpret adaptation potential and predict future faunal change is to examine ecological shifts along natural gradients in the modern ocean. Here, we assess the explanatory power of temperature, oxygen and the carbonate system for macrofaunal diversity and evenness along continental upwelling margins using variance partitioning techniques. Oxygen levels have the strongest explanatory capacity for variation in species diversity. Sharp drops in diversity are seen as O2 levels decline through the 0.5–0.15 ml l⁻¹ (approx. 22–6 µM; approx. 21–5 matm) range, and as temperature increases through the 7–10°C range. pCO2 is the best explanatory variable in the Arabian Sea, but explains little of the variance in diversity in the eastern Pacific Ocean. By contrast, very little variation in evenness is explained by these three global change variables. The identification of sharp thresholds in ecological response is used here to predict areas of the seafloor where diversity is most at risk from future marine global change, noting that the existence of clear regional differences cautions against applying global thresholds. PMID:27122565
Earth’s oxygen cycle and the evolution of animal life
Reinhard, Christopher T.; Planavsky, Noah J.; Olson, Stephanie L.; Lyons, Timothy W.; Erwin, Douglas H.
2016-01-01
The emergence and expansion of complex eukaryotic life on Earth is linked at a basic level to the secular evolution of surface oxygen levels. However, the role that planetary redox evolution has played in controlling the timing of metazoan (animal) emergence and diversification, if any, has been intensely debated. Discussion has gravitated toward threshold levels of environmental free oxygen (O2) necessary for early evolving animals to survive under controlled conditions. However, defining such thresholds in practice is not straightforward, and environmental O2 levels can potentially constrain animal life in ways distinct from threshold O2 tolerance. Herein, we quantitatively explore one aspect of the evolutionary coupling between animal life and Earth’s oxygen cycle—the influence of spatial and temporal variability in surface ocean O2 levels on the ecology of early metazoan organisms. Through the application of a series of quantitative biogeochemical models, we find that large spatiotemporal variations in surface ocean O2 levels and pervasive benthic anoxia are expected in a world with much lower atmospheric pO2 than at present, resulting in severe ecological constraints and a challenging evolutionary landscape for early metazoan life. We argue that these effects, when considered in the light of synergistic interactions with other environmental parameters and variable O2 demand throughout an organism’s life history, would have resulted in long-term evolutionary and ecological inhibition of animal life on Earth for much of Middle Proterozoic time (∼1.8–0.8 billion years ago). PMID:27457943
Ocular Vestibular Evoked Myogenic Potentials in Response to Three Test Positions and Two Frequencies
Todai, Janvi K.; Congdon, Sharon L.; Sangi-Haghpeykar, Haleh; Cohen, Helen S.
2014-01-01
Objective To determine how eye closure, test positions, and stimulus frequencies influence ocular vestibular evoked myogenic potentials. Study Design This study used a within-subjects repeated measures design. Methods Twenty asymptomatic subjects were each tested on ocular vestibular evoked myogenic potentials in three head/eye conditions at 500 Hz and 1000 Hz using air-conducted sound: 1) Sitting upright, head erect, eyes open, looking up. 2) Lying supine, neck flexed 30 degrees, eyes open and looking up. 3) Lying supine, neck flexed 30 degrees, eyes closed and relaxed. Four dependent variables measured were n10, p16, amplitude, and threshold. Results The supine position/eyes open was comparable to sitting/eyes open and better than supine/eyes closed. Eyes closed resulted in lower amplitude, higher threshold, and prolonged latency. Significantly fewer subjects provided responses with eyes closed than with eyes open. No significant differences were found between both eyes open conditions. Both n10 and p16 were lower at 1000 Hz than at 500 Hz. Amplitude and threshold were higher at 1000 Hz than at 500 Hz. Conclusion Supine eyes open is a reliable alternative to sitting eyes open in patients who cannot maintain a seated position. Testing at 1000 Hz provides a larger response with a faster onset that fatigues faster than at 500 Hz. The increased variability and decreased response in the eyes closed position suggest that the eyes closed position is not reliable. PMID:24178911
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. It is a fact that higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. However, SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of a time series. However, time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. So the SD of the first-order difference (short-term SD) of the time series should be considered while updating the threshold value, to account for period-to-period variations inherent in a time series. In the present work, improved sample entropy (I-SampEn), a new methodology, has been proposed in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique results in assigning higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduction in complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
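The methodological change described here, deriving the tolerance from the short-term rather than the long-term SD, can be sketched as follows. The routine is a generic sample-entropy implementation, and the 0.2 scale factor is a common default rather than necessarily the value used in the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Conventional SampEn: tolerance r defaults to 0.2 times the long-term SD."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x) if r is None else r
    N = len(x)

    def n_matches(mm):
        # Count pairs of mm-length templates whose Chebyshev distance is within r
        templates = np.array([x[i:i + mm] for i in range(N - mm)])
        return sum(int(np.sum(np.max(np.abs(templates[i + 1:] - templates[i]), axis=1) <= r))
                   for i in range(len(templates) - 1))

    a, b = n_matches(m + 1), n_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def i_sampen(x, m=2, scale=0.2):
    """Sketch of the I-SampEn idea: derive the tolerance from the SD of the
    first-order differences (short-term SD) so that beat-to-beat variation is
    reflected in the threshold. The scale factor is an assumption, not the paper's."""
    x = np.asarray(x, dtype=float)
    return sample_entropy(x, m=m, r=scale * np.std(np.diff(x)))

# Illustrative comparison on a synthetic RR-interval-like series (seconds)
rr = 0.8 + 0.05 * np.random.default_rng(0).standard_normal(500)
print(sample_entropy(rr), i_sampen(rr))
```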
Liquefaction hazard for the region of Evansville, Indiana
Haase, Jennifer S.; Choi, Yoon S.; Nowack, Robert L.; Cramer, Chris H.; Boyd, Oliver S.; Bauer, Robert A.
2011-01-01
Maps of liquefaction hazard for each scenario earthquake present (1) Mean liquefaction potential index at each site, and (2) Probabilities that liquefaction potential index values exceed 5 (threshold for expression of surface liquefaction) and 12 (threshold for lateral spreading). Values for the liquefaction potential index are high in the River alluvium group, where the soil profiles are predominantly sand, while values in the Lacustrine terrace group are lower, owing to the predominance of clay. Liquefaction potential index values in the Outwash terrace group are less consistent because the soil profiles contain highly variable sequences of silty sand, clayey sand, and sandy clay, justifying the use of the Monte Carlo procedure to capture the consequences of this complexity.
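The exceedance probabilities described above follow naturally from a Monte Carlo treatment of soil-profile variability. The sketch below assumes a user-supplied function that returns the liquefaction potential index for one randomly sampled profile; the lognormal sampler shown is purely hypothetical and is not the authors' procedure.

```python
import random

def exceedance_probabilities(sample_lpi, n_trials=10_000, thresholds=(5.0, 12.0)):
    """Monte Carlo estimate of the mean LPI and of P(LPI > threshold), given a
    function that draws one LPI value from a randomly sampled soil profile."""
    draws = [sample_lpi() for _ in range(n_trials)]
    mean_lpi = sum(draws) / n_trials
    probs = {t: sum(v > t for v in draws) / n_trials for t in thresholds}
    return mean_lpi, probs

# Hypothetical sampler: LPI drawn from a lognormal spread around a site mean
mean_lpi, probs = exceedance_probabilities(lambda: random.lognormvariate(1.6, 0.6))
print(mean_lpi, probs)  # P(LPI > 5): surface liquefaction; P(LPI > 12): lateral spreading
```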
Global seabird responses to forage fish depletion - One-third for the birds
Cury, Philippe M.; Boyd, Ian L.; Bonhommeau, Sylvain; Anker-Nilssen, Tycho; Crawford, Robert J.M.; Furness, Robert W.; Mills, James A.; Murphy, Eugene J.; Österblom, Henrik; Paleczny, Michelle; Piatt, John F.; Roux, Jean-Paul; Shannon, Lynne; Sydeman, William J.
2011-01-01
Determining the form of key predator-prey relationships is critical for understanding marine ecosystem dynamics. Using a comprehensive global database, we quantified the effect of fluctuations in food abundance on seabird breeding success. We identified a threshold in prey (fish and krill, termed “forage fish”) abundance below which seabirds experience consistently reduced and more variable productivity. This response was common to all seven ecosystems and 14 bird species examined within the Atlantic, Pacific, and Southern Oceans. The threshold approximated one-third of the maximum prey biomass observed in long-term studies. This provides an indicator of the minimal forage fish biomass needed to sustain seabird productivity over the long term.
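One generic way to estimate such a breakpoint from long-term data is a piecewise-linear ("hockey-stick") fit of breeding success against prey abundance expressed as a fraction of its maximum. The sketch below illustrates that idea on synthetic data; it is not the authors' statistical model, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(x, threshold, slope, plateau):
    """Breeding success rises with prey abundance below the threshold
    and levels off at a plateau above it."""
    return np.where(x < threshold, plateau + slope * (x - threshold), plateau)

rng = np.random.default_rng(0)
prey = rng.uniform(0, 1, 200)                       # prey biomass / long-term maximum
success = hockey_stick(prey, 1 / 3, 1.5, 0.7) + rng.normal(0, 0.08, prey.size)

params, _ = curve_fit(hockey_stick, prey, success, p0=[0.5, 1.0, 0.5])
print("estimated threshold (fraction of max prey biomass):", params[0])
```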
Addressable inverter matrix for process and device characterization
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Sayah, H. R.
1985-01-01
The addressable inverter matrix consists of 222 inverters, each accessible with the aid of a shift register. The structure has proven useful in characterizing the variability of inverter transfer curves and in diagnosing processing faults. For the good 3-micron CMOS bulk inverters investigated, the percent standard deviation of the inverter threshold voltage was less than one percent and that of the inverter gain (the slope of the inverter transfer curve at the inverter threshold voltage) was less than 3 percent. The average noise margin for the inverters was near 2 volts for a power supply voltage of 5 volts. The specific faults studied included undersize pull-down transistor widths and various open contacts in the matrix.
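The two quantities characterized in the matrix, the inverter threshold voltage (where Vout equals Vin on the transfer curve) and the gain (the slope of the transfer curve at that point), can be extracted from a measured transfer curve as sketched below; the tanh-shaped curve is synthetic and stands in for measured data.

```python
import numpy as np

def inverter_threshold_and_gain(vin, vout):
    """Threshold: the input voltage where the transfer curve crosses Vout = Vin.
    Gain: the (negative) slope dVout/dVin evaluated at that crossing."""
    crossing = np.argmin(np.abs(vout - vin))          # index closest to Vout = Vin
    v_th = vin[crossing]
    gain = np.gradient(vout, vin)[crossing]
    return v_th, gain

# Synthetic CMOS-like transfer curve for a 5 V supply (illustration only)
vin = np.linspace(0, 5, 501)
vout = 2.5 * (1 - np.tanh(4 * (vin - 2.5)))
v_th, gain = inverter_threshold_and_gain(vin, vout)
print(f"threshold ~ {v_th:.2f} V, gain ~ {gain:.1f}")
```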
Measures of Quantum Synchronization in Continuous Variable Systems
NASA Astrophysics Data System (ADS)
Mari, A.; Farace, A.; Didier, N.; Giovannetti, V.; Fazio, R.
2013-09-01
We introduce and characterize two different measures which quantify the level of synchronization of coupled continuous variable quantum systems. The two measures allow us to extend to the quantum domain the notions of complete and phase synchronization. The Heisenberg principle sets a universal bound to complete synchronization. The measure of phase synchronization is, in principle, unbounded; however, in the absence of quantum resources (e.g., squeezing) the synchronization level is bounded below a certain threshold. We elucidate some interesting connections between entanglement and synchronization and, finally, discuss an application based on quantum optomechanical systems.
Swain, Kalpana; Pattnaik, Satyanarayan; Mallick, Subrata; Chowdary, Korla Appana
2009-01-01
In the present investigation, a controlled-release gastroretentive floating drug delivery system of theophylline was developed employing response surface methodology. A 3² randomized full factorial design was used to study the effect of formulation variables, namely the viscosity grade and content of hydroxypropyl methylcellulose (HPMC), and their interactions on the response variables. The floating lag time for all nine experimental trial batches was less than 2 min, with a floatation time of more than 12 h. Theophylline release from the polymeric matrix system followed non-Fickian anomalous transport. Multiple regression analysis revealed that both the viscosity and the content of HPMC had a statistically significant influence on all dependent variables, but the effect of these variables was found to be nonlinear above certain threshold values.
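The 3² factorial analysis described above amounts to regressing each response on the two HPMC factors and their interaction. Below is a minimal sketch with coded factor levels and an invented response (floating lag time), purely to illustrate the model form; none of the numbers come from the study.

```python
import numpy as np

# Coded factor levels for a 3x3 full factorial design: -1, 0, +1 for
# HPMC viscosity grade (x1) and HPMC content (x2); nine trial batches in total
levels = [-1, 0, 1]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Invented response, e.g. floating lag time (s), just to exercise the model
y = 60 - 8 * x1 - 12 * x2 + 5 * x1 * x2 + np.random.default_rng(0).normal(0, 2, 9)

# Multiple regression with main effects and interaction: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
design = np.column_stack([np.ones(9), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
print(dict(zip(["b0", "b1 (viscosity)", "b2 (content)", "b12 (interaction)"], coeffs.round(2))))
```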
Deciphering factors controlling groundwater arsenic spatial variability in Bangladesh
NASA Astrophysics Data System (ADS)
Tan, Z.; Yang, Q.; Zheng, C.; Zheng, Y.
2017-12-01
Elevated concentrations of geogenic arsenic in groundwater have been found in many countries to exceed 10 μg/L, the WHO's guideline value for drinking water. A common yet unexplained characteristic of groundwater arsenic spatial distribution is its extensive variability at various spatial scales. This study investigates factors influencing the spatial variability of groundwater arsenic in Bangladesh to improve the accuracy of models predicting arsenic exceedance rates spatially. A novel boosted regression tree method is used to establish a weak-learner ensemble model, which is compared to a linear model built with a conventional stepwise logistic regression method. Boosted regression tree models offer the advantage of capturing parameter interactions when big datasets are analyzed, in comparison to logistic regression. The point data set (n=3,538) of groundwater hydrochemistry with 19 parameters was obtained by the British Geological Survey in 2001. The spatial data sets of geological parameters (n=13) were from the Consortium for Spatial Information, Technical University of Denmark, University of East Anglia and the FAO, while the soil parameters (n=42) were from the Harmonized World Soil Database. The aforementioned parameters were regressed against categorical groundwater arsenic concentrations below or above three thresholds: 5 μg/L, 10 μg/L and 50 μg/L, to identify the respective controlling factors. The boosted regression tree method outperformed logistic regression at all three threshold levels in terms of accuracy, specificity and sensitivity, resulting in an improved spatial distribution map of the probability of groundwater arsenic exceeding each of the three thresholds when compared to a disjunctive-kriging-interpolated spatial arsenic map based on the same groundwater arsenic dataset. Boosted regression tree models also show that the most important controlling factors of groundwater arsenic distribution include groundwater iron content and well depth for all three thresholds. The probability that a well with iron content higher than 5 mg/L contains more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be more than 91%, 85% and 51%, respectively, while the probability that a well deeper than 160 m contains more than 5 μg/L, 10 μg/L and 50 μg/L As is estimated to be less than 38%, 25% and 14%, respectively.
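The modelling comparison can be reproduced in outline with off-the-shelf tools: a gradient-boosted tree classifier versus logistic regression, each predicting whether arsenic exceeds a threshold from covariates such as well depth and iron content. The sketch below uses scikit-learn on entirely synthetic data; the feature names and the 10 μg/L threshold echo the study design, but the data and coefficients are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 3000
depth_m = rng.uniform(5, 300, n)                 # well depth
iron_mg_l = rng.lognormal(0.5, 1.0, n)           # groundwater iron content
# Synthetic exceedance of the 10 ug/L threshold: shallower, iron-rich wells more at risk
logit = 1.5 * np.log1p(iron_mg_l) - 0.015 * depth_m
arsenic_exceeds = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([depth_m, iron_mg_l])
X_tr, X_te, y_tr, y_te = train_test_split(X, arsenic_exceeds, random_state=0)

for name, model in [("boosted trees", GradientBoostingClassifier()),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```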
Turbidity threshold sampling: Methods and instrumentation
Rand Eads; Jack Lewis
2001-01-01
Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...
Does Teacher Certification Program Lead to Better Quality Teachers? Evidence from Indonesia
ERIC Educational Resources Information Center
Kusumawardhani, Prita Nurmalia
2017-01-01
This paper examines the impact of the teacher certification program in Indonesia in 2007 and 2008 on student and teacher outcomes. I create a rule-based instrumental variable from discontinuities arising from the assignment mechanism of teachers into certification program. The thresholds are determined empirically. The study applies a two-sample…
Introducing Linear Functions: An Alternative Statistical Approach
ERIC Educational Resources Information Center
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
Amarillo, Yimy; Mato, Germán; Nadal, Marcela S.
2015-01-01
Thalamocortical (TC) neurons are involved in the generation and maintenance of brain rhythms associated with global functional states. The repetitive burst firing of TC neurons at delta frequencies (1–4 Hz) has been linked to the oscillations recorded during deep sleep and during episodes of absence seizures. To gain insight into the biophysical properties that are the basis for intrinsic delta oscillations in these neurons, we performed a bifurcation analysis of a minimal conductance-based thalamocortical neuron model including only the IT channel and the sodium and potassium leak channels. This analysis unveils the dynamics of repetitive burst firing of TC neurons, and describes how the interplay between the amplifying variable mT and the recovering variable hT of the calcium channel IT is sufficient to generate low-threshold oscillations in the delta band. We also explored the role of the hyperpolarization-activated cationic current Ih in this reduced model and determined that, albeit not required, Ih amplifies and stabilizes the oscillation. PMID:25999847
Effect of randomness in logistic maps
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2015-01-01
We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded (q_1 ≤ a_t ≤ q_2) random variables independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However
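The map is straightforward to iterate numerically. The sketch below draws a_t uniformly from [q1, q2] at each step and records the trajectory; the particular bounds chosen are illustrative, not values from the paper.

```python
import random

def random_logistic_map(x0, q1, q2, n_steps):
    """Iterate x_{t+1} = a_t * x_t * (1 - x_t) with a_t ~ Uniform(q1, q2)."""
    x, traj = x0, [x0]
    for _ in range(n_steps):
        a = random.uniform(q1, q2)
        x = a * x * (1 - x)
        traj.append(x)
    return traj

# Example with the maximum allowed value of a_t equal to 4 (the fully ergodic regime per the abstract)
trajectory = random_logistic_map(x0=0.3, q1=3.5, q2=4.0, n_steps=1000)
print(min(trajectory), max(trajectory))
```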
Julien, Elizabeth; Boobis, Alan R; Olin, Stephen S
2009-09-01
The ILSI Research Foundation convened a cross-disciplinary working group to examine current approaches for assessing dose-response and identifying safe levels of intake or exposure for four categories of bioactive agents-food allergens, nutrients, pathogenic microorganisms, and environmental chemicals. This effort generated a common analytical framework-the Key Events Dose-Response Framework (KEDRF)-for systematically examining key events that occur between the initial dose of a bioactive agent and the effect of concern. Individual key events are considered with regard to factors that influence the dose-response relationship and factors that underlie variability in that relationship. This approach illuminates the connection between the processes occurring at the level of fundamental biology and the outcomes observed at the individual and population levels. Thus, it promotes an evidence-based approach for using mechanistic data to reduce reliance on default assumptions, to quantify variability, and to better characterize biological thresholds. This paper provides an overview of the KEDRF and introduces a series of four companion papers that illustrate initial application of the approach to a range of bioactive agents.
NASA Astrophysics Data System (ADS)
Hsu, Sheng-Chia; Li, Yiming
2014-11-01
In this work, we study the impact of random interface traps (RITs) at the SiO_x/Si interface on the electrical characteristics of 16-nm-gate high-κ/metal gate (HKMG) bulk fin-type field effect transistor (FinFET) devices. Under the same threshold voltage, the effects of RIT position and number on the degradation of electrical characteristics are clarified with respect to different levels of RIT density of states (D_it). The variability of the off-state current (I_off) and drain-induced barrier lowering (DIBL) will be severely affected by RITs with high D_it varying from 5 × 10^12 to 5 × 10^13 eV^-1 cm^-2, owing to significant threshold voltage (V_th) fluctuation. The results of this study indicate that if the level of D_it is lower than 1 × 10^12 eV^-1 cm^-2, the normalized variability of the on-state current, I_off, V_th, DIBL, and subthreshold swing is within 5%.
Lazy workers are necessary for long-term sustainability in insect societies
Hasegawa, Eisuke; Ishii, Yasunori; Tada, Koichiro; Kobayashi, Kazuya; Yoshimura, Jin
2016-01-01
Optimality theory predicts the maximization of productivity in social insect colonies, but many inactive workers are found in ant colonies. Indeed, the low short-term productivity of ant colonies is often the consequence of high variation among workers in the threshold to respond to task-related stimuli. Why is such an inefficient strategy among colonies maintained by natural selection? Here, we show that inactive workers are necessary for the long-term sustainability of a colony. Our simulation shows that colonies with variable thresholds persist longer than those with invariable thresholds because inactive workers perform the critical function of replacing active workers when they become fatigued. Evidence of the replacement of active workers by inactive workers has been found in ant colonies. Thus, the presence of inactive workers increases the long-term persistence of the colony at the expense of decreasing short-term productivity. Inactive workers may represent a bet-hedging strategy in response to environmental stochasticity. PMID:26880339
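A toy version of the threshold/fatigue mechanism invoked here can be written in a few lines: workers respond when a task stimulus reaches their threshold, tire with sustained work, and must rest until recovered. The rules and parameters below are invented for illustration and are not the authors' model, but the variable-threshold colony typically achieves higher task coverage in this toy setting.

```python
import random

def simulate_colony(n_workers=50, variable_thresholds=True, steps=5000, seed=1):
    """Toy threshold/fatigue model: a non-exhausted worker joins the task when the
    stimulus reaches its response threshold; sustained work exhausts the worker,
    who must then rest until fully recovered. Returns task coverage, i.e. the
    fraction of steps on which at least one worker was active."""
    rng = random.Random(seed)
    thresholds = ([rng.uniform(0.1, 1.0) for _ in range(n_workers)]
                  if variable_thresholds else [0.5] * n_workers)
    fatigue = [0.0] * n_workers
    exhausted = [False] * n_workers
    stimulus, covered = 0.5, 0
    for _ in range(steps):
        active = [i for i in range(n_workers)
                  if not exhausted[i] and thresholds[i] <= stimulus]
        for i in range(n_workers):
            if i in active:
                fatigue[i] += 0.05                        # work tires the worker
                exhausted[i] = fatigue[i] >= 1.0
            else:
                fatigue[i] = max(fatigue[i] - 0.01, 0.0)  # rest restores slowly
                exhausted[i] = exhausted[i] and fatigue[i] > 0.0
        if active:
            covered += 1
            stimulus = max(stimulus - 0.02, 0.0)          # work keeps the stimulus down
        else:
            stimulus = min(stimulus + 0.02, 1.0)          # a neglected task builds up
    return covered / steps

print("variable thresholds, coverage:", simulate_colony(variable_thresholds=True))
print("uniform thresholds,  coverage:", simulate_colony(variable_thresholds=False))
```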
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Rupam; Huang, Zhi-Feng; Nadgorny, Boris
Multiple percolation transitions are observed in a binary system of RuO2-CaCu3Ti4O12 metal-semiconductor nanoparticle composites near percolation thresholds. Apart from a classical percolation transition, associated with the appearance of a continuous conductance path through RuO2 metal oxide nanoparticles, at least two additional tunneling percolation transitions are detected in this composite system. Such behavior is consistent with the recently emerged picture of a quantum conductivity staircase, which predicts several percolation tunneling thresholds in a system with a hierarchy of local tunneling conductance, due to various degrees of proximity of adjacent conducting particles distributed in an insulating matrix. Here, we investigate a different type of percolation tunneling staircase, associated with a more complex conductive and insulating particle microstructure of two types of non-spherical constituents. As tunneling is strongly temperature dependent, we use variable temperature measurements to emphasize the hierarchical nature of consecutive tunneling transitions. The critical exponents corresponding to specific tunneling percolation thresholds are found to be nonuniversal and temperature dependent.
Non-linear responses of glaciated prairie wetlands to climate warming
Johnson, W. Carter; Werner, Brett; Guntenspergen, Glenn R.
2016-01-01
The response of ecosystems to climate warming is likely to include threshold events when small changes in key environmental drivers produce large changes in an ecosystem. Wetlands of the Prairie Pothole Region (PPR) are especially sensitive to climate variability, yet the possibility that functional changes may occur more rapidly with warming than expected has not been examined or modeled. The productivity and biodiversity of these wetlands are strongly controlled by the speed and completeness of a vegetation cover cycle driven by the wet and dry extremes of climate. Two thresholds involving duration and depth of standing water must be exceeded every few decades or so to complete the cycle and to produce highly functional wetlands. Model experiments at 19 weather stations employing incremental warming scenarios determined that wetland function across most of the PPR would be diminished beyond a climate warming of about 1.5–2.0 °C, a critical temperature threshold range identified in other climate change studies.
Cross-contamination of foods and implications for food allergic patients.
Taylor, Steve L; Baumert, Joseph L
2010-07-01
Cross-contamination presents a risk of unknown magnitude for food allergic consumers. Published cases likely represent the tip of a rather large iceberg. Cross-contamination can occur in homes, restaurants, food manufacturing plants, and on farms. The frequency of cross-contamination as the cause of accidental exposures to allergenic foods is unknown. Food allergic individuals can react to ingestion of trace levels of the offending food, although a highly variable range of threshold doses exist among populations of food allergic individuals. The magnitude of the risk posed to food allergic consumers by cross-contamination is characterized by the frequency of exposure to cross-contaminated foods, the dose of exposure, and the individual's threshold dose. The food and food service industry (and food preparers in homes as well) have the responsibility to provide and prepare foods that are safe for food allergic consumers, but quality of life may be improved with the recognition that safe (though very low) thresholds do exist.
Richardson, James K.; DeMott, Trina; Allet, Lara; Kim; Ashton-Miller, James A.
2014-01-01
Introduction We determined lower limb neuromuscular capacities associated with falls and fall-related injuries in older people with declining peripheral nerve function. Methods Thirty-two subjects (67.4 ± 13.4 years; 19 with type 2 diabetes), representing a spectrum of peripheral neurologic function, were evaluated with frontal plane proprioceptive thresholds at the ankle, frontal plane motor function at the ankle and hip, and prospective follow-up for 1 year. Results Falls and fall-related injuries were reported by 20 (62.5%) and 14 (43.8%) subjects, respectively. The ratio of hip adductor rate of torque development to ankle proprioceptive threshold (HipSTR/AnkPRO) predicted falls (pseudo-R2 = .726) and injury (pseudo-R2 = .382). No other variable maintained significance in the presence of HipSTR/AnkPRO. Discussion Fall and injury risk in the population studied is related inversely to HipSTR/AnkPRO. Increasing rapidly available hip strength in patients with neuropathic ankle sensory impairment may decrease risk of falls and related injuries. PMID:24282041
Development of a landslide EWS based on rainfall thresholds for Tuscany Region, Italy
NASA Astrophysics Data System (ADS)
Rosi, Ascanio; Segoni, Samuele; Battistini, Alessandro; Rossi, Guglielmo; Catani, Filippo; Casagli, Nicola
2017-04-01
We present the set-up of a landslide EWS based on rainfall thresholds for the Tuscany region (central Italy), which shows a heterogeneous distribution of reliefs and precipitation. The work started with the definition of a single set of thresholds for the whole region, but this proved unsuitable for EWS purposes because of the heterogeneity of the Tuscan territory and the non-repeatability of the analyses, which were affected by a high degree of subjectivity. To overcome this problem, the work proceeded with the implementation of software capable of objectively defining the rainfall thresholds, since some of the main issues with these thresholds are the subjectivity of the analysis and therefore their non-repeatability. This software, named MaCumBA, is largely automated and can analyze, in a short time, a high number of rainfall events to define several parameters of the threshold, such as the intensity (I) and the duration (D) of the rainfall event, the no-rain time gap (NRG: how many hours without rain are needed to consider two events as separate) and the equation describing the threshold. The possibility of quickly performing several analyses led to the decision to divide the territory into 25 homogeneous areas (named alert zones, AZ), so that a single threshold could be defined for each AZ. For the definition of the thresholds, two independent datasets (of joint rainfall-landslide occurrences) have been used: a calibration dataset (data from 2000 to 2007) and a validation dataset (2008-2009). Once the thresholds were defined, a WebGIS-based EWS has been implemented. In this system it is possible to focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h; forecasting data are collected from LAMI (Limited Area Model Italy) rainfall forecasts. The EWS works on the basis of the threshold parameters defined by MaCumBA (I, D, NRG). An important feature of the warning system is that the visualization of the thresholds in the WebGIS interface may vary in time depending on when the starting time of the rainfall event is set. Therefore, the starting time of the rainfall event is treated as a variable by the system: whenever new rainfall data are available, a recursive algorithm identifies the starting time for which the rainfall path is closest to or exceeds the threshold. This is considered the most hazardous condition, and it is displayed by the WebGIS interface. One more issue that came to the surface after the EWS implementation was the time-limited validity of the thresholds. While rainfall thresholds can give good results, their validity is limited in time because of several factors, such as changes of pluviometric regime, land use and urban development. Furthermore, the availability of new landslide data can lead to more robust results. For the aforementioned reasons, some of the thresholds defined for the Tuscany region were updated using new landslide data (from 2010 to March 2013). A comparison between updated and former thresholds clearly shows that the performance of an EWS can be enhanced if the thresholds are constantly updated.
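Rainfall thresholds of the kind MaCumBA derives are commonly expressed as a power law between event intensity and duration, I = alpha * D^beta, positioned so that only a chosen fraction of landslide-triggering events falls below the curve. The sketch below illustrates that generic approach on synthetic events; it is not the MaCumBA algorithm, and all parameter values are placeholders.

```python
import numpy as np

def fit_id_threshold(durations_h, intensities_mm_h, exceedance=0.05):
    """Fit log10(I) = log10(alpha) + beta * log10(D) by least squares, then shift
    the intercept down so that only `exceedance` of the triggering events fall
    below the curve (a lower-envelope style threshold)."""
    log_d, log_i = np.log10(durations_h), np.log10(intensities_mm_h)
    beta, intercept = np.polyfit(log_d, log_i, 1)
    residuals = log_i - (intercept + beta * log_d)
    shifted = intercept + np.quantile(residuals, exceedance)
    return 10 ** shifted, beta          # alpha, beta in I = alpha * D**beta

# Synthetic landslide-triggering rainfall events (duration in h, mean intensity in mm/h)
rng = np.random.default_rng(3)
durations = rng.uniform(1, 72, 150)
intensities = 20 * durations ** -0.6 * rng.lognormal(0, 0.3, 150)
alpha, beta = fit_id_threshold(durations, intensities)
print(f"threshold: I = {alpha:.1f} * D^{beta:.2f} mm/h")
```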
Herlitz, Georg N.; Sanders, Renee L.; Cheung, Nora H.; Coyle, Susette M.; Griffel, Benjamin; Macor, Marie A.; Lowry, Stephen F.; Calvano, Steve E.; Gale, Stephen C.
2014-01-01
Introduction Human injury or infection induces systemic inflammation with characteristic neuro-endocrine responses. Fluctuations in autonomic function during inflammation are reflected by beat-to-beat variation in heart rate, termed heart rate variability (HRV). In the present study, we determine threshold doses of endotoxin needed to induce observable changes in markers of systemic inflammation, we investigate whether metrics of HRV exhibit a differing threshold dose from other inflammatory markers, and we investigate the size of data sets required for meaningful use of multi-scale entropy (MSE) analysis of HRV. Methods Healthy human volunteers (n=25) were randomized to receive placebo (normal saline) or endotoxin/lipopolysaccharide (LPS): 0.1, 0.25, 0.5, 1.0, or 2.0 ng/kg administered intravenously. Vital signs were recorded every 30 minutes for 6 hours and then at 9, 12, and 24 hours after LPS. Blood samples were drawn at specific time points for cytokine measurements. HRV analysis was performed using EKG epochs of 5 minutes. MSE for HRV was calculated for all dose groups to scale factor 40. Results The lowest significant threshold dose was noted in core temperature at 0.25 ng/kg. Endogenous TNF-α and IL-6 were significantly responsive at the next dosage level (0.5 ng/kg), along with elevations in circulating leukocytes and heart rate. Responses were exaggerated at higher doses (1 and 2 ng/kg). Time domain and frequency domain HRV metrics similarly suggested a threshold dose, differing from placebo at 1.0 and 2.0 ng/kg, below which no clear pattern in response was evident. By applying repeated-measures ANOVA across scale factors, a significant decrease in MSE was seen at 1.0 and 2.0 ng/kg by 2 hours post-exposure to LPS. While not statistically significant below 1.0 ng/kg, MSE unexpectedly decreased across all groups in an orderly dose-response pattern not seen in the other outcomes. Conclusions By using repeated-measures ANOVA across scale factors, MSE can detect autonomic change after LPS challenge in a group of 25 subjects using EKG epochs of only 5 minutes and entropy analysis to a scale factor of only 40, potentially facilitating MSE’s wider use as a research tool or bedside monitor. Traditional markers of inflammation generally exhibit threshold dose behavior. In contrast, MSE’s apparent continuous dose-response pattern, while not statistically verifiable in this study, suggests a potential subclinical harbinger of infectious or other insult. The possible derangement of autonomic complexity prior to or independent of the cytokine surge cannot be ruled out. Future investigation should focus on confirmation of overt inflammation following observed decreases in MSE in a clinical setting. PMID:25526373
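Multi-scale entropy is obtained by coarse-graining the RR-interval series at successive scale factors and computing sample entropy of each coarse-grained series. The sketch below shows that pipeline with a compact SampEn routine; the embedding dimension and the 0.2 tolerance factor are common defaults and are not necessarily those used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Compact SampEn (Chebyshev distance, tolerance r given in absolute units)."""
    x = np.asarray(x, dtype=float)
    def n_matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        return sum(int(np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r))
                   for i in range(len(t) - 1))
    a, b = n_matches(m + 1), n_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, max_scale=40, m=2):
    """Coarse-grain the series at each scale factor (non-overlapping means of
    `scale` consecutive points) and compute SampEn of each coarse-grained series,
    keeping the tolerance fixed from the original series."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x)
    curve = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)
        curve.append(sample_entropy(coarse, m=m, r=r))
    return curve

# Illustrative RR-interval-like series; a real 5-minute EKG epoch would be shorter,
# which is exactly the data-length concern the study addresses
rr = 0.8 + 0.05 * np.random.default_rng(1).standard_normal(1200)
print(multiscale_entropy(rr, max_scale=5))
```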
Motor unit firing rate patterns during voluntary muscle force generation: a simulation study
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. Muscle force is generated by a combination of motor unit (MU) recruitment and changes in the discharge rate of active MUs. There have been two basic MU recruitment and firing rate paradigms reported in the literature, which describe the control of the MUs during force generation. The first (termed the reverse ‘onion skin’ profile), exhibits lower firing rates for lower threshold units, with higher firing rates occurring in higher threshold units. The second (termed the ‘onion skin’ profile), exhibits an inverse arrangement, with lower threshold units reaching higher firing rates. Approach. Using a simulation of the MU activity in a hand muscle, this study examined the force generation capacity and the variability of the muscle force magnitude at different excitation levels of the MU pool under these two different MU control paradigms. We sought to determine which rate/recruitment scheme was more efficient for force generation, and which scheme gave rise to the lowest force variability. Main results. We found that the force output of both firing patterns leads to graded force output at low excitation levels, and that the force generation capacity of the two different paradigms diverged around 50% excitation. In the reverse ‘onion skin’ pattern, at 100% excitation, the force output reached up to 88% of maximum force, whereas for the ‘onion skin’ pattern, the force output only reached up to 54% of maximum force at 100% excitation. The force variability was lower at the low to moderate force levels under the ‘onion skin’ paradigm than with the reverse ‘onion skin’ firing patterns, but this effect was reversed at high force levels. Significance. This study captures the influence of MU recruitment and firing rate organization on muscle force properties, and our results suggest that the different firing organizations can be beneficial at different levels of voluntary muscle force generation and perhaps for different tasks.
The Delineation of Coral Bleaching Thresholds and Future Reef Health, Little Cayman Cayman Islands
NASA Astrophysics Data System (ADS)
Manfrino, C.; Van Hooidonk, R. J.; Manzello, D.; Hendee, J.
2011-12-01
The global rise in sea temperature through anthropogenic climate change is affecting coral reef ecosystems through a phenomenon known as coral bleaching, a common reaction to thermally induced physiological stress in reef-building corals that often leads to coral mortality. We describe aspects of the most prevalent episode of coral bleaching ever recorded at Little Cayman, Cayman Islands, during the fall of 2009. Scleractinian coral species exhibiting susceptibility to thermal stress and bleaching in Little Cayman were, in order, Siderastrea siderea, Montastraea annularis, and Montastraea faveolata, while Diploria strigosa and Agaricia spp. were less so, yet still showed considerable bleaching prevalence and severity. In contrast, the least susceptible were Porites porites, Porites astreoides, and Montastraea cavernosa. These observations and other reported observations of coral bleaching, together with 29 years (1982-2010) of satellite-derived sea surface temperatures, were used in a Degree Heating Weeks (DHW) and Peirce Skill Score (PSS) analysis to calculate a bleaching threshold above which bleaching was expected to occur. A threshold of 4.2 DHW had the highest skill, with a PSS of 0.70. This threshold and susceptibility ranking are used in combination with SST data from global, coupled ocean-atmosphere general circulation models (GCMs) from the fourth IPCC assessment to forecast future reef health on Little Cayman. While these GCMs possess skill in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions. To overcome this, a multi-model ensemble of GCMs is corrected for mean, annual cycle and ENSO variability prior to calculating future thermal stress. Preliminary results show that from 2045 on, Little Cayman is likely to see more than two massive bleaching episodes per decade.
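Degree Heating Weeks accumulate sea surface temperature exceedances above a local bleaching reference over a trailing 12-week window, and the Peirce Skill Score compares hit and false-alarm rates for a candidate DHW threshold. The sketch below outlines both calculations on synthetic data; the 29 °C reference, the simplified hotspot definition, and the "observed" bleaching series are placeholders rather than the study's inputs.

```python
import numpy as np

def degree_heating_weeks(weekly_sst, reference_c, window=12):
    """DHW at each week: sum of positive (SST - reference) exceedances, in
    degC-weeks, over the trailing `window` weeks (simplified hotspot rule)."""
    hotspots = np.maximum(np.asarray(weekly_sst) - reference_c, 0.0)
    return np.array([hotspots[max(0, i - window + 1):i + 1].sum()
                     for i in range(len(hotspots))])

def peirce_skill_score(predicted, observed):
    """PSS = hit rate - false alarm rate for binary predictions vs observations."""
    predicted, observed = np.asarray(predicted, bool), np.asarray(observed, bool)
    hits = np.sum(predicted & observed) / max(np.sum(observed), 1)
    false_alarms = np.sum(predicted & ~observed) / max(np.sum(~observed), 1)
    return hits - false_alarms

# Placeholder example: one year of weekly SST around a 29 degC reference, with
# noisy "observed" bleaching in the warmest weeks (synthetic, for illustration only)
rng = np.random.default_rng(7)
sst = 28.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 52)) + rng.normal(0, 0.3, 52)
dhw = degree_heating_weeks(sst, reference_c=29.0)
observed_bleaching = (dhw + rng.normal(0, 1, 52)) > 6.0
print(peirce_skill_score(dhw > 4.2, observed_bleaching))
```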
Rautaharju, Pentti M; Zhang, Zhu-ming; Gregg, Richard E; Haisty, Wesley K; Z Vitolins, Mara; Curtis, Anne B; Warren, James; Horaĉek, Milan B; Zhou, Sophia H; Soliman, Elsayed Z
2013-01-01
Substantial new information has emerged recently about the prognostic value of a variety of new ECG variables. The objective of the present study was to establish reference standards for these novel risk predictors in a large, ethnically diverse cohort of healthy women from the Women's Health Initiative (WHI) study. The study population consisted of 36,299 healthy women. Racial differences in rate-adjusted QT end (QT(ea)) and QT peak (QT(pa)) intervals as linear functions of RR were small, leading to the conclusion that 450 and 390 ms are applicable as thresholds for prolonged and shortened QT(ea), and similarly 365 and 295 ms for prolonged and shortened QT(pa), respectively. As a threshold for increased dispersion of global repolarization (T(peak)T(end) interval), 110 ms was established for white and Hispanic women and 120 ms for African-American and Asian women. ST elevation and depression values for the monitoring leads of each person, with limb electrodes at Mason-Likar positions and chest leads at the level of V1 and V2, were first computed from the standard leads using lead transformation coefficients derived from 892 body surface maps, and normal standards were subsequently determined for the monitoring leads, including vessel-specific bipolar left anterior descending, left circumflex artery and right coronary artery leads. The results support the choice of 150 μV as a tentative threshold for abnormal ST-onset elevation for all monitoring leads. Body mass index (BMI) had a profound effect on Cornell voltage and Sokolow-Lyon voltage in all racial groups, and their utility for left ventricular hypertrophy classification remains open. Common thresholds for all racial groups are applicable for QT(ea) and QT(pa) intervals and ST elevation. Race-specific normal standards are required for many other ECG parameters. Copyright © 2013 Elsevier Inc. All rights reserved.
Evaluating Temporal Consistency in Marine Biodiversity Hotspots.
Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S
2015-01-01
With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other environmental perturbations, our work highlights the need for scientists and conservation managers to consider both spatial and temporal dynamics when designating biodiversity hotspots.
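A simple way to examine temporal consistency in the spirit of this analysis is to designate hotspots per year from a diversity threshold and then ask how often each grid cell qualifies across years. The sketch below does this on a synthetic richness grid; the thresholding rule (the across-years mean of annual 95th percentiles) is a stand-in, not the authors' spatial frequency distribution method.

```python
import numpy as np

rng = np.random.default_rng(5)
n_years, n_cells = 8, 500
# Synthetic annual species richness per grid cell (stand-in for trawl survey data)
richness = rng.poisson(lam=rng.uniform(10, 30, n_cells), size=(n_years, n_cells))

# Stand-in hotspot threshold: mean across years of each year's 95th percentile
annual_p95 = np.percentile(richness, 95, axis=1)
threshold = annual_p95.mean()

hotspot_by_year = richness > threshold              # annual hotspot designations
consistency = hotspot_by_year.mean(axis=0)          # fraction of years each cell qualifies
print("cells that are hotspots in >50% of years:", int(np.sum(consistency > 0.5)))
```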
NASA Astrophysics Data System (ADS)
Garcia Galiano, S. G.; Giraldo Osorio, J. D.; Nguyen, P.; Hsu, K. L.; Braithwaite, D.; Olmos, P.; Sorooshian, S.
2015-12-01
Studying Spain's long-term variability and changing trends in rainfall, given its unique position in the Mediterranean basin (i.e., the latitudinal gradient from North to South and its orographic variation), can provide valuable insight into how the hydroclimatology of the region has changed. A recently released high-resolution satellite-based global daily precipitation climate dataset, PERSIANN-CDR (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network - Climate Data Record), provided the opportunity to conduct such a study. It covers the period from 01/01/1983 to date, at 0.25° resolution. In areas without a dense network of rain gauges, the PERSIANN-CDR dataset could be useful for assessing the reliability of regional climate models (RCMs), in order to build a robust RCM ensemble and reduce the uncertainties in climate and hydrological projections. However, before using this dataset for RCM evaluation, an assessment of the performance of PERSIANN-CDR against in-situ observations is necessary. The high-resolution gridded daily rain-gauge dataset named Spain02 was employed in this study. The variable analysed was dry spell length (DSL), considering 1 mm and 10 mm as thresholds of daily rainfall, over the period 1988-2007. A procedure for improving the consistency and homogeneity between the two datasets was applied. The assessment is based on distributional similarity, and well-known statistical tests (the two-sample Kolmogorov-Smirnov and Chi-Square tests) are used as fitting criteria. The results demonstrate a good fit of PERSIANN-CDR over the whole of Spain for the 10 mm/day threshold. However, for the 1 mm/day threshold, PERSIANN-CDR compares well with the Spain02 dataset in areas with high rainfall (North of Spain), while in semiarid areas (South East of Spain) there is strong overestimation of short DSLs. Overall, PERSIANN-CDR demonstrates its robustness in the simulation of DSLs for the highest thresholds.
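Dry spell lengths for a given daily-rainfall threshold are simply the lengths of consecutive runs of days below that threshold, and the distributional comparison described above can be made with a two-sample Kolmogorov-Smirnov test. A minimal sketch, with synthetic daily series standing in for the PERSIANN-CDR and Spain02 grids:

```python
import numpy as np
from scipy.stats import ks_2samp

def dry_spell_lengths(daily_precip_mm, threshold_mm=1.0):
    """Lengths of consecutive runs of days with precipitation below the threshold."""
    spells, run = [], 0
    for p in daily_precip_mm:
        if p < threshold_mm:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    if run:
        spells.append(run)
    return spells

# Synthetic stand-ins for satellite and gauge-based daily rainfall (mm/day)
rng = np.random.default_rng(11)
satellite = rng.gamma(0.4, 6.0, 7300)   # ~20 years of daily values
gauges = rng.gamma(0.45, 6.0, 7300)
for thr in (1.0, 10.0):
    stat, p = ks_2samp(dry_spell_lengths(satellite, thr), dry_spell_lengths(gauges, thr))
    print(f"threshold {thr} mm/day: KS statistic = {stat:.3f}, p = {p:.3f}")
```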
Variation in hearing within a wild population of beluga whales (Delphinapterus leucas).
Mooney, T Aran; Castellote, Manuel; Quakenbush, Lori; Hobbs, Roderick; Gaglione, Eric; Goertz, Caroline
2018-05-08
Documenting hearing abilities is vital to understanding a species' acoustic ecology and for predicting the impacts of increasing anthropogenic noise. Cetaceans use sound for essential biological functions such as foraging, navigation and communication; hearing is considered to be their primary sensory modality. Yet, we know little regarding the hearing of most, if not all, cetacean populations, which limits our understanding of their sensory ecology, population level variability and the potential impacts of increasing anthropogenic noise. We obtained audiograms (5.6-150 kHz) of 26 wild beluga whales to measure hearing thresholds during capture-release events in Bristol Bay, AK, USA, using auditory evoked potential methods. The goal was to establish the baseline population audiogram, incidences of hearing loss and general variability in wild beluga whales. In general, belugas showed sensitive hearing with low thresholds (<80 dB) from 16 to 100 kHz, and most individuals (76%) responded to at least 120 kHz. Despite belugas often showing sensitive hearing, thresholds were usually above or approached the low ambient noise levels measured in the area, suggesting that a quiet environment may be associated with hearing sensitivity and that hearing thresholds in the most sensitive animals may have been masked. Although this is just one wild population, the success of the method suggests that it should be applied to other populations and species to better assess potential differences. Bristol Bay beluga audiograms showed substantial (30-70 dB) variation among individuals; this variation increased at higher frequencies. Differences among individual belugas reflect that testing multiple individuals of a population is necessary to best describe maximum sensitivity and population variance. The results of this study quadruple the number of individual beluga whales for which audiograms have been conducted and provide the first auditory data for a population of healthy wild odontocetes. © 2018. Published by The Company of Biologists Ltd.
Variable Mixed Orbital Character in the Photoelectron Angular Distribution of NO_{2}
NASA Astrophysics Data System (ADS)
Laws, Benjamin A.; Cavanagh, Steven J.; Lewis, Brenton R.; Gibson, Stephen T.
2017-06-01
NO_{2}, a key component of photochemical smog and an important species in the Earth's atmosphere, is an example of a molecule which exhibits significant mixed orbital character in the HOMO. In photoelectron experiments the geometric properties of the parent anion orbital are reflected in the photoelectron angular distribution (PAD), an area of research that has benefited greatly from the ability of velocity-map imaging (VMI) to simultaneously record both the energetic and angular information, with 100% collection efficiency. Photoelectron spectra of NO_{2}^{-}, taken over a range of wavelengths (355-520 nm) with the ANU's VMI spectrometer, reveal an anomalous jump in the anisotropy parameter near threshold. Consequently, the orbital behavior of NO_{2}^{-} appears to be quite different near threshold compared to detachment at higher photon energies. This surprising effect is due to the Wigner Threshold law, which causes p orbital character to dominate the photodetachment cross-section near threshold, before the mixed s/d orbital character becomes significant at higher electron kinetic energies. By extending recent work on binary character models to form a more general expression, the variable mixed orbital character of NO_{2}^{-} can be described. This study provides the first multi-wavelength NO_{2} anisotropy data, which is shown to be in decent agreement with much earlier zero-core model predictions of the anisotropy parameter. K. J. Reed, A. H. Zimmerman, H. C. Andersen, and J. I. Brauman, J. Chem. Phys. 64, 1368, (1976). doi:10.1063/1.432404 D. Khuseynov, C. C. Blackstone, L. M. Culberson, and A. Sanov, J. Chem. Phys. 141, 124312, (2014). doi:10.1063/1.4896241 W. B. Clodius, R. M. Stehman, and S. B. Woo, Phys. Rev. A. 28, 760, (1983). doi:10.1103/PhysRevA.28.760 Research supported by the Australian Research Council Discovery Project Grant DP160102585
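For one-photon detachment with linearly polarized light, the photoelectron angular distribution is conventionally written I(theta) proportional to 1 + beta*P2(cos theta), and the Wigner law gives a near-threshold partial cross-section scaling of E^(l+1/2), so lower-l outgoing waves dominate as the kinetic energy approaches zero. The short sketch below evaluates both expressions with illustrative values of beta and l; it is a textbook-level illustration, not the binary character model discussed in the abstract.

```python
import numpy as np

def pad_intensity(theta, beta):
    """Photoelectron angular distribution for linearly polarized light:
    I(theta) proportional to 1 + beta * P2(cos(theta))."""
    p2 = 0.5 * (3 * np.cos(theta) ** 2 - 1)
    return 1 + beta * p2

def wigner_cross_section(e_kinetic, ell):
    """Wigner threshold law: the partial cross-section for an outgoing wave of
    angular momentum ell scales as E^(ell + 1/2) near threshold, so lower-ell
    waves dominate as the electron kinetic energy approaches zero."""
    return e_kinetic ** (ell + 0.5)

theta = np.linspace(0, np.pi, 7)
print(pad_intensity(theta, beta=-0.5))                         # illustrative anisotropy parameter
print(wigner_cross_section(np.array([0.01, 0.1, 1.0]), ell=1))  # relative scaling only
```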
Simmons, J M; Ackermann, R F; Gallistel, C R
1998-10-15
Lesions in the medial forebrain bundle rostral to a stimulating electrode have variable effects on the rewarding efficacy of self-stimulation. We attempted to account for this variability by measuring the anatomical and functional effects of electrolytic lesions at the level of the lateral hypothalamus (LH) and by correlating these effects to postlesion changes in threshold pulse frequency (pps) for self-stimulation in the ventral tegmental area (VTA). We implanted True Blue in the VTA and compared cell labeling patterns in forebrain regions of intact and lesioned animals. We also compared stimulation-induced regional [14C]deoxyglucose (DG) accumulation patterns in the forebrains of intact and lesioned animals. As expected, postlesion threshold shifts varied: threshold pps remained the same or decreased in eight animals, increased by small but significant amounts in three rats, and increased substantially in six subjects. Unexpectedly, LH lesions did not anatomically or functionally disconnect all forebrain nuclei from the VTA. Most septal and preoptic regions contained equivalent levels of True Blue label in intact and lesioned animals. In both intact and lesioned groups, VTA stimulation increased metabolic activity in the fundus of the striatum (FS), the nucleus of the diagonal band, and the medial preoptic area. On the other hand, True Blue labeling demonstrated anatomical disconnection of the accumbens, FS, substantia innominata/magnocellular preoptic nucleus (SI/MA), and bed nucleus of the stria terminalis. [14C]DG autoradiography indicated functional disconnection of the lateral preoptic area and SI/MA. Correlations between patterns of True Blue labeling or [14C]deoxyglucose accumulation and postlesion shifts in threshold pulse frequency were weak and generally negative. These direct measures of connectivity concord with the behavioral measures in suggesting a diffuse net-like connection between forebrain nuclei and the VTA.
Crop responses to climatic variation
Porter, John R; Semenov, Mikhail A
2005-01-01
The yield and quality of food crops are central to the well-being of humans and are directly affected by climate and weather. Initial studies of climate change effects on crops focussed on the effects of increased carbon dioxide (CO2) level and/or global mean temperature and/or rainfall and nutrition on crop production. However, crops can respond nonlinearly to changes in their growing conditions, exhibit threshold responses and are subject to combinations of stress factors that affect their growth, development and yield. Thus, climate variability and changes in the frequency of extreme events are important for yield, its stability and quality. In this context, threshold temperatures for crop processes are found not to differ greatly between crops and are important to define for the major food crops, to assist climate modellers in predicting the occurrence of crop critical temperatures and their temporal resolution. This paper demonstrates the impacts of climate variability on crop production in a number of crops. Increasing temperature and precipitation variability increases the risks to yield, as shown via computer simulation and experimental studies. The issue of food quality has not been given sufficient importance when assessing the impact of climate change on food, and this is addressed. Using simulation models of wheat, the concentration of grain protein is shown to respond to changes in the mean and variability of temperature and precipitation events. The paper concludes with a discussion of adaptation possibilities for crops in response to drought and argues that characters that enable better exploration of the soil and slower leaf canopy expansion could lead to higher crop transpiration efficiency. PMID:16433091
Aslan-Sungur, Guler; Lee, Xuhui; Evrendilek, Fatih; Karakaya, Nusret
2016-06-01
Peatland ecosystems play an important role in the global carbon (C) cycle as significant C sinks. However, human-induced disturbances can turn these sinks into sources of atmospheric CO2. Long-term measurements are needed to understand seasonal and interannual variability of net ecosystem CO2 exchange (NEE) and the effects of hydrological conditions and their disturbances on C fluxes. Continuous eddy-covariance measurements of NEE were conducted between August 2010 and April 2014 at Yenicaga temperate peatland (Turkey), which was drained for agricultural use and for peat mining until 2009. Annual NEE during the three full years of measurement indicated that the peatland acted as a CO2 source with large interannual variability, at rates of 246, 244 and 663 g C m⁻² yr⁻¹ in 2011, 2012 and 2013, respectively, except during June 2011 and May to July 2012. The emission strengths were comparable to those found for severely disturbed tropical peatlands. The peak CO2 emissions occurred in the dry summer of 2013, when the water table level (WTL) was below a threshold value of -60 cm and the soil water content (SWC) was below a threshold value of 70% by volume. A water availability index was found to have stronger explanatory power for variations in monthly ecosystem respiration (ER) than the traditional water status indicators (SWC and WTL). Air temperature, evapotranspiration and vapor pressure deficit were the most significant variables strongly correlated with NEE and its component fluxes of gross primary production and ER. Copyright © 2016 Elsevier B.V. All rights reserved.
I feel good! Gender differences and reporting heterogeneity in self-assessed health.
Schneider, Udo; Pfarr, Christian; Schneider, Brit S; Ulrich, Volker
2012-06-01
For empirical analysis and policy-oriented recommendations, the precise measurement of individual health or well-being is essential. The difficulty is that the answer may depend on individual reporting behaviour. Moreover, if an individual's health perception varies with certain attitudes of the respondent, reporting heterogeneity may lead to index or cut-point shifts of the health distribution, causing estimation problems. An index shift is a parallel shift in the thresholds of the underlying distribution of health categories. In contrast, a cut-point shift means that the relative position of the thresholds changes, implying different response behaviour. Our paper aims to detect how socioeconomic determinants and health experiences influence the individual valuation of health. We analyse the reporting behaviour of individuals on their self-assessed health status, a five-point categorical variable. Using German panel data, we control for observed heterogeneity in the categorical health variable as well as unobserved individual heterogeneity in the panel estimation. In the empirical analysis, we find strong evidence for cut-point shifts. Our estimation results show different impacts of socioeconomic and health-related variables on the five categories of self-assessed health. Moreover, the answering behaviour varies between female and male respondents, pointing to gender-specific perception and assessment of health. Hence, in case of reporting heterogeneity, using self-assessed measures in empirical studies may be misleading and the information needs to be handled with care.
On the Design of a Fuzzy Logic-Based Control System for Freeze-Drying Processes.
Fissore, Davide
2016-12-01
This article is focused on the design of a fuzzy logic-based control system to optimize a drug freeze-drying process. The goal of the system is to keep product temperature as close as possible to the threshold value of the formulation being processed, without exceeding it, in such a way that product quality is not jeopardized and the sublimation flux is maximized. The method involves the measurement of product temperature and a set of rules obtained through process simulation, with the goal of obtaining a single set of rules valid for products with very different characteristics. Input variables are the difference between the temperature of the product and the threshold value, the difference between the temperature of the heating fluid and that of the product, and the rate of change of product temperature. The output variables are the variation of the temperature of the heating fluid and the pressure in the drying chamber. The effect of the starting values of the input variables and of the control interval has been investigated, resulting in the optimal configuration of the control system. An experimental investigation in a pilot-scale freeze-dryer was carried out to validate the proposed system. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
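The controller described above maps fuzzified temperature margins to changes in the heating-fluid temperature. The following minimal sketch shows the general mechanics of such a rule-based fuzzy controller with triangular membership functions; the single input, the membership breakpoints and the rule consequents are illustrative assumptions only and are not the article's actual rule base.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fluid_temp_change(dT):
    # dT = T_threshold - T_product: how far the product is below its critical temperature.
    mu_small = tri(dT, -1.0, 0.0, 2.0)    # product close to (or above) the limit
    mu_medium = tri(dT, 1.0, 3.0, 5.0)
    mu_large = tri(dT, 4.0, 8.0, 12.0)    # large safety margin
    # Hypothetical rule consequents: small margin -> cool, large margin -> heat.
    consequents = np.array([-2.0, 0.5, 2.0])          # change in heating-fluid T, deg C
    memberships = np.array([mu_small, mu_medium, mu_large])
    if memberships.sum() == 0:
        return 0.0
    # Defuzzify with a weighted average of the singleton consequents.
    return float(np.dot(memberships, consequents) / memberships.sum())

for margin in (0.5, 2.5, 6.0):
    print(f"margin {margin:.1f} C -> change heating fluid by {fluid_temp_change(margin):+.2f} C")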
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Y; Aileen, C; Kozono, D
Purpose: Quantification of volume changes on CBCT during SBRT for NSCLC may provide a useful radiological marker for radiation response and adaptive treatment planning, but the reproducibility of CBCT volume delineation is a concern. This study aims to quantify inter-scan/inter-observer variability in tumor volume delineation on CBCT. Methods: Twenty early-stage (stage I and II) NSCLC patients were included in this analysis. All patients were treated with SBRT with a median dose of 54 Gy in 3 to 5 fractions. Two physicians independently manually contoured the primary gross tumor volume on CBCTs taken immediately before SBRT treatment (Pre) and after the same SBRT treatment (Post). Absolute volume differences (AVD) were calculated between the Pre and Post CBCTs for a given treatment to quantify inter-scan variability, and then between the two observers for a given CBCT to quantify inter-observer variability. AVD was also normalized with respect to average volume to obtain relative volume differences (RVD). The Bland-Altman approach was used to evaluate variability. All statistics were calculated with SAS version 9.4. Results: The 95% limits of agreement (mean ± 2SD) on AVD and RVD measurements between Pre and Post scans were −0.32 cc to 0.32 cc and −0.5% to 0.5% versus −1.9 cc to 1.8 cc and −15.9% to 15.3% for the two observers, respectively. The 95% limits of agreement of AVD and RVD between the two observers were −3.3 cc to 2.3 cc and −42.4% to 28.2%, respectively. The greatest variability in inter-scan RVD was observed with very small tumors (< 5 cc). Conclusion: Inter-scan variability in RVD is greatest with small tumors. Inter-observer variability was larger than inter-scan variability. The 95% limit of agreement for inter-observer and inter-scan variability (∼15–30%) helps define a threshold for clinically meaningful change in tumor volume to assess SBRT response, with larger thresholds needed for very small tumors. Part of the work was funded by a Kaye award; Disclosure/Conflict of interest: Raymond H. Mak: Stock ownership: Celgene, Inc. Consulting: Boehringer-Ingelheim, Inc.
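The Bland-Altman limits of agreement used above are straightforward to compute. The sketch below shows the calculation on made-up tumor volumes (not the study's data), using the mean ± 2 SD convention stated in the abstract.

import numpy as np

obs1 = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 10.4])   # tumor volume, cc (observer 1, invented)
obs2 = np.array([1.5, 3.1, 1.0, 4.6, 2.6, 9.8])    # tumor volume, cc (observer 2, invented)

diff = obs1 - obs2                                  # absolute volume difference (AVD)
mean_vol = (obs1 + obs2) / 2.0
rvd = diff / mean_vol * 100.0                       # relative volume difference, %

for name, d in (("AVD (cc)", diff), ("RVD (%)", rvd)):
    bias, sd = d.mean(), d.std(ddof=1)
    # 95% limits of agreement as mean difference +/- 2 SD, as in the abstract.
    print(f"{name}: bias = {bias:.2f}, 95% limits of agreement = "
          f"{bias - 2 * sd:.2f} to {bias + 2 * sd:.2f}")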
Fatness and fitness: exposing the logic of evolutionary explanations for obesity.
Higginson, Andrew D; McNamara, John M; Houston, Alasdair I
2016-01-13
To explore the logic of evolutionary explanations of obesity we modelled food consumption in an animal that minimizes mortality (starvation plus predation) by switching between activities that differ in energy gain and predation. We show that if switching does not incur extra predation risk, the animal should have a single threshold level of reserves above which it performs the safe activity and below which it performs the dangerous activity. The value of the threshold is determined by the environmental conditions, implying that animals should have variable 'set points'. Selection pressure to prevent energy stores exceeding the optimal level is usually weak, suggesting that immediate rewards might easily overcome the controls against becoming overweight. The risk of starvation can have a strong influence on the strategy even when starvation is extremely uncommon, so the incidence of mortality during famine in human history may be unimportant for explanations for obesity. If there is an extra risk of switching between activities, the animal should have two distinct thresholds: one to initiate weight gain and one to initiate weight loss. Contrary to the dual intervention point model, these thresholds will be inter-dependent, such that altering the predation risk alters the location of both thresholds; a result that undermines the evolutionary basis of the drifty genes hypothesis. Our work implies that understanding the causes of obesity can benefit from a better understanding of how evolution shapes the mechanisms that control body weight. © 2016 The Authors.
Vickerman, Peter; Martin, Natasha K; Hickman, Matthew
2012-06-01
A recent systematic review observed that HIV prevalence amongst injectors is negligible (<1%) below a threshold HCV prevalence of 30%, but thereafter increases with HCV prevalence. We explore whether a model can reproduce these trends, what determines different epidemiological profiles and how this affects intervention impact. An HIV/HCV transmission model was developed. Univariate sensitivity analyses determined whether the model projected an HCV prevalence threshold below which HIV is negligible, and how different behavioural and epidemiological factors affect the threshold. Multivariate uncertainty analyses considered whether the model could reproduce the observed breadth of HIV/HCV epidemics, how specific behavioural patterns produce different epidemic profiles, and how this affects the impact of an intervention that reduces injecting risk by 30%. The model projected an HCV prevalence threshold, which varied depending on the heterogeneity in risk, mixing, and injecting duration in a setting. Multivariate uncertainty analyses showed the model could produce the same range of observed HIV/HCV epidemics. Variability in injecting transmission risk, degree of heterogeneity and injecting duration mainly determined different epidemic profiles. The intervention resulted in 50%/28% reduction in HIV incidence/prevalence and 37%/10% reduction in HCV incidence/prevalence over five years. For either infection, greater impact occurred in settings with lower prevalence of that infection and higher prevalence of the other infection. There are threshold levels of HCV prevalence below which HIV risk is negligible but these thresholds are likely to vary by setting. A setting's HIV and HCV prevalence may give insights into IDU risk behaviour and intervention impact. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Rainfall control of debris-flow triggering in the Réal Torrent, Southern French Prealps
NASA Astrophysics Data System (ADS)
Bel, Coraline; Liébault, Frédéric; Navratil, Oldrich; Eckert, Nicolas; Bellot, Hervé; Fontaine, Firmin; Laigle, Dominique
2017-08-01
This paper investigates the occurrence of debris flows due to rainfall forcing in the Réal Torrent, a very active debris flow-prone catchment in the Southern French Prealps. The study is supported by a 4-year record of flow responses and rainfall events, from three high-frequency monitoring stations equipped with geophones, flow stage sensors, digital cameras, and rain gauges measuring rainfall at 5-min intervals. The classic rainfall intensity-duration (ID) threshold method was used, and a specific emphasis was placed on the objective identification of rainfall events, as well as on the discrimination of flow responses observed above the ID threshold. The results show that parameters used to identify rainfall events significantly affect the ID threshold and are likely to explain part of the threshold variability reported in the literature. This is especially the case regarding the minimum duration of rain interruption (MDRI) between two distinct rainfall events. In the Réal Torrent, a 3-h MDRI appears to be representative of the local rainfall regime. A systematic increase in the ID threshold with drainage area was also observed from the comparison of the three stations, as well as from the compilation of data from experimental debris-flow catchments. A logistic regression used to separate flow responses above the ID threshold revealed that the best predictors are the 5-min maximum rainfall intensity, the 48-h antecedent rainfall, the rainfall amount and the number of days elapsed since the end of winter (used as a proxy of sediment supply). This emphasizes the critical role played by short intense rainfall sequences that are only detectable using high time-resolution rainfall records. It also highlights the significant influence of antecedent conditions and the seasonal fluctuations of sediment supply.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT data to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method in producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques in the 3-D reconstruction as well as cautionary language that should accompany the 3-D images.
Monitoring Start of Season in Alaska
NASA Astrophysics Data System (ADS)
Robin, J.; Dubayah, R.; Sparrow, E.; Levine, E.
2006-12-01
In biomes that have distinct winter seasons, start of spring phenological events, specifically timing of budburst and green-up of leaves, coincides with transpiration. Seasons leave annual signatures that reflect the dynamic nature of the hydrologic cycle and link the different spheres of the Earth system. This paper evaluates whether continuity between AVHRR and MODIS normalized difference vegetation index (NDVI) is achievable for monitoring land surface phenology, specifically start of season (SOS), in Alaska. Additionally, two thresholds, one based on NDVI and the other on accumulated growing degree-days (GDD), are compared to determine which most accurately predicts SOS for Fairbanks. Ratio of maximum greenness at SOS was computed from biweekly AVHRR and MODIS composites for 2001 through 2004 for Anchorage and Fairbanks regions. SOS dates were determined from annual green-up observations made by GLOBE students. Results showed that different processing as well as spectral characteristics of each sensor restrict continuity between the two datasets. MODIS values were consistently higher and had less inter-annual variability during the height of the growing season than corresponding AVHRR values. Furthermore, a threshold of 131-175 accumulated GDD was a better predictor of SOS for Fairbanks than a NDVI threshold applied to AVHRR and MODIS datasets. The NDVI threshold was developed from biweekly AVHRR composites from 1982 through 2004 and corresponding annual green-up observations at University of Alaska-Fairbanks (UAF). The GDD threshold was developed from 20+ years of historic daily mean air temperature data and the same green-up observations. SOS dates computed with the GDD threshold most closely resembled actual green-up dates observed by GLOBE students and UAF researchers. Overall, biweekly composites and effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska.
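The accumulated growing-degree-day (GDD) threshold used above (131-175 GDD for Fairbanks) is simple to apply once daily mean temperatures are available. The sketch below illustrates the calculation with a synthetic temperature series; the base temperature of 0 degrees C is an assumption, not a value stated in the abstract.

import numpy as np

BASE_T = 0.0           # deg C, assumed base temperature for GDD accumulation
GDD_THRESHOLD = 131.0  # lower end of the 131-175 GDD window reported above

rng = np.random.default_rng(0)
days = np.arange(1, 181)                                      # day of year, Jan-Jun
tmean = -15.0 + 0.25 * days + rng.normal(0, 3, days.size)     # synthetic daily mean T, deg C

gdd = np.cumsum(np.maximum(tmean - BASE_T, 0.0))              # accumulated growing degree-days
sos_day = int(days[np.argmax(gdd >= GDD_THRESHOLD)])          # first day reaching the threshold
print(f"Predicted start of season: day of year {sos_day} "
      f"({gdd[sos_day - 1]:.0f} accumulated GDD)")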
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with the highest eigenvalues (greater than one) were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
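A rough sketch of the PCA-derived threshold idea described above follows: components of the density-histogram PCA with eigenvalues greater than one are summed, and the extrema of the summed curve are read off as thresholds. The histograms are synthetic, and the exact preprocessing (binning, standardization per bin) is an assumption rather than the authors' pipeline.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
hu_bins = np.arange(-1000, 0, 10)                     # HU bin centres (100 bins)
n_subjects = 20

# Synthetic per-subject lung density histograms (rows normalized to sum to 1).
centres = rng.normal(-850, 40, n_subjects)
hists = np.exp(-0.5 * ((hu_bins[None, :] - centres[:, None]) / 120.0) ** 2)
hists /= hists.sum(axis=1, keepdims=True)

# Standardize each HU bin so an eigenvalue-greater-than-one cut is meaningful (assumed step).
X = (hists - hists.mean(axis=0)) / (hists.std(axis=0) + 1e-12)

pca = PCA().fit(X)
keep = pca.explained_variance_ > 1.0                  # components with eigenvalue > 1
curve = pca.components_[keep].sum(axis=0)             # summed principal-component curve over HU bins

lower = hu_bins[np.argmin(curve)]
upper = hu_bins[np.argmax(curve)]
print(f"PCA-derived density thresholds: {min(lower, upper)} HU and {max(lower, upper)} HU")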
Mechanisms of breathing instability in patients with obstructive sleep apnea.
Younes, Magdy; Ostrowski, Michele; Atkar, Raj; Laprairie, John; Siemens, Andrea; Hanly, Patrick
2007-12-01
The response to chemical stimuli (chemical responsiveness) and the increases in respiratory drive required for arousal (arousal threshold) and for opening the airway without arousal (effective recruitment threshold) are important determinants of ventilatory instability and, hence, severity of obstructive apnea. We measured these variables in 21 obstructive apnea patients (apnea-hypopnea index 91 ± 24 h⁻¹) while on continuous positive airway pressure. During sleep, pressure was intermittently reduced (dial down) to induce severe hypopneas. Dial downs were done on room air and following approximately 30 s of breathing hypercapneic and/or hypoxic mixtures, which induced a range of ventilatory stimulation before dial down. Ventilation just before dial down and flow during dial down were measured. Chemical responsiveness, estimated as the percent increase in ventilation during the 5th breath following administration of 6% CO2 combined with approximately 4% desaturation, was large (187 ± 117%). Arousal threshold, estimated as the percent increase in ventilation associated with a 50% probability of arousal, ranged from 40% to >268% and was <120% in 12/21 patients, indicating that in many patients arousal occurs with modest changes in chemical drive. Effective recruitment threshold, estimated as percent increase in pre-dial-down ventilation associated with a significant increase in dial-down flow, ranged from zero to >174% and was <110% in 12/21 patients, indicating that in many patients reflex dilatation occurs with modest increases in drive. The two thresholds were not correlated. In most OSA patients, airway patency may be maintained with only modest increases in chemical drive, but instability results because of a low arousal threshold and a brisk increase in drive following brief reduction in alveolar ventilation.
What Makes a Caseload (Un) Manageable? School-Based Speech-Language Pathologists Speak
ERIC Educational Resources Information Center
Katz, Lauren A.; Maag, Abby; Fallon, Karen A.; Blenkarn, Katie; Smith, Megan K.
2010-01-01
Purpose: Large caseload sizes and a shortage of speech-language pathologists (SLPs) are ongoing concerns in the field of speech and language. This study was conducted to identify current mean caseload size for school-based SLPs, a threshold at which caseload size begins to be perceived as unmanageable, and variables contributing to school-based…
ERIC Educational Resources Information Center
Shobo, Yetty; Wong, Jen D.; Bell, Angie
2014-01-01
Regression discontinuity (RD), an "as good as randomized" research design, has become increasingly prominent in education research in recent years; the design brings eligible quasi-experimental designs as close as possible to experimental designs by using a stated threshold on a continuous baseline variable to assign individuals to a…
Current research issues related to post-wildfire runoff and erosion processes
John A. Moody; Richard A. Shakesby; Peter R. Robichaud; Susan H. Cannon; Deborah A. Martin
2013-01-01
Research into post-wildfire effects began in the United States more than 70 years ago and only later extended to other parts of the world. Post-wildfire responses are typically transient, episodic, variable in space and time, dependent on thresholds, and involve multiple processes measured by different methods. These characteristics tend to hinder research progress, but...
ERIC Educational Resources Information Center
Eaton, Karen M.; Messer, Stephen C.; Garvey Wilson, Abigail L.; Hoge, Charles W.
2006-01-01
The objectives of this study were to generate precise estimates of suicide rates in the military while controlling for factors contributing to rate variability such as demographic differences and classification bias, and to develop a simple methodology for the determination of statistically derived thresholds for detecting significant rate…
Squeezing Interval Change From Ordinal Panel Data: Latent Growth Curves With Ordinal Outcomes
ERIC Educational Resources Information Center
Mehta, Paras D.; Neale, Michael C.; Flay, Brian R.
2004-01-01
A didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the…
Estimating daily climatologies for climate indices derived from climate model data and observations
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof
2015-01-01
Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, as are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
NASA Astrophysics Data System (ADS)
Weiss, S. B.; Bunn, A. G.; Tran, T. J.; Bruening, J. M.; Salzer, M. W.; Hughes, M. K.
2016-12-01
The interpretation of ring-width patterns in high elevation Great Basin bristlecone pine is hampered by the presence of sharp ecophysiological gradients that can lead to mixed growth signals depending on the topographic setting of individual trees. We have identified a temperature threshold near the upper forest border above which trees are limited more strongly by temperature, and below which trees tend to be moisture limited. We combined temperature loggers and GIS modeling at a scale of tens of meters to examine trees with different limiting factors. We found that the dual-signal patterns in radial growth can be partially explained by the topoclimate setting of individual trees: trees at locations with growing-season mean temperatures below about 7.4-8°C were more strongly associated with temperature variability than with moisture availability. Using this threshold we show that it is possible to build both temperature and drought reconstructions over the common era from bristlecone pine near the alpine treeline. While our findings might allow for a better physiological understanding of bristlecone pine growth, they also raise questions about the interpretation of temperature reconstructions given the threshold nature of the growth response and the dynamic nature of the treeline ecotone over past millennia.
Feldthusen, Caroline; Grimby-Ekman, Anna; Forsblad-d'Elia, Helena; Jacobsson, Lennart; Mannerkorpi, Kaisa
2016-04-28
To investigate the impact of disease-related aspects on long-term variations in fatigue in persons with rheumatoid arthritis. Observational longitudinal study. Sixty-five persons with rheumatoid arthritis, age range 20-65 years, were invited to a clinical examination at 4 time-points during the 4 seasons. Outcome measures were: general fatigue rated on a visual analogue scale (0-100) and aspects of fatigue assessed by the Bristol Rheumatoid Arthritis Fatigue Multidimensional Questionnaire. Disease-related variables were: disease activity (erythrocyte sedimentation rate), pain threshold (pressure algometer), physical capacity (six-minute walk test), pain (visual analogue scale (0-100)), depressive mood (Hospital Anxiety and Depression scale, depression subscale), personal factors (age, sex, body mass index) and season. Multivariable regression analyses (linear mixed-effects models) were applied. The strongest explanatory factors for all fatigue outcomes, when recorded at the same time-point as fatigue, were pain threshold and depressive mood. Self-reported pain was an explanatory factor for physical aspects of fatigue, and body mass index contributed to explaining the consequences of fatigue on everyday living. For predicting later fatigue, pain threshold and depressive mood were the strongest predictors. Pain threshold and depressive mood were the most important factors for fatigue in persons with rheumatoid arthritis.
Gas composition sensing using carbon nanotube arrays
NASA Technical Reports Server (NTRS)
Li, Jing (Inventor); Meyyappan, Meyya (Inventor)
2008-01-01
A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, …, N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
Kohli, Preeti; Storck, Kristina A.; Schlosser, Rodney J.
2016-01-01
Differences in testing modalities and cut-points used to define olfactory dysfunction contribute to the wide variability in estimating the prevalence of olfactory dysfunction in chronic rhinosinusitis (CRS). The aim of this study is to report the prevalence of olfactory impairment using each component of the Sniffin’ Sticks test (threshold, discrimination, identification, and total score) with age-adjusted and ideal cut-points from normative populations. Patients meeting diagnostic criteria for CRS were enrolled from rhinology clinics at a tertiary academic center. Olfaction was assessed using the Sniffin’ Sticks test. The study population consisted of 110 patients. The prevalence of normosmia, hyposmia, and anosmia using total Sniffin’ Sticks score was 41.8%, 20.0%, and 38.2% using age-appropriate cut-points and 20.9%, 40.9%, and 38.2% using ideal cut-points. Olfactory impairment estimates for each dimension mirrored these findings, with threshold yielding the highest values. Threshold, discrimination, and identification were also found to be significantly correlated with each other (P < 0.001). In addition, computed tomography scores, asthma, allergy, and diabetes were found to be associated with olfactory dysfunction. In conclusion, the prevalence of olfactory dysfunction depends upon the olfactory dimension tested and upon whether age-adjusted cut-points are used. The method of olfactory testing should be chosen based upon specific clinical and research goals. PMID:27469973
An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.
Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong
2016-01-01
Cerebral vessel segmentation is essential and helpful for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold, representing the lower gray boundary of the target object obtained by maximum intensity projection (MIP), is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice similarity coefficient than the global-threshold-based method and the localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels.
The correlation dimension: a useful objective measure of the transient visual evoked potential?
Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J
2008-01-14
Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
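The correlation dimension discussed above is commonly estimated with a Grassberger-Procaccia style procedure: embed the signal in delay coordinates, compute the correlation sum C(r), and take the slope of log C versus log r. The sketch below uses a synthetic quasi-periodic test signal, and the embedding dimension, delay and radius range are illustrative assumptions; a real VEP analysis would need care with noise, stationarity and the choice of scaling region.

import numpy as np

def correlation_dimension(x, emb_dim=4, delay=2):
    # Delay-coordinate embedding of the 1-D signal x.
    n = len(x) - (emb_dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(emb_dim)])
    # Pairwise distances between embedded points (upper triangle only).
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]
    # Correlation sum C(r) over a range of radii, then slope of log C vs log r.
    radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)[3:-3]
    c = np.array([np.mean(d < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

t = np.linspace(0, 20 * np.pi, 600)
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)      # quasi-periodic test signal (not a VEP)
print(f"Estimated correlation dimension: {correlation_dimension(signal):.2f}")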
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability that an ecosystem is groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with that of a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability that an ecosystem is groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA, to develop a systematic approach for the identification of GDEs, and it is then applied across the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures.
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
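The random-forest versus single-regression-tree comparison described above is easy to reproduce in outline. The sketch below uses synthetic stand-ins for the geospatial predictors (elevation, slope, NDVI, precipitation); neither the data nor the predictor set are the study's actual layers.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))                               # [elevation, slope, ndvi, precip] (synthetic)
wtd = 5 + 3 * X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, wtd, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rt = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)

for name, model in (("random forest", rf), ("single tree", rt)):
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5   # true prediction error on held-out data
    print(f"{name}: test RMSE = {rmse:.2f}")

# Variable importance estimates from the ensemble, analogous to those contrasted in the study.
for feat, imp in zip(["elevation", "slope", "ndvi", "precip"], rf.feature_importances_):
    print(f"variable importance {feat}: {imp:.2f}")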
Relativistic (2,3)-threshold quantum secret sharing
NASA Astrophysics Data System (ADS)
Ahmadi, Mehdi; Wu, Ya-Dong; Sanders, Barry C.
2017-09-01
In quantum secret sharing protocols, the usual presumption is that the distribution of quantum shares and players' collaboration are both performed inertially. Here we develop a quantum secret sharing protocol that relaxes these assumptions wherein we consider the effects due to the accelerating motion of the shares. Specifically, we solve the (2,3)-threshold continuous-variable quantum secret sharing in noninertial frames. To this aim, we formulate the effect of relativistic motion on the quantum field inside a cavity as a bosonic quantum Gaussian channel. We investigate how the fidelity of quantum secret sharing is affected by nonuniform motion of the quantum shares. Furthermore, we fully characterize the canonical form of the Gaussian channel, which can be utilized in quantum-information-processing protocols to include relativistic effects.
Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold
NASA Astrophysics Data System (ADS)
Anosov, O. L.; Butkovskii, O. Ya.; Kadtke, J.; Kravtsov, Yu. A.; Protopopescu, V.
1997-05-01
We report preliminary results on describing the dependence of heart rhythm variability on the stress level by using qualitative, low dimensional models. The reconstruction of macroscopic heart models yielding cardiocycle (RR-interval) durations was based on actual clinical data. Our results show that the coefficients of the low dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy indirect method for determining the anaerobic threshold. This method could replace costly and invasive traditional methods such as gas analysis and blood tests.
Froud, Robert; Abel, Gary
2014-01-01
Background Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the sum of the squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
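The sum-of-squares estimator described above amounts to choosing, among all candidate cut-points, the one minimising (1 - sensitivity)² + (1 - specificity)², i.e. the point closest to the top-left corner of ROC space. The sketch below illustrates this on simulated change scores and responder labels, not trial data.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
n = 400
responder = rng.integers(0, 2, n)                      # external anchor: improved (1) or not (0)
change = np.where(responder == 1,
                  rng.normal(8, 5, n),                 # simulated change score if improved
                  rng.normal(2, 5, n))                 # simulated change score if not improved

fpr, tpr, thresholds = roc_curve(responder, change)
dist_sq = (1 - tpr) ** 2 + fpr ** 2                    # (1 - sensitivity)^2 + (1 - specificity)^2
best = np.argmin(dist_sq)                              # cut-point closest to the top-left corner
print(f"MIC threshold = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")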
Lucente, Giuseppe; Lam, Steven; Schneider, Heike; Picht, Thomas
2018-02-01
Non-invasive pre-surgical mapping of eloquent brain areas with navigated transcranial magnetic stimulation (nTMS) is a useful technique linked to the improvement of surgical planning and patient outcomes. The stimulator output intensity and subsequent resting motor threshold (rMT) determination are based on the motor-evoked potential (MEP) elicited in the target muscle with an amplitude above a predetermined threshold of 50 μV. However, a subset of patients is unable to achieve complete relaxation in the target muscles, resulting in false positives that jeopardize mapping validity with conventional MEP determination protocols. Our aim is to explore the feasibility and reproducibility of a novel mapping approach that investigates how an increase of the MEP amplitude threshold to 300 and 500 μV affects subsequent motor maps. Seven healthy subjects underwent motor mapping with nTMS. rMT was calculated with the conventional methodology in conjunction with experimental 300- and 500-μV MEP amplitude thresholds. Motor mapping was performed with 105% of rMT stimulator intensity using the first dorsal interosseous (FDI) as the target muscle. Motor mapping was possible in all subjects with both the conventional and experimental setups. Motor area maps with a conventional 50-μV threshold showed poor correlation with 300-μV maps (α = 0.446, p < 0.001), but showed excellent consistency with 500-μV motor area maps (α = 0.974, p < 0.001). MEP latencies were significantly less variable (23 ms for 50 μV vs. 23.7 ms for 300 μV vs. 23.7 ms for 500 μV, p < 0.001). A slight but significant increase of the electric field (EF) value was found (EF: 60.8 V/m vs. 64.8 V/m vs. 66 V/m, p < 0.001). Our study demonstrates the feasibility of increasing the MEP detection threshold to 500 μV in rMT determination and motor area mapping with nTMS without losing precision.
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
NASA Astrophysics Data System (ADS)
Abancó, Clàudia; Hürlimann, Marcel; Moya, José; Berenguer, Marc
2016-10-01
Torrential flows like debris flows or debris floods are fast movements formed by a mix of water and different amounts of unsorted solid material. They generally occur in steep torrents and pose high risk in mountainous areas. Rainfall is their most common triggering factor and the analysis of the critical rainfall conditions is a fundamental research task. Due to their wide use in warning systems, rainfall thresholds for the triggering of torrential flows are an important outcome of such analysis and are empirically derived using data from past events. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, rainfall data for 25 torrential flows ("TRIG rainfalls") were recorded, with a 5-min sampling frequency. Another 142 rainfalls that did not trigger torrential flows ("NonTRIG rainfalls") were also collected and analyzed. The goal of this work was threefold: (i) characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential flows and others that did not; (ii) define and test Intensity-Duration (ID) thresholds using rainfall data measured inside the catchment with different techniques; (iii) analyze how the criterion used for defining the rainfall duration and the spatial variability of rainfall influences the value obtained for the thresholds. The statistical analysis of the rainfall characteristics showed that the parameters that best discriminate the TRIG and NonTRIG rainfalls are the rainfall intensities, the mean rainfall and the total rainfall amount. The antecedent rainfall was not significantly different between TRIG and NonTRIG rainfalls, as can be expected when the source material is very pervious (a sandy glacial soil in the study site). Thresholds were derived from data collected at one rain gauge located inside the catchment. Two different methods were applied to calculate the duration and intensity of rainfall: (i) using total duration, Dtot, and mean intensity, Imean, of the rainfall event, and (ii) using floating durations, D, and intensities, Ifl, based on the maximum values over floating periods of different duration. The resulting thresholds are considerably different (Imean = 6.20 Dtot^(-0.36) and Ifl_90% = 5.49 D^(-0.75), respectively), showing a strong dependence on the applied methodology. On the other hand, the definition of the thresholds is affected by several types of uncertainties. Data from both rain gauges and weather radar were used to analyze the uncertainty associated with the spatial variability of the triggering rainfalls. The analysis indicates that the precipitation recorded by the nearby rain gauges can introduce major uncertainties, especially for convective summer storms. Thus, incorporating radar rainfall can significantly improve the accuracy of the measured triggering rainfall. Finally, thresholds were also derived according to three different criteria for the definition of the duration of the triggering rainfall: (i) the duration until the peak intensity, (ii) the duration until the end of the rainfall; and (iii) the duration until the trigger of the torrential flow. An important contribution of this work is the assessment of the threshold relationships obtained using the third definition of duration. Moreover, important differences are observed in the obtained thresholds, showing that ID relationships are significantly dependent on the applied methodology.
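Applying an intensity-duration threshold of the power-law form reported above is a one-line comparison per event. The sketch below uses the two coefficient pairs from the abstract; the units (mm/h for intensity, hours for duration) and the example events are assumptions for illustration.

def id_threshold(duration_h, a, b):
    """Power-law ID threshold I = a * D**b (b is negative)."""
    return a * duration_h ** b

# Coefficients from the abstract: total-duration/mean-intensity definition,
# and floating-duration (90%) definition.
THRESHOLDS = {"mean-intensity": (6.20, -0.36), "floating-90%": (5.49, -0.75)}

events = [  # (duration in hours, intensity in mm/h) -- hypothetical events
    (0.5, 12.0),
    (3.0, 4.0),
    (10.0, 1.5),
]

for dur, inten in events:
    flags = {name: inten >= id_threshold(dur, a, b) for name, (a, b) in THRESHOLDS.items()}
    print(f"D = {dur:4.1f} h, I = {inten:4.1f} mm/h -> exceeds threshold: {flags}")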
Shah, Zahir; Hu, Man L; Qiu, Zheng Y; Zhou, Fei Y; Zeng, Jie; Wan, Juan; Wang, Shao W; Zhang, Wei; Ding, Ming X
2016-03-01
To investigate physiologic and biochemical effects of electroacupuncture and dexmedetomidine administration to goats. 30 healthy adult goats. Goats were allotted to 5 groups (6 goats/group) and received electroacupuncture, dexmedetomidine (5 or 20 μg/kg, IM), electroacupuncture plus dexmedetomidine (5 μg/kg, IM), or saline (0.9% NaCl) solution (IM [control treatment]). Pain threshold, cardiorespiratory effects, rectal temperature, and hematologic and biochemical variables were assessed. Dexmedetomidine (20 μg/kg) increased pain threshold and decreased heart rate, respiratory rate, and rectal temperature. Pain threshold of goats receiving electroacupuncture plus dexmedetomidine (5 μg/kg) was higher than that of goats receiving electroacupuncture or of goats receiving dexmedetomidine at 5 μg/kg at 30 minutes, but did not differ from that of goats receiving dexmedetomidine at 20 μg/kg. Compared with goats administered dexmedetomidine at 20 μg/kg, goats receiving electroacupuncture plus dexmedetomidine at 5 μg/kg had a higher heart rate from 30 to 60 minutes and a higher respiratory rate from 5 to 60 minutes. Electroacupuncture plus dexmedetomidine (5 μg/kg) did not affect rectal temperature. Serum glucose concentrations of goats receiving electroacupuncture plus dexmedetomidine (5 μg/kg) were higher than for goats receiving dexmedetomidine at 5 μg/kg at 30 minutes but not for goats receiving dexmedetomidine at 20 μg/kg. Creatinine and BUN concentrations, alanine or aspartate aminotransferase activities, and hematologic variables of treated goats did not change. Electroacupuncture in combination with a low dose of dexmedetomidine (5 μg/kg, IM) administered to goats provided antinociception.
The Management Standards Indicator Tool and evaluation of burnout.
Ravalier, J M; McVicar, A; Munn-Giddings, C
2013-03-01
Psychosocial hazards in the workplace can impact upon employee health. The UK Health and Safety Executive's (HSE) Management Standards Indicator Tool (MSIT) appears to have utility in relation to health impacts but we were unable to find studies relating it to burnout. To explore the utility of the MSIT in evaluating risk of burnout assessed by the Maslach Burnout Inventory-General Survey (MBI-GS). This was a cross-sectional survey of 128 borough council employees. MSIT data were analysed according to MSIT and MBI-GS threshold scores and by using multivariate linear regression with MBI-GS factors as dependent variables. MSIT factor scores were gradated according to categories of risk of burnout according to published MBI-GS thresholds, and identified priority workplace concerns as demands, relationships, role and change. These factors also featured as significant independent variables, with control, in outcomes of the regression analysis. Exhaustion was associated with demands and control (adjusted R² = 0.331); cynicism was associated with change, role and demands (adjusted R² = 0.429); and professional efficacy was associated with managerial support, role, control and demands (adjusted R² = 0.413). MSIT analysis generally has congruence with MBI-GS assessment of burnout. The identification of control within regression models but not as a priority concern in the MSIT analysis could suggest an issue of the setting of the MSIT thresholds for this factor, but verification requires a much larger study. Incorporation of relationship, role and change into the MSIT, missing from other conventional tools, appeared to add to its validity.
Towards a clinically informed, data-driven definition of elderly onset epilepsy.
Josephson, Colin B; Engbers, Jordan D T; Sajobi, Tolulope T; Jette, Nathalie; Agha-Khani, Yahya; Federico, Paolo; Murphy, William; Pillay, Neelan; Wiebe, Samuel
2016-02-01
Elderly onset epilepsy represents a distinct subpopulation that has received considerable attention due to the unique features of the disease in this age group. Research into this particular patient group has been limited by a lack of a standardized definition and understanding of the attributes associated with elderly onset epilepsy. We used a prospective cohort database to examine differences in patients stratified according to age of onset. Linear support vector machine learning incorporating all significant variables was used to predict age of onset according to prespecified thresholds. Sensitivity and specificity were calculated and plotted in receiver-operating characteristic (ROC) space. Feature coefficients achieving an absolute value of 0.25 or greater were graphed by age of onset to define how they vary with time. We identified 2,449 patients, of whom 149 (6%) had an age of seizure onset of 65 or older. Fourteen clinical variables had an absolute predictive value of at least 0.25 at some point over the age of epilepsy-onset spectrum. Area under the curve in ROC space was maximized between ages of onset of 65 and 70. Features identified through machine learning were frequently threshold specific and were similar, but not identical, to those revealed through simple univariable and multivariable comparisons. This study provides an empirical, clinically informed definition of "elderly onset epilepsy." If validated, an age threshold of 65-70 years can be used for future studies of elderly onset epilepsy and permits targeted interventions according to the patient's age of onset. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
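A minimal sketch of that threshold-scanning idea follows: train a linear SVM at each candidate onset-age cutoff, record sensitivity and specificity, and inspect which standardized feature weights exceed |0.25|. The feature names and the synthetic data are placeholders, not the study's actual variables or pipeline.

```python
# Schematic threshold scan with a linear SVM; all data are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
feature_names = ["stroke", "tumour", "dementia", "psychiatric_comorbidity", "lesional_mri"]
X = rng.normal(size=(2449, len(feature_names)))            # hypothetical clinical features
onset_age = np.clip(rng.normal(40, 20, size=2449), 1, 95)  # hypothetical onset ages

for cutoff in range(55, 76, 5):                            # candidate "elderly onset" cutoffs
    y = (onset_age >= cutoff).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    w = clf.named_steps["linearsvc"].coef_.ravel()
    strong = [f for f, c in zip(feature_names, w) if abs(c) >= 0.25]
    print(f"cutoff {cutoff}: sensitivity={sens:.2f} specificity={spec:.2f} |coef|>=0.25: {strong}")
```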
Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2016-06-01
Since its introduction in 1954, the Soil Conservation Service curve number (SCS-CN) method has become the standard tool, in practice, for estimating an event-based rainfall-runoff response. However, because of its empirical origins, the SCS-CN method is restricted to certain geographic regions and land use types. Moreover, it does not describe the spatial variability of runoff. To move beyond these limitations, we present a new theoretical framework for spatially lumped, event-based rainfall-runoff modeling. In this framework, we describe the spatially lumped runoff model as a point description of runoff that is upscaled to a watershed area based on probability distributions that are representative of watershed heterogeneities. The framework accommodates different runoff concepts and distributions of heterogeneities, and in doing so, it provides an implicit spatial description of runoff variability. Heterogeneity in storage capacity and soil moisture is the basis for upscaling a point runoff response and linking ecohydrological processes to runoff modeling. For the framework, we consider two different runoff responses for fractions of the watershed area: "prethreshold" and "threshold-excess" runoff. These occur before and after infiltration exceeds a storage capacity threshold. Our application of the framework results in a new model (called SCS-CNx) that extends the SCS-CN method with the prethreshold and threshold-excess runoff mechanisms and an implicit spatial description of runoff. We show proof of concept in four forested watersheds and further show that the resulting model may better represent geographic regions and site types that previously have been beyond the scope of the traditional SCS-CN method.
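For reference, the point relationship that this framework generalizes is the classical curve number equation; a minimal sketch is given below. The 0.2 initial-abstraction ratio and the example numbers are conventional defaults used for illustration, not values taken from this paper.

```python
# Classical SCS-CN event runoff: Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
# Units in mm; S is derived from the curve number CN.
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Event runoff depth Q (mm) for rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted; no threshold-excess runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 60 mm storm on a watershed with CN = 75
print(round(scs_cn_runoff(60.0, 75.0), 1), "mm of runoff")
```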
Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim
2016-12-01
Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and the resulting delay in treatment and transport may have negative effects on the patient. Our study therefore aims to provide a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables for peripheral venous cannulation in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under their respective receiver operating characteristic curves (AUC>0.6). A total of 762 intravenous cannulations were included. In all, 22% of punctures failed on the first attempt and 13% exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent four-factor model (the failure-model factors plus vein visibility with tourniquet) for exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. Of the numerous factors subjectively perceived as possibly influencing cannulation, only the universal factors of lighting, vein visibility and palpability, none of them exclusive to emergency care, proved to be valid predictors of cannulation failure and of exceedance of the 2 min threshold.
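A hedged sketch of that modelling step is shown below: fit a logistic model for first-attempt failure from three illustrative binary predictors and score it by ROC AUC. The predictor names and the synthetic data are assumptions for illustration, not the study's actual variables.

```python
# Schematic logistic model for cannulation failure, evaluated by AUC; synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 762                                            # number of cannulations in the study
df = pd.DataFrame({
    "vein_visible": rng.integers(0, 2, n),         # hypothetical binary predictors
    "vein_palpable": rng.integers(0, 2, n),
    "poor_lighting": rng.integers(0, 2, n),
})
logit_p = -1.8 + 1.0 * (1 - df.vein_visible) + 0.9 * (1 - df.vein_palpable) + 0.7 * df.poor_lighting
df["failure"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("failure ~ vein_visible + vein_palpable + poor_lighting", data=df).fit(disp=0)
print(model.summary())
print("AUC:", round(roc_auc_score(df["failure"], model.predict(df)), 2))  # study reported 0.76
```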
Fernández-Muñoz, Juan J; Palacios-Ceña, María; Cigarán-Méndez, Margarita; Ortega-Santiago, Ricardo; de-la-Llave-Rincón, Ana I; Salom-Moreno, Jaime; Fernández-de-las-Peñas, César
2016-02-01
To investigate potential relationships of clinical (age, function, side of pain, years with pain), physical (cervical range of motion, pinch grip force), psychological (depression), and neurophysiological (pressure and thermal pain thresholds) outcomes with hand pain intensity in carpal tunnel syndrome (CTS). Two hundred and twenty-four (n=224) women with CTS were recruited. Demographic data, duration of symptoms, function and severity of the disease, pain intensity, depression, cervical range of motion, pinch tip grip force, heat/cold pain thresholds (HPT/CPT), and pressure pain thresholds (PPT) were collected. Correlation and regression analyses were performed to determine the associations among those variables and the proportion of explained variance in hand pain intensity. Significant negative correlations existed between the intensity of pain and PPTs over the radial nerve, C5/C6 zygapophyseal joint, carpal tunnel and tibialis anterior muscle, HPT over the carpal tunnel, cervical extension and lateral-flexion, and thumb-middle, fourth, and little finger pinch tip forces. Significant positive correlations of the intensity of hand pain with function and depression were also observed. Stepwise regression analyses revealed that function, thumb-middle finger pinch, thumb-little finger pinch, depression, PPT over the radial nerve, PPT over the carpal tunnel, and HPT over the carpal tunnel were significant predictors of the intensity of hand pain (R²=0.364; adjusted R²=0.343; F=16.87; P<0.001). This study showed that about 36% of the variance of pain intensity was associated with clinical (function), neurophysiological (localized PPT and HPT), psychological (depression), and physical (finger pinch tip force) outcomes in women with chronic CTS.
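Stepwise procedures of the kind reported differ between statistical packages; the sketch below is a generic forward-selection loop (p-to-enter of 0.05, illustrative names only) that conveys the shape of such an analysis rather than the study's exact algorithm.

```python
# Generic forward stepwise selection for an OLS model; entry criterion is the p-value
# of the candidate term. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list, p_enter: float = 0.05):
    selected, remaining = [], list(candidates)
    improved = True
    while improved and remaining:
        improved = False
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X, missing="drop").fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] < p_enter:            # add the most significant candidate, if it qualifies
            selected.append(best)
            remaining.remove(best)
            improved = True
    return selected
```

In use, one would call something like forward_stepwise(df, "pain_intensity", candidate_columns); entry/removal rules (p thresholds, adjusted R², bidirectional elimination) vary between packages, so this is only the general pattern.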
Secure Continuous Variable Teleportation and Einstein-Podolsky-Rosen Steering
NASA Astrophysics Data System (ADS)
He, Qiongyi; Rosales-Zárate, Laura; Adesso, Gerardo; Reid, Margaret D.
2015-10-01
We investigate the resources needed for secure teleportation of coherent states. We extend continuous variable teleportation to include quantum teleamplification protocols that allow nonunity classical gains and a preamplification or postattenuation of the coherent state. We show that, for arbitrary Gaussian protocols and a significant class of Gaussian resources, two-way steering is required to achieve a teleportation fidelity beyond the no-cloning threshold. This provides an operational connection between Gaussian steerability and secure teleportation. We present practical recipes suggesting that heralded noiseless preamplification may enable high-fidelity heralded teleportation, using minimally entangled yet steerable resources.
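For orientation, the no-cloning benchmark referred to here can be made concrete in the simplest special case. For the standard unity-gain protocol with a two-mode squeezed vacuum resource (a narrower setting than the general Gaussian protocols analyzed in the paper), the coherent-state teleportation fidelity takes the textbook form

\[ F = \frac{1}{1 + e^{-2r}}, \qquad F > F_{\mathrm{nc}} = \tfrac{2}{3} \;\Longleftrightarrow\; e^{-2r} < \tfrac{1}{2} \;\Longleftrightarrow\; r > \tfrac{1}{2}\ln 2 \approx 0.35, \]

i.e., slightly more than 3 dB of two-mode squeezing, compared with the classical measure-and-prepare benchmark of F = 1/2.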
Forced and Unforced Variability of Twentieth Century North American Droughts and Pluvials
NASA Technical Reports Server (NTRS)
Cook, Benjamin I.; Cook, Edward R.; Anchukaitis, Kevin J.; Seager, Richard; Miller, Ron L.
2010-01-01
Research on the forcing of drought and pluvial events over North America is dominated by general circulation model experiments that often have operational limitations (e.g., computational expense, ability to simulate relevant processes, etc.). We use a statistically based modeling approach to investigate sea surface temperature (SST) forcing of the twentieth century pluvial (1905-1917) and drought (1932-1939, 1948-1957, 1998-2002) events. A principal component (PC) analysis of the Palmer Drought Severity Index (PDSI) from the North American Drought Atlas separates the drought variability into five leading modes accounting for 62% of the underlying variance. Over the full period spanning these events (1900-2005), the first three PCs significantly correlate with SSTs in the equatorial Pacific (PC 1), North Pacific (PC 2), and North Atlantic (PC 3), with spatial patterns (as defined by the empirical orthogonal functions) consistent with our understanding of North American drought responses to SST forcing. We use a large-ensemble statistical modeling approach to determine how successfully these drought/pluvial events can be reproduced using these three modes of variability. Using Pacific forcing only (PCs 1-2), we are able to reproduce the 1948-1957 drought and 1905-1917 pluvial above a 95% random noise threshold in over 90% of the ensemble members; the addition of Atlantic forcing (PCs 1-2-3) provides only marginal improvement. For the 1998-2002 drought, Pacific forcing reproduces the drought above noise in over 65% of the ensemble members, with the addition of Atlantic forcing increasing the number passing to over 80%. The severity of the drought, however, is underestimated in the ensemble median, suggesting this drought intensity can only be achieved through internal variability or other processes. Pacific-only forcing does a poor job of reproducing the 1932-1939 drought pattern in the ensemble median, and less than one third of ensemble members exceed the noise threshold (28%). Inclusion of Atlantic forcing improves the ensemble median drought pattern and nearly doubles the number of ensemble members passing the noise threshold (52%). Even with the inclusion of Atlantic forcing, the intensity of the simulated 1932-1939 drought is muted, and the drought itself extends too far into the southwest and southern Great Plains. To an even greater extent than for the 1998-2002 drought, these results suggest that much of the variance in the 1932-1939 drought is dependent on processes other than SST forcing. This study highlights the importance of internal noise and non-SST processes for hydroclimatic variability over North America, complementing existing research using general circulation models.
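The first analysis step described above, extracting leading modes from a gridded drought index and relating their time series to SST indices, can be sketched as follows. The array shapes, the synthetic data, and the choice of a Nino-3.4-like index are illustrative assumptions, not the study's actual inputs.

```python
# Schematic PCA of a gridded drought-index field and correlation of PC 1 with an SST index.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_years, n_cells = 106, 500                          # e.g., 1900-2005 on a synthetic grid
nino34 = rng.normal(size=n_years)                    # stand-in equatorial Pacific SST index
pdsi = (np.outer(nino34, rng.normal(size=n_cells))   # one mode tied to the SST index
        + rng.normal(size=(n_years, n_cells)))       # plus unstructured noise

pca = PCA(n_components=5)
pcs = pca.fit_transform(pdsi - pdsi.mean(axis=0))
print("variance explained by PCs 1-5:", pca.explained_variance_ratio_.round(2))

r = np.corrcoef(pcs[:, 0], nino34)[0, 1]
print("correlation of PC 1 with the SST index:", round(abs(r), 2))
```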
Manzoni, Paolo; Memo, Luigi; Mostert, Michael; Gallo, Elena; Guardione, Roberta; Maestri, Andrea; Saia, Onofrio Sergio; Opramolla, Anna; Calabrese, Sara; Tavella, Elena; Luparia, Martina; Farina, Daniele
2014-09-01
Retinopathy of prematurity (ROP) is a multifactorial disease with evidence of many associated risk factors. Erythropoietin has been reported to be associated with this disorder in a murine model, as well as in humans in some single-center reports. We reviewed data from two large tertiary NICUs in Italy to test the hypothesis that the use of erythropoietin may be associated with the development of the most severe stages of ROP in extremely low birth weight (ELBW) neonates. Retrospective study by review of patient charts and eye examination index cards of infants with birth weight <1000 g admitted to two large tertiary NICUs in Northern Italy (Sant'Anna Hospital NICU in Torino, and Ca' Foncello Hospital Neonatology in Treviso) in the years 2005 to 2007. The standard protocol for administration of EPO in the two NICUs consisted of 250 IU/kg three times a week in 6-week courses (4-week courses in infants weighing 1001-1500 g). Univariate analysis was performed to assess whether the use of EPO was associated with severe (threshold) ROP. A control, multivariate statistical analysis was performed by entering into a logistic regression model a number of neonatal and perinatal variables that had been associated with threshold ROP in univariate analysis. During the study period, 211 ELBW infants were born at the two facilities and survived to discharge. Complete data were obtained for 197 of them. Threshold retinopathy of prematurity occurred in 26.9% (29 of 108) of ELBW infants who received erythropoietin therapy, as compared with 13.5% (12 of 89) of those who did not receive erythropoietin (OR 2.35; 95% CI 1.121-4.949; p=0.02 in univariate analysis, and p=0.04 at multivariate logistic regression after controlling for the following variables: birth weight, gestational age, days on supplemental oxygen, systemic fungal infection, vaginal delivery). Use of erythropoietin was not significantly associated with other major sequelae of prematurity (intraventricular hemorrhage, bronchopulmonary dysplasia, necrotizing enterocolitis). Use of erythropoietin is an additional, independent predictor of threshold ROP in ELBW neonates. Larger prospective, population-based studies should further clarify the extent of this association. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
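The adjusted odds ratio and confidence interval reported above are the standard outputs of a multivariable logistic model; a minimal sketch follows. The dataset is synthetic and the column names are hypothetical stand-ins for the chart-review variables listed in the abstract.

```python
# Adjusted odds ratio with 95% CI from a multivariable logistic regression; synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 197                                              # infants with complete data in the study
df = pd.DataFrame({
    "epo": rng.integers(0, 2, n),                    # hypothetical 0/1 exposure coding
    "birth_weight": rng.normal(800, 120, n),
    "gestational_age": rng.normal(26, 2, n),
    "days_on_oxygen": rng.integers(0, 90, n),
    "fungal_infection": rng.integers(0, 2, n),
    "vaginal_delivery": rng.integers(0, 2, n),
})
lin = -1.5 + 0.85 * df.epo - 0.002 * (df.birth_weight - 800)
df["threshold_rop"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit("threshold_rop ~ epo + birth_weight + gestational_age +"
                  " days_on_oxygen + fungal_infection + vaginal_delivery",
                  data=df).fit(disp=0)
or_epo = np.exp(model.params["epo"])
ci_low, ci_high = np.exp(model.conf_int().loc["epo"])
print(f"adjusted OR for EPO exposure: {or_epo:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```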
NASA Astrophysics Data System (ADS)
Berryman, E.; Barnard, H. R.; Brooks, P. D.; Adams, H.; Burns, M. A.; Wilson, W.; Stielstra, C. M.
2013-12-01
A current ecohydrological challenge is quantifying the exact nature of carbon (C) and water couplings across landscapes. An emerging framework of understanding places plant physiological processes as a central control over soil respiration, the largest source of CO2 to the atmosphere. In dry montane forests, spatial and temporal variability in forest physiological processes is governed by hydrological patterns. Critical feedbacks involving respiration, moisture supply and tree physiology are poorly understood and must be quantified at the landscape level to better predict the carbon cycle implications of regional drought under future climate change. We present data from an experiment designed to capture landscape variability in key coupled hydrological and C processes in forests of Colorado's Front Range. Sites encompass three catchments within the Boulder Creek watershed, range from 1480 m to 3021 m above sea level and are co-located with the DOE Niwot Ridge Ameriflux site and the Boulder Creek Critical Zone Observatory. Key hydrological measurements (soil moisture, transpiration) are coupled with soil respiration measurements within each catchment at different landscape positions. This three-dimensional study design also allows for the examination of the role of water subsidies from uplands to lowlands in controlling respiration. Initial findings from 2012 reveal a moisture threshold in the sensitivity of soil respiration to temperature. This threshold may derive from tree physiological responses to variation in moisture availability, which in turn is controlled by the persistence of snowpack. Using data collected in 2013, we first determine whether respiration moisture thresholds represent triggers for transpiration at the individual tree level. Next, using stable isotope ratios of soil respiration and of xylem and soil water, we compare the depths of respiration to the depths of water uptake to assign tree vs. understory sources of respiration. This will help determine whether tree root-zone respiration exhibits a similar moisture threshold. Lastly, we examine whether moisture thresholds for temperature sensitivity are consistent across a range of snowpack persistence. Findings are compared to data collected from sites in Arizona and New Mexico to better establish the role of winter precipitation in governing growing season respiration rates. The outcome of this study will contribute to a better understanding of linkages among water, tree physiology, and soil respiration, with the ultimate goal of scaling plot-level respiration fluxes to entire catchments.
Multi-scale landscape factors influencing stream water quality in the state of Oregon.
Nash, Maliha S; Heggem, Daniel T; Ebert, Donald; Wade, Timothy G; Hall, Robert K
2009-09-01
Enterococci bacteria are used to indicate the presence of human and/or animal fecal material in surface water. In addition to human influences on the quality of surface water, cattle grazing is a widespread and persistent ecological stressor in the Western United States. Cattle may affect surface water quality directly by depositing nutrients and bacteria, and indirectly by damaging stream banks or removing vegetation cover, which may lead to increased sediment loads. This study used State of Oregon surface water data to determine the likelihood of animal pathogen presence using enterococci and analyzed the spatial distribution and relationship of biotic (enterococci) and abiotic (nitrogen and phosphorus) surface water constituents to landscape metrics and other variables (e.g., human use, percent riparian cover, natural cover, grazing). We used a grazing potential index (GPI) based on proximity to water, land ownership and forage availability. The mean and variability of the GPI, forage availability, stream density and length, and landscape metrics were related to enterococci and to many forms of nitrogen and phosphorus in standard and logistic regression models. The GPI did not have a significant role in the models, but forage-related variables contributed significantly. Urban land use within the stream reach was the main driving factor where the threshold (≥35 cfu/100 mL) was exceeded, whereas agriculture was the driving force in elevating enterococci at sites where the enterococci concentration was <35 cfu/100 mL. Landscape metrics related to the amount of agriculture, wetlands and urban land all contributed to increasing nutrients in surface water, but at different scales. The probability of a site having enterococci concentrations above the threshold was much lower in areas of natural land cover and much higher in areas with more urban land use within 60 m of the stream. A 1% increase in natural land cover was associated with a 12% decrease in the predicted odds of a site exceeding the threshold. In contrast to natural land cover, a one-unit change in each of man-made barren and urban land use increased the likelihood of exceeding the threshold by 73% and 11%, respectively. Change in urban land use had a greater influence on the likelihood of a site exceeding the threshold than change in natural land cover.
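The percentage changes in odds quoted above are the usual transformation of logistic regression coefficients. As a hedged illustration (the coefficients shown are back-calculated from the reported percentages, not taken from the paper):

\[ \%\Delta\,\mathrm{odds} = \left(e^{\beta} - 1\right) \times 100\% , \]

so a coefficient of roughly β ≈ −0.13 per 1% natural cover gives about −12%, β ≈ 0.55 for man-made barren land gives about +73%, and β ≈ 0.10 for urban land use gives about +11%.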
NASA Astrophysics Data System (ADS)
Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.
2017-12-01
Drinking water treatment utilities (DWTUs) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter, measured as total organic carbon (TOC), and physical removal of pathogenic microorganisms is achieved by filtration and monitored through turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment-specific data (i.e., streamflow) and have difficulty predicting them under non-stationary climate. A modelling framework was developed to assist DWTUs in assessing their risk of future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that requires only water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted from widely available data via statistical methods, with temperature, precipitation, the Palmer Drought Severity Index, and NDVI at various lags shown to be important predictors of TOC, and that TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
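A hedged sketch of a peaks-over-threshold analysis of the kind described follows: fit a generalized Pareto distribution to TOC excesses above a treatability threshold and estimate the probability of exceeding a higher level. The numbers, threshold, and synthetic record are illustrative assumptions only, not the study's data or exact method.

```python
# Peaks-over-threshold exceedance probability with a generalized Pareto tail; synthetic data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
toc = rng.lognormal(mean=1.0, sigma=0.35, size=3650)   # synthetic daily influent TOC (mg/L)
u = 4.0                                                 # hypothetical treatability threshold (mg/L)

excesses = toc[toc > u] - u
rate = excesses.size / toc.size                         # empirical chance a sample lies above u
shape, loc, scale = genpareto.fit(excesses, floc=0)     # fit the tail of the excesses

level = 2.0                                             # how far above the threshold we ask about
p_exceed = rate * genpareto.sf(level, shape, loc=0, scale=scale)
print(f"P(TOC > {u + level:.1f} mg/L) ~ {p_exceed:.4f}")
```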
Werner-Wasik, Maria; Nelson, Arden D; Choi, Walter; Arai, Yoshio; Faulhaber, Peter F; Kang, Patrick; Almeida, Fabio D; Xiao, Ying; Ohri, Nitin; Brockway, Kristin D; Piper, Jonathan W; Nelson, Aaron S
2012-03-01
To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10-37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7-264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. For spheres <20 mm in diameter, GRADIENT was the most accurate with a mean absolute % error in diameter of 8.15% (10.2% SD) compared with 49.2% (51.1% SD) for 45% THRESHOLD (p < 0.005). For larger spheres, the methods were statistically equivalent. For varying source-to-background ratios, GRADIENT was the most accurate for spheres >20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of -0.05% (16.2% SD) compared with 25% THRESHOLD at -2.1% (34.2% SD) and MANUAL at -16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene's test). GRADIENT was the most accurate and consistent technique for target volume contouring. GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in radiation therapy planning and response assessment. Copyright © 2012. Published by Elsevier Inc.
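To make the contrast between the two automated approaches concrete, the toy sketch below segments a synthetic lesion volume with a fixed 40%-of-maximum threshold and with a simple gradient-guided rule that picks the iso-level whose boundary lies on the steepest intensity gradient. It is a schematic illustration of the idea only, not the commercial GRADIENT implementation evaluated in the study.

```python
# Constant-threshold vs. gradient-guided contouring on a synthetic 3-D "lesion".
import numpy as np
from scipy import ndimage

z, y, x = np.mgrid[-20:21, -20:21, -20:21]
pet = (np.exp(-(x**2 + y**2 + z**2) / (2 * 8.0**2))          # smooth blob standing in for a lesion
       + 0.02 * np.random.default_rng(5).normal(size=x.shape))
grad_mag = ndimage.gaussian_gradient_magnitude(pet, sigma=1.0)

# Constant-threshold contour at a fixed percentage of the maximum voxel value
mask_threshold = pet >= 0.40 * pet.max()

# Gradient-guided contour: choose the iso-level whose boundary sits on the steepest gradient
def boundary_gradient(level):
    mask = pet >= level
    edge = mask ^ ndimage.binary_erosion(mask)                # one-voxel boundary shell
    return grad_mag[edge].mean() if edge.any() else 0.0

levels = np.linspace(0.2, 0.8, 25) * pet.max()
best_level = max(levels, key=boundary_gradient)
mask_gradient = pet >= best_level

print("40%-threshold volume (voxels):", int(mask_threshold.sum()))
print("gradient-selected volume (voxels):", int(mask_gradient.sum()))
```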
The origin of Total Solar Irradiance variability on timescales less than a day
NASA Astrophysics Data System (ADS)
Shapiro, Alexander; Krivova, Natalie; Schmutz, Werner; Solanki, Sami K.; Leng Yeo, Kok; Cameron, Robert; Beeck, Benjamin
2016-07-01
Total Solar Irradiance (TSI) varies on timescales from minutes to decades. It is generally accepted that variability on timescales of a day and longer is dominated by solar surface magnetic fields. For shorter timescales, several additional sources of variability have been proposed, including convection and oscillations. However, available simplified and highly parameterised models could not accurately explain the observed variability in high-cadence TSI records. We employed high-cadence solar imagery from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory and the SATIRE (Spectral And Total Irradiance Reconstruction) model of solar irradiance variability to reconstruct the magnetic component of TSI variability. Recent 3D simulations of solar near-surface convection with the MURAM code were used to calculate the TSI variability caused by convection. This allowed us to determine the threshold timescale between TSI variability caused by the magnetic field and that caused by convection. Our model successfully replicates the TSI measurements by the PICARD/PREMOS radiometer, which span the period from July 2010 to February 2014 at 2-minute cadence. Hence, we demonstrate that solar magnetism and convection can account for TSI variability on all timescales at which it has ever been measured (apart from the 5-minute component from p-modes).
NASA Astrophysics Data System (ADS)
Hodell, D. A.; Nicholl, J.
2013-12-01
During the Middle Pleistocene Transition (MPT), the climate system evolved from a more linear response to insolation forcing in the '41-kyr world' to one that was decidedly non-linear in the '100-kyr world'. Smaller ice sheets in the early Pleistocene gave way to larger ice sheets in the late Pleistocene with an accompanying change in ice sheet dynamics. We studied Sites U1308 (49° 52.7'N, 24° 14.3'W; 3871 m) and U1304 (53° 3.4'N, 33° 31.8'W; 3024 m) in the North Atlantic to determine how ice sheet dynamics and millennial-scale climate variability evolved as glacial boundary conditions changed across the MPT. The frequency of ice-rafted detritus (IRD) in the North Atlantic was greater during glacial stages prior to 650 ka (MIS 16), reflecting more frequent crossing of an ice volume threshold when the climate system spent more time in the 'intermediate ice volume' window, resulting in persistent millennial-scale variability. The rarity of Heinrich Events containing detrital carbonate and the more frequent occurrence of IRD events prior to 650 ka may indicate the presence of 'low-slung, slippery ice sheets' that flowed more readily than their post-MPT counterparts (Bailey et al., 2010). Ice volume surpassed a critical threshold across the MPT that permitted ice sheets to survive boreal summer insolation maxima, thereby increasing ice volume and thickness, lengthening glacial cycles, and activating the dynamical processes responsible for Laurentide Ice Sheet instability in the region of Hudson Strait (i.e., Heinrich events). The excess ice volume during post-MPT glacial maxima provided a large, unstable reservoir of freshwater to be released to the North Atlantic during glacial terminations, with the potential to perturb Atlantic Meridional Overturning Circulation. We speculate that orbital- and millennial-scale variability co-evolved across the MPT and that the interaction of processes on orbital and suborbital timescales gave rise to the changing patterns of glacial-interglacial cycles through the Quaternary. Bailey, I., Bolton, C.T., DeConto, R.M., Pollard, D., Schiebel, R. and Wilson, P.A. (2010) A low threshold for North Atlantic ice rafting from "low-slung slippery" late Pliocene ice sheets. Paleoceanography, 25, PA1212 (doi:10.1029/2009PA001736).
USDA-ARS?s Scientific Manuscript database
The objectives of this study were (1) to analyze the association between hematological parameters (CBC) and gender at arrival at a stocker receiving facility and the risk of subsequent clinical bovine respiratory disease (BRD) diagnosis, and (2) to determine and evaluate the accuracy of CBC parameter threshold...
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2013-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope…
2012-09-01
interpreting the state vector as the health indicator, and a threshold on this variable is used to compute EOL (end-of-life) and RUL (remaining useful life). Here, we... End-of-life (EOL) would match the true spread and would not change from one experiment to another. This is, however, in practice impossible to achieve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vadasz, C.; Fleischer, A.; Carpi, D.
1995-02-27
Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of the total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, a buffering maternal influence, and heterosis. High correlation was observed between the average duration (s/episode) and the frequency of occurrence of spike-and-wave episodes (n/12 hr) in the parental and segregating generations, indicating that common genes affect both the duration and the frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges. 67 refs., 3 figs., 5 tabs.
Across-channel interference in intensity discrimination: The role of practice and listening strategy
Buss, Emily
2008-01-01
Pure tone intensity discrimination thresholds can be elevated by the introduction of remote maskers with roved level. This effect is on the order of 10 dB [10log(ΔI/I)] in some conditions and can be demonstrated under conditions of little or no energetic masking. The current study examined the effect of practice and observer strategy on this phenomenon. Experiment 1 included observers who had no formal experience with intensity discrimination and provided training over six hours on a single masked intensity discrimination task to assess learning effects. Thresholds fell with practice for most observers, with significant improvements in 6 out of 8 cases. Despite these improvements significant masking remained in all cases. The second experiment assessed trial-by-trial effects of roved masker level. Conditional probability of a ‘signal-present’ response as a function of the rove value assigned to each of the two masker tones indicates fundamental differences among observers’ processing strategies, even after six hours of practice. The variability in error patterns across practiced listeners suggests that observers approach the task differently, though this variability does not appear to be related to sensitivity. PMID:18177156
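For readers unused to this unit, the roughly 10 dB masking effect quoted above translates into intensity terms by simple arithmetic (a generic conversion, not a result of the study):

\[ 10\log_{10}\!\left(\frac{\Delta I}{I}\right)_{\text{masked}} - 10\log_{10}\!\left(\frac{\Delta I}{I}\right)_{\text{unmasked}} = 10\ \text{dB} \;\Longrightarrow\; \left(\frac{\Delta I}{I}\right)_{\text{masked}} = 10\left(\frac{\Delta I}{I}\right)_{\text{unmasked}}, \]

i.e., the just-detectable intensity increment relative to the pedestal grows by a factor of ten.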
Analysis of continuous-time switching networks
NASA Astrophysics Data System (ADS)
Edwards, R.
2000-11-01
Models of a number of biological systems, including gene regulation and neural networks, can be formulated as switching networks, in which the interactions between the variables depend strongly on thresholds. An idealized class of such networks, in which the switching takes the form of Heaviside step functions but variables still change continuously in time, has been proposed as a useful simplification to gain analytic insight. These networks, called here Glass networks after their originator, are simple enough mathematically to allow significant analysis without restricting the range of dynamics found in analogous smooth systems. A number of results have been obtained before, particularly regarding the existence and stability of periodic orbits in such networks, but important cases were not considered. Here we present a coherent method of analysis that summarizes previous work and fills in some of the gaps, as well as including some new results. Furthermore, we apply this analysis to a number of examples, including surprisingly long and complex limit cycles involving sequences of hundreds of threshold transitions. Finally, we show how the above methods can be extended to investigate aperiodic behaviour in specific networks, though a complete analysis will have to await new results in matrix theory and symbolic dynamics.
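A tiny numerical sketch of the model class (not of any network analyzed in the paper) is given below: two variables decay linearly and are driven by Heaviside step interactions wired as a negative feedback loop, so the state cycles through the four switching domains around the threshold intersection. The thresholds, initial conditions, and Euler integration are arbitrary illustrative choices.

```python
# Toy Glass-type switching network: continuous linear decay, step-function interactions.
import numpy as np

THETA = 0.5                                   # common switching threshold

def step(v, theta=THETA):                     # Heaviside step: 1 above threshold, else 0
    return 1.0 if v > theta else 0.0

def simulate(t_end=40.0, dt=1e-3):
    x = np.array([0.2, 0.8])
    traj = [x.copy()]
    for _ in range(int(t_end / dt)):
        # Negative feedback wiring: x1 is produced while x2 is OFF, x2 while x1 is ON
        f = np.array([1.0 - step(x[1]), step(x[0])])
        x = x + dt * (f - x)                  # dx/dt = F(step inputs) - x
        traj.append(x.copy())
    return np.array(traj)

traj = simulate()
above = (traj[:, 0] > THETA).astype(int)
print("threshold crossings of x1:", int(np.abs(np.diff(above)).sum()))
```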