Is it valid to calculate the 3-kilohertz threshold by averaging 2 and 4 kilohertz?
Gurgel, Richard K; Popelka, Gerald R; Oghalai, John S; Blevins, Nikolas H; Chang, Kay W; Jackler, Robert K
2012-07-01
Many guidelines for reporting hearing results use the threshold at 3 kilohertz (kHz), a frequency not measured routinely. This study assessed the validity of estimating the missing 3-kHz threshold by averaging the measured thresholds at 2 and 4 kHz. The estimated threshold was compared to the measured threshold at 3 kHz individually and when used in the pure-tone average (PTA) of 0.5, 1, 2, and 3 kHz in audiometric data from 2170 patients. The difference between the estimated and measured thresholds for 3 kHz was within ± 5 dB in 72% of audiograms, ± 10 dB in 91%, and within ± 20 dB in 99% (correlation coefficient r = 0.965). The difference between the PTA threshold using the estimated threshold compared with using the measured threshold at 3 kHz was within ± 5 dB in 99% of audiograms (r = 0.997). The estimated threshold accurately approximates the measured threshold at 3 kHz, especially when incorporated into the PTA.
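A worked sketch of the arithmetic described above (values are illustrative, not data from the study):

```python
# Estimate the unmeasured 3-kHz threshold as the mean of the measured
# 2- and 4-kHz thresholds, then use it in the four-frequency pure-tone
# average (PTA) of 0.5, 1, 2, and 3 kHz. Thresholds are in dB HL.

def estimate_3khz(t2k: float, t4k: float) -> float:
    return (t2k + t4k) / 2.0

def pta(t500: float, t1k: float, t2k: float, t3k: float) -> float:
    return (t500 + t1k + t2k + t3k) / 4.0

t3k_est = estimate_3khz(t2k=30.0, t4k=50.0)  # -> 40.0 dB HL
print(pta(15.0, 20.0, 30.0, t3k_est))         # -> 26.25 dB HL
```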
Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C
2002-08-01
To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady-state response-estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, relative to the normal group, the difference between auditory steady-state response-estimated and behavioral thresholds was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, with ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR-estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady-state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
Froud, Robert; Abel, Gary
2014-01-01
Background Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
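The sum-of-squares estimator is simple to state in code. The sketch below is a minimal illustration on toy data (an assumed anchor-based improvement label), not the authors' implementation:

```python
import numpy as np

def mic_threshold_sum_of_squares(change_scores, improved):
    """Pick the cut-point minimising (1-sensitivity)^2 + (1-specificity)^2,
    i.e. the ROC point closest to the top-left corner."""
    change_scores = np.asarray(change_scores, dtype=float)
    improved = np.asarray(improved, dtype=bool)
    best_cut, best_d2 = None, np.inf
    for cut in np.unique(change_scores):
        pred = change_scores >= cut            # classified as improved
        se = np.mean(pred[improved])           # sensitivity among improved
        sp = np.mean(~pred[~improved])         # specificity among not improved
        d2 = (1 - se) ** 2 + (1 - sp) ** 2
        if d2 < best_d2:
            best_cut, best_d2 = cut, d2
    return best_cut

# Toy data: change scores plus an external anchor of improvement.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(8, 4, 100), rng.normal(2, 4, 100)])
anchor = np.concatenate([np.ones(100, bool), np.zeros(100, bool)])
print(mic_threshold_sum_of_squares(scores, anchor))
```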
Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.
Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth
2018-03-26
When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
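The lapse-induced bias reported here can be reproduced with a small simulation. The sketch below assumes a 2AFC cumulative-Gaussian observer and arbitrary staircase settings; it is not the study's code, but it shows reversal-based threshold estimates inflating as the lapse rate grows:

```python
import numpy as np
from math import erf, sqrt

def p_correct(x, mu=0.0, sigma=1.0, gamma=0.5, lam=0.0):
    # 2AFC psychometric function with guess rate gamma and lapse rate lam:
    # P(correct) = gamma + (1 - gamma - lam) * Phi((x - mu) / sigma)
    phi = 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    return gamma + (1.0 - gamma - lam) * phi

def levitt_1up_2down(lam, n_trials=400, start=3.0, step=0.25, seed=0):
    # 1-up 2-down staircase: two correct in a row -> harder, one wrong -> easier.
    # Threshold estimate = mean of reversal levels, early reversals discarded.
    rng = np.random.default_rng(seed)
    x, run, last_dir = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if rng.random() < p_correct(x, lam=lam):
            run += 1
            if run == 2:
                run = 0
                if last_dir == +1:
                    reversals.append(x)
                x, last_dir = x - step, -1
        else:
            run = 0
            if last_dir == -1:
                reversals.append(x)
            x, last_dir = x + step, +1
    return float(np.mean(reversals[4:]))

for lam in (0.0, 0.05, 0.10):
    est = np.mean([levitt_1up_2down(lam, seed=s) for s in range(200)])
    print(f"lapse rate {lam:.2f}: mean threshold estimate {est:+.2f}")
```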
Electroconvulsive therapy stimulus titration: Not all it seems.
Rosenman, Stephen J
2018-05-01
To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.
Burr, Tom; Hamada, Michael S.; Howell, John; ...
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
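A minimal sketch of residual-based alarm setting in this spirit (toy Gaussian residuals; real safeguards residuals may be mixtures, which is why a distribution-free quantile is shown alongside the normal-theory threshold):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 1.0, 5000)   # stand-in for in-control PM residuals
alpha = 0.001                             # target false-alarm rate per test

# Normal-theory threshold (Shewhart-style): mean + z_{1-alpha} * sd.
t_normal = residuals.mean() + norm.ppf(1 - alpha) * residuals.std(ddof=1)

# Distribution-free alternative: empirical (1 - alpha) quantile,
# safer when residuals follow a mixture rather than a single Gaussian.
t_empirical = np.quantile(residuals, 1 - alpha)

print(f"normal-theory threshold:      {t_normal:.2f}")
print(f"empirical quantile threshold: {t_empirical:.2f}")
```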
Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Pang; Yu, Yue
2017-05-01
This paper presents a position estimation model for a switched reluctance motor based on a single threshold angle. Given the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (7.5° threshold angle, 12/8 SRM). The sensorless model was built in Matlab/Simulink, and simulations were carried out under both steady-state and transient conditions, verifying the validity and feasibility of the method.
On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?
Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro
2016-01-01
Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints.
Influence of Spatial and Chromatic Noise on Luminance Discrimination.
Miquilini, Leticia; Walker, Natalie A; Odigie, Erika A; Guimarães, Diego Leite; Salomão, Railson Cruz; Lacerda, Eliza Maria Costa Brito; Cortes, Maria Izabel Tentes; de Lima Silveira, Luiz Carlos; Fitzgerald, Malinda E C; Ventura, Dora Fix; Souza, Givago Silva
2017-12-05
Pseudoisochromatic figures are designed so that discrimination of a chromatic target from a background rests solely on chromatic differences. This is accomplished by introducing luminance and spatial noise, thereby eliminating these two dimensions as cues. The inverse rationale can also be applied to luminance discrimination, if spatial and chromatic noise are used to mask those cues. In the current study, luminance contrast thresholds were estimated using a novel stimulus that employs chromatic and spatial noise to mask those cues in a luminance discrimination task. This was accomplished by presenting stimuli composed of a mosaic of randomly colored circles. A Landolt-C target differed from the background only in luminance. Luminance contrast thresholds were estimated for different chromatic noise saturation conditions and compared to luminance contrast thresholds estimated using the same target in a non-mosaic stimulus. Moreover, the influence of the chromatic content of the noise on the luminance contrast threshold was also investigated. The luminance contrast threshold depended on the strength of the chromatic noise. It was 10-fold higher than thresholds estimated from the non-mosaic stimulus, but independent of the colour-space location in which the noise was modulated. The present study introduces a new method to investigate luminance vision intended for both basic science and clinical applications.
Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses
Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali
2017-01-01
Introduction: Mild Cognitive Impairment (MCI), a disorder of elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). The temporal region is among the first areas to be impaired in the early stages of AD. Therefore, auditory cortical evoked potentials could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of the Auditory Steady-State Response (ASSR) at 40 Hz and 80 Hz modulation rates were compared between the AD, MCI, and control groups. A total of 42 participants (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: Significant differences in normal subjects were observed in estimated ASSR thresholds with 2 modulation rates in 3 frequencies in both ears. However, the difference was significant only at 500 Hz in the MCI group, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients with regard to the estimated ASSR thresholds with 2 modulation rates and 3 frequencies in both ears. A significant difference was also observed between the normal and MCI groups at 2000 Hz. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in the auditory cortex relative to normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates by the ASSR test could be helpful for detecting cognitive impairment. PMID:29158880
Estimating the epidemic threshold on networks by deterministic connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
Psychophysical measurements in children: challenges, pitfalls, and considerations.
Witton, Caroline; Talcott, Joel B; Henning, G Bruce
2017-01-01
Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
Comparison between ABR with click and narrow band chirp stimuli in children.
Zirn, Stefan; Louza, Julia; Reiman, Viktor; Wittlinger, Natalie; Hempel, John-Martin; Schuster, Maria
2014-08-01
Click- and chirp-evoked auditory brainstem responses (ABR) are applied for the estimation of hearing thresholds in children. The present study analyzes ABR thresholds across a large sample of children's ears obtained with both methods. The aim was to demonstrate the correlation between both methods using narrow band chirp and click stimuli. Click- and chirp-evoked ABRs were measured in 253 children aged 0 to 18 years to determine their individual auditory threshold. The delay-compensated stimuli were narrow band CE chirps with either 2000 Hz or 4000 Hz center frequencies. Measurements were performed consecutively during natural sleep, under sedation, or under general anesthesia. Threshold estimation was performed for each measurement by two experienced audiologists. Pearson correlation analysis revealed highly significant correlations (r=0.94) between click- and chirp-derived thresholds for both 2 kHz and 4 kHz chirps. No considerable differences were observed between age ranges or between genders. Comparing the thresholds estimated using ABR with click stimuli and chirp stimuli, only 0.8-2% of the 2000 Hz NB-chirp and 0.4-1.2% of the 4000 Hz NB-chirp measurements differed by more than 15 dB across different degrees of hearing loss or normal hearing. The results suggest that either NB-chirp or click ABR is sufficient for threshold estimation. This holds for the chirp frequencies of 2000 Hz and 4000 Hz. The use of either click- or chirp-evoked ABR allows a reduction of recording time in young infants. Nevertheless, to cross-check the results of one of the methods, we recommend measurements with the other method as well.
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
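A minimal sketch of the fixed-slope idea (a logistic form and toy data are assumed here; the paper's exact function and protocol details may differ): fit slope and intercept to the pooled group data, then refit each judge with the slope frozen so only the intercept, and hence the threshold, varies.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, intercept, slope):
    return 1.0 / (1.0 + np.exp(-(intercept + slope * x)))

log_conc = np.log10([0.1, 0.3, 1.0, 3.0, 10.0])     # stimulus steps
group_p = np.array([0.05, 0.2, 0.55, 0.85, 0.98])   # pooled detection rates
(group_b0, group_b1), _ = curve_fit(logistic, log_conc, group_p, p0=[0.0, 1.0])

judge_p = np.array([0.0, 0.1, 0.4, 0.8, 1.0])        # one judge, few replicates
(judge_b0,), _ = curve_fit(lambda x, b0: logistic(x, b0, group_b1),
                           log_conc, judge_p, p0=[group_b0])

# Threshold = stimulus level at P = 0.5, i.e. where intercept + slope*x = 0.
print("judge threshold (log units):", -judge_b0 / group_b1)
```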
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... estimated cost of the case exceeds the adjusted outlier threshold. We calculate the adjusted outlier... to 80 percent of the difference between the estimated cost of the case and the outlier threshold. In... Federal Prospective Payment Rates VI. Update to Payments for High-Cost Outliers under the IRF PPS A...
USDA-ARS?s Scientific Manuscript database
(Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Smith, Stephen W.
2007-01-01
Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations, and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study their effect on estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant-R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant-Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.
Generalised form of a power law threshold function for rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders
2010-05-01
The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1·An^α2·D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hour or n-day antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A7d^(−1)·D^(−0.43) and I = 28534·A150d^(−1)·D^(−0.43) for debris flows having volumes greater than 3000 m3. Thresholds are also derived for debris flows greater than 200,000 m3 and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m3 thresholds and from 6 to 0 false alarms for the 200,000 m3 thresholds.
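The combined threshold test is easy to express in code. The sketch below uses the published 3000 m3 parameter values; the storm values are invented for illustration:

```python
# A storm exceeds the warning level only if it lies at/above BOTH fitted
# functions (7-day and 150-day antecedent precipitation) simultaneously.

def intensity_threshold(a1: float, a2: float, beta: float,
                        antecedent: float, duration_h: float) -> float:
    """I = a1 * A_n**a2 * D**beta (mm/h)."""
    return a1 * antecedent**a2 * duration_h**beta

def exceeds(intensity: float, duration_h: float,
            a7d: float, a150d: float) -> bool:
    t1 = intensity_threshold(2322.0, -1.0, -0.43, a7d, duration_h)
    t2 = intensity_threshold(28534.0, -1.0, -0.43, a150d, duration_h)
    return intensity >= t1 and intensity >= t2

# Storm: 25 mm/h for 6 h, with 120 mm (7-day) and 900 mm (150-day) antecedent.
print(exceeds(25.0, 6.0, a7d=120.0, a150d=900.0))  # -> True
```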
NASA Astrophysics Data System (ADS)
Kazanskiĭ, P. G.
1989-02-01
A threshold of photoinduced conversion of an ordinary wave into an extraordinary one was discovered for lithium niobate optical waveguides. The threshold intensity of the radiation was determined for waveguides prepared under different conditions. The experimental results were compared with theoretical estimates.
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
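For readers unfamiliar with thermal constants, the linear degree-day model can be sketched as follows (illustrative data, not the published P. solenopsis estimates):

```python
import numpy as np

# Fit development rate (1/days) against temperature; in the linear model
# Tmin = -a/b is the lower developmental threshold and K = 1/b is the
# thermal constant in degree-days.

temps = np.array([15.0, 20.0, 25.0, 27.0, 30.0, 32.0])  # deg C
days = np.array([55.0, 32.0, 23.0, 21.0, 18.5, 17.0])   # days to adult
rate = 1.0 / days

b, a = np.polyfit(temps, rate, 1)   # rate = a + b*T
t_min = -a / b                       # lower developmental threshold (deg C)
k = 1.0 / b                          # thermal constant (degree-days)
print(f"Tmin = {t_min:.1f} C, K = {k:.0f} degree-days")
```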
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban area and its dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods, which suffer from over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The result indicates that urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban area in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
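A minimal sketch of the a:b model's fit on toy data (a least-squares grid search is shown; the paper also describes a profile-likelihood fit, a modified likelihood ratio test, and bootstrap confidence intervals):

```python
import numpy as np

def fit_ab_model(titers, infected):
    """Below a candidate threshold the infection probability is a constant a,
    at/above it a constant b; pick the threshold minimising squared error."""
    titers = np.asarray(titers, float)
    infected = np.asarray(infected, float)
    best = None
    for t in np.unique(titers)[1:]:              # candidate thresholds
        below, above = titers < t, titers >= t
        a, b = infected[below].mean(), infected[above].mean()
        sse = ((infected[below] - a) ** 2).sum() + ((infected[above] - b) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, a, b)
    return best[1:]                               # (threshold, a, b)

rng = np.random.default_rng(2)
titers = rng.uniform(0, 8, 400)
infected = rng.random(400) < np.where(titers < 4.0, 0.5, 0.1)
print(fit_ab_model(titers, infected))             # ~ (4.0, 0.5, 0.1)
```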
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
1997-01-01
One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so the negative bias of the high resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
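The fractional-area idea can be illustrated with simulated scenes (lognormal rain fields are an assumption made here for illustration only, not part of the TRMM processing):

```python
import numpy as np

# Across many simulated scenes, the fraction of pixels with rain rate above
# ~5 mm/h tracks the area-average rain rate.

rng = np.random.default_rng(3)
tau = 5.0                                    # mm/h threshold
frac, mean_rate = [], []
for _ in range(500):
    mu = rng.uniform(-1.0, 1.5)              # scene-to-scene variability
    rain = np.exp(rng.normal(mu, 1.0, 2500)) # one 50x50 scene, mm/h
    frac.append(np.mean(rain > tau))
    mean_rate.append(rain.mean())

print("correlation:", np.corrcoef(frac, mean_rate)[0, 1])  # close to 1
```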
Psychophysical estimation of speed discrimination. II. Aging effects
NASA Astrophysics Data System (ADS)
Raghuram, Aparna; Lakshminarayanan, Vasudevan; Khanna, Ritu
2005-10-01
We studied the effects of aging on a speed discrimination task using a pair of first-order drifting luminance gratings. Two reference speeds of 2 and 8 deg/s were presented at stimulus durations of 500 ms and 1000 ms. The choice of stimulus parameters, etc., was determined in preliminary experiments and described in Part I. Thresholds were estimated using a two-alternative-forced-choice staircase methodology. Data were collected from 16 younger subjects (mean age 24 years) and 17 older subjects (mean age 71 years). Results showed that thresholds for speed discrimination were higher for the older age group. This was especially true at stimulus duration of 500 ms for both slower and faster speeds. This could be attributed to differences in temporal integration of speed with age. Visual acuity and contrast sensitivity were not statistically observed to mediate age differences in the speed discrimination thresholds. Gender differences were observed in the older age group, with older women having higher thresholds.
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared with the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension, or diabetes.
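A minimal sketch of the RD-ITT estimate (local linear fits on each side of the cutoff; toy data with a known jump of 2):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-10, 10, 2000)              # continuous assignment variable
cutoff, bandwidth = 0.0, 5.0
y = 0.3 * x + 2.0 * (x >= cutoff) + rng.normal(0, 1, x.size)

def fit_at_cutoff(mask):
    # Linear trend within the bandwidth, evaluated at the cutoff.
    b, a = np.polyfit(x[mask], y[mask], 1)  # y = a + b*x
    return a + b * cutoff

above = (x >= cutoff) & (x <= cutoff + bandwidth)
below = (x < cutoff) & (x >= cutoff - bandwidth)
rd_itt = fit_at_cutoff(above) - fit_at_cutoff(below)
print(f"RD-ITT estimate: {rd_itt:.2f} (true effect 2.0)")
```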
Saxena, Udit; Allan, Chris; Allen, Prudence
2017-06-01
Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear, such as ear canal volume and static compliance of the middle ear, could possibly affect the measurement of reflex thresholds and thus affect their interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies was conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to those of typically developing children and adults, to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured with a 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and the Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend for higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The lack of correlation between threshold and static compliance estimates in children, as was observed in the adults, may suggest a nonmechanical explanation for age and clinically related effects.
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82·SVP + 48.60·AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: The estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
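A sketch of how the published model would be applied (assuming SVP and AP are expressed as proportions; the function names are invented for illustration):

```python
# Growth is called only when the observed percentage volume change exceeds
# the nodule's own estimated 95th-percentile absolute percentage error (APE).

def estimated_p95_ape(svp: float, ap: float) -> float:
    """Estimated 95th percentile APE (%) = 37.82*SVP + 48.60*AP - 10.87."""
    return 37.82 * svp + 48.60 * ap - 10.87

def grew(v_baseline: float, v_followup: float, svp: float, ap: float) -> bool:
    change_pct = 100.0 * (v_followup - v_baseline) / v_baseline
    return change_pct > estimated_p95_ape(svp, ap)

# Nodule with 60% surface voxels and 10% attachment: threshold ~16.7%,
# so a 20% observed volume increase counts as growth.
print(estimated_p95_ape(0.60, 0.10))       # -> ~16.7
print(grew(500.0, 600.0, 0.60, 0.10))      # -> True
```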
Finneran, James J; Houser, Dorian S
2006-05-01
Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to +18 dB.
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (−1.96 dB) when compared to an ideal infinite-resolution converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
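A quick numeric check of the low-SNR figure quoted above:

```python
import math

# The 1-bit quantization penalty of 2/pi corresponds to about -1.96 dB.
loss_db = 10.0 * math.log10(2.0 / math.pi)
print(f"{loss_db:.2f} dB")   # -> -1.96 dB
```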
Wavelet methodology to improve single unit isolation in primary motor cortex cells
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.
2016-01-01
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean squared, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
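A minimal sketch of the wavelet thresholding pipeline (using PyWavelets; the universal fixed-form threshold is shown here for brevity, whereas the paper found minimax thresholding with the Daubechies 4 wavelet to perform best):

```python
import numpy as np
import pywt  # PyWavelets

# Decompose with a Daubechies-4 mother wavelet, soft-threshold the detail
# coefficients, and reconstruct the denoised signal.

rng = np.random.default_rng(5)
n = 1024
clean = np.sin(2 * np.pi * 7 * np.arange(n) / n)   # toy "spike-free" signal
noisy = clean + 0.4 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
thr = sigma * np.sqrt(2.0 * np.log(n))              # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((denoised[:n] - clean) ** 2)))
```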
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small... ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
Patient cost-sharing, socioeconomic status, and children's health care utilization.
Nilsson, Anton; Paul, Alexander
2018-05-01
This paper estimates the effect of cost-sharing on the demand for children's and adolescents' use of medical care. We use a large population-wide registry dataset including detailed information on contacts with the health care system as well as family income. Two different estimation strategies are used: regression discontinuity design exploiting age thresholds above which fees are charged, and difference-in-differences models exploiting policy changes. We also estimate combined regression discontinuity difference-in-differences models that take into account discontinuities around age thresholds caused by factors other than cost-sharing. We find that when care is free of charge, individuals increase their number of doctor visits by 5-10%. Effects are similar in middle childhood and adolescence, and are driven by those from low-income families. The differences across income groups cannot be explained by other factors that correlate with income, such as maternal education. Copyright © 2018 Elsevier B.V. All rights reserved.
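The first identification strategy can be illustrated with a toy sharp regression-discontinuity estimate: compare visit rates just below and just above the age at which fees begin, with separate linear trends on each side. A minimal sketch with hypothetical variable names, not the authors' specification:

```python
import numpy as np

def rd_estimate(age, visits, cutoff, bandwidth=2.0):
    """Sharp RD sketch: jump in visit rates at the age threshold
    where fees begin (hypothetical data layout)."""
    d = age - cutoff
    keep = np.abs(d) <= bandwidth
    d, y = d[keep], visits[keep]
    treat = (d >= 0).astype(float)        # fees charged at/above cutoff
    # Local linear regression with separate slopes on each side.
    X = np.column_stack([np.ones_like(d), treat, d, d * treat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                        # discontinuity at the cutoff
```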
Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.
Rhudy, Jamie L; France, Christopher R
2011-07-01
The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mA) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than the Staircase NFR definition, whereas smaller stimulus increments (2 mA) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated from gas exchange parameters using the V-slope program. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N
2014-12-01
Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.
2013-01-01
The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae), is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35° C) and to estimate the temperature thresholds and thermal constants needed to develop forecasting models based on heat accumulation units. The developmental periods of the different stages of the rice leaf folder decreased as temperature increased from 18 to 34° C. Lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1° C, and thermal constants of 69, 270, 106, and 455 degree-days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic nonlinear SSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1 and 23.7° C, respectively. The upper and lower threshold temperatures were estimated as 36.4° C and 11.2° C for total development, indicating that the rate-controlling enzyme is half active and half inactive at these temperatures. These estimated thermal thresholds and degree-days could be used to predict leaf folder activity in the field for effective management. PMID:24205891
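The linear degree-day calculation in the abstract follows directly from regressing development rate on temperature: the lower threshold is the temperature at which the fitted rate reaches zero, and the thermal constant is the reciprocal of the slope. A minimal sketch with made-up development times (illustrative only, not the study's data):

```python
import numpy as np

# Hypothetical mean development times (days) of one stage at the tested
# constant temperatures within the linear range of the rate curve.
temps = np.array([18.0, 20.0, 25.0, 30.0, 32.0])   # deg C
days  = np.array([35.0, 28.0, 19.0, 14.5, 13.0])   # illustrative only

rate = 1.0 / days                        # development rate (1/day)
b, a = np.polyfit(temps, rate, 1)        # rate = a + b * T
t_lower = -a / b                         # lower threshold (rate = 0)
K = 1.0 / b                              # thermal constant (degree-days)
print(f"lower threshold ~ {t_lower:.1f} C, thermal constant ~ {K:.0f} DD")
```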
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of, or below, the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
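Of the six estimators, the FWHM method is the easiest to make concrete: find the two half-maximum crossings of an intensity profile taken across the vessel and convert their separation to millimetres. A minimal sketch assuming the profile falls below the half-maximum level on both sides of the vessel:

```python
import numpy as np

def fwhm(profile, spacing_mm):
    """Width of a 1-D intensity profile at half maximum, with linear
    interpolation of the crossings for sub-pixel accuracy. Assumes the
    profile dips below half maximum on both sides of the peak."""
    profile = np.asarray(profile, float)
    base = profile.min()
    half = base + 0.5 * (profile.max() - base)
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    left = (i0 - 1) + (half - profile[i0 - 1]) / (profile[i0] - profile[i0 - 1])
    right = i1 + (half - profile[i1]) / (profile[i1 + 1] - profile[i1])
    return (right - left) * spacing_mm
```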
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are uncorrelated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA, of up to 24.1%, exist in satellite snow cover maps when the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50% in the evaluation period and was also able to improve the results at RCZ significantly. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
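For reference, the underlying classification is a one-line threshold test on the NDSI. A minimal sketch with hypothetical band arrays (Landsat green and shortwave-infrared reflectances):

```python
import numpy as np

def snow_covered_fraction(green, swir, ndsi_thr=0.4):
    """Classify snow per pixel from optical bands and return the
    snow-covered fraction; 0.4 is the standard threshold that the
    paper re-calibrates locally."""
    ndsi = (green - swir) / (green + swir + 1e-12)  # avoid divide-by-zero
    return (ndsi > ndsi_thr).mean()

# e.g. compare the standard and a locally calibrated threshold:
# sca_std   = snow_covered_fraction(g, s, 0.4)
# sca_local = snow_covered_fraction(g, s, 0.7)  # literature value tested at VF
```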
Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A
2017-03-01
Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.
Robust w-Estimators for Cryo-EM Class Means.
Huang, Chenxi; Tagare, Hemant D
2016-02-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose: It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods: In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results: Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions: Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
Wavelet methodology to improve single unit isolation in primary motor cortex cells.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2015-05-15
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed-form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean-square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.
Calculating the dim light melatonin onset: the impact of threshold and sampling rate.
Molina, Thomas A; Burgess, Helen J
2011-10-01
The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
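The two onset definitions compared here reduce to a simple computation: choose the threshold (the fixed 3 pg/mL, or the "3k" value equal to the mean plus two standard deviations of the first three low daytime samples) and linearly interpolate the time at which the melatonin profile first crosses it. A minimal sketch with hypothetical argument names, not the authors' code:

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold=None):
    """DLMO as the first interpolated threshold crossing. Pass
    threshold=3.0 for the fixed rule, or None for the '3k' rule.
    Assumes the profile rises through the threshold."""
    t = np.asarray(times_h, float)
    y = np.asarray(melatonin_pg_ml, float)
    if threshold is None:
        threshold = y[:3].mean() + 2.0 * y[:3].std(ddof=1)  # '3k' rule
    above = np.where(y >= threshold)[0]
    i = above[above > 0][0]               # first rise above threshold
    f = (threshold - y[i - 1]) / (y[i] - y[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])
```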
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths are, at least in part, preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase along with climate change and adaptation is ongoing, more research is required to understand if and when the thresholds should be modified.
NASA Astrophysics Data System (ADS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-05-01
Models for forecasting rainfall-induced landslides are mostly based on empirical rainfall thresholds identified from rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, and should provide a significant spatial and temporal reference in gauged areas. In this paper, we analyse the reliability of rainfall thresholds based on remotely sensed rainfall and on rain gauge data for the prediction of landslide occurrence. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
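A common concrete form of such thresholds is the power law E = αD^β fitted to the (D,E) pairs of triggering events. The sketch below fits it by least squares in log-log space and then lowers the intercept to a chosen residual quantile, a simple stand-in for the LS/QR/NLS variants compared in the paper (hypothetical argument names):

```python
import numpy as np

def ed_threshold(D_hours, E_mm, exceed_prob=0.05):
    """Fit E = alpha * D**beta by least squares in log-log space, then
    shift the intercept so that only 'exceed_prob' of triggering events
    fall below the curve."""
    logD, logE = np.log10(D_hours), np.log10(E_mm)
    beta, intercept = np.polyfit(logD, logE, 1)
    resid = logE - (intercept + beta * logD)
    shift = np.quantile(resid, exceed_prob)     # e.g. 5th percentile
    alpha = 10.0 ** (intercept + shift)
    return alpha, beta        # threshold curve: E_thr(D) = alpha * D**beta
```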
NASA Technical Reports Server (NTRS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-01-01
Models for forecasting rainfall-induced landslides are mostly based on empirical rainfall thresholds identified from rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, and should provide a significant spatial and temporal reference in gauged areas. In this paper, we analyse the reliability of rainfall thresholds based on remotely sensed rainfall and on rain gauge data for the prediction of landslide occurrence. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.
Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori
2016-10-01
To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P = 5.0 × 10⁻⁸, the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were P_sig = 3.24 × 10⁻⁸ (AFR), 9.26 × 10⁻⁸ (EUR), 1.83 × 10⁻⁷ (AMR), 1.61 × 10⁻⁷ (EAS) and 9.46 × 10⁻⁸ (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded P_sig = 3.25 × 10⁻⁸ (ALL) and 4.20 × 10⁻⁸ (ΔAFR). Our results indicate that the current threshold (P = 5.0 × 10⁻⁸) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples.
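The empirical thresholds reported here correspond to controlling the family-wise error rate in simulated null GWAS: run many null simulations, record the minimum p-value of each, and take the 5% quantile. A toy sketch with independent tests (real genomes have linkage disequilibrium, which is exactly why the estimated thresholds differ across ancestries):

```python
import numpy as np

def empirical_threshold(min_pvalues, alpha=0.05):
    """Genome-wide significance level at which the family-wise error
    rate equals alpha: the alpha-quantile of per-simulation minimum
    p-values."""
    return np.quantile(np.asarray(min_pvalues), alpha)

rng = np.random.default_rng(1)
m, n_sim = 1_000_000, 200                 # tests per null GWAS, simulations
min_p = np.array([rng.uniform(size=m).min() for _ in range(n_sim)])
print(empirical_threshold(min_p))         # ~ 0.05 / m for independent tests
```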
NASA Astrophysics Data System (ADS)
Bitenc, M.; Kieffer, D. S.; Khoshelham, K.
2015-08-01
The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders the extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling of details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimal denoising method such that the corrected TLS data provide a reliable estimate of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which for the second analysis are corrupted by different levels of noise. With such controlled-noise experiments it is possible to evaluate the methods' performance for the different amounts of noise that might be present in TLS data. Qualitative visual checks of the denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
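The automated selection step lends itself to a compact prototype: fit a generalized Pareto distribution (GPD) to the excesses over each candidate threshold and score the fit with the Anderson-Darling statistic computed on the probability-integral transforms. A minimal sketch using scipy (the paper additionally derives significance levels and bootstrap confidence intervals, which are omitted here):

```python
import numpy as np
from scipy.stats import genpareto

def ad_statistic(z):
    """Anderson-Darling statistic for probability-integral transforms z."""
    z = np.sort(np.clip(z, 1e-12, 1.0 - 1e-12))
    n = len(z)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))

def pot_candidates(x, thresholds):
    """Fit a GPD to the excesses over each candidate threshold and
    record the AD statistic; the chosen threshold would be the lowest
    candidate whose fit is acceptable (critical values obtained by
    bootstrapping, as in the paper)."""
    out = []
    for u in thresholds:
        exc = x[x > u] - u
        c, loc, scale = genpareto.fit(exc, floc=0.0)
        out.append((u, ad_statistic(genpareto.cdf(exc, c, loc, scale))))
    return out
```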
Thermal sensitivity and cardiovascular reactivity to stress in healthy males.
Conde-Guzón, Pablo Antonio; Bartolomé-Albistegui, María Teresa; Quirós, Pilar; Cabestrero, Raúl
2011-11-01
This paper examines the association of cardiovascular reactivity with thermal thresholds (detection and unpleasantness). Heart period (HP) and systolic (SBP) and diastolic (DBP) blood pressure of 42 healthy young males were recorded during a cardiovascular reactivity task (a videogame based upon Sidman's avoidance paradigm). Thermal sensitivity, assessed as detection and unpleasantness thresholds to radiant heat on the forearm, was also estimated for each participant. Participants with differential scores in the cardiovascular variables from baseline to task ≥ P65 were considered reactors, and those with differential scores ≤ P35 were considered non-reactors. Significant differences between groups were observed for unpleasantness thresholds in blood pressure (BP) but not in HP. Reactors exhibited significantly higher unpleasantness thresholds than non-reactors. No significant differences in detection thresholds were obtained between groups.
Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R
2010-01-01
Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
Comparison of algorithms of testing for use in automated evaluation of sensation.
Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M
1990-10-01
Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic studies and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just-noticeable-difference (JND) units per second and with null stimuli interspersed (Békésy testing with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluating sensation in patients with neuropathy.
Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi
2018-06-01
Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, a receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or more compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 American Association for Cancer Research.
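The threshold trade-offs reported here come from sweeping the f-Hb cutoff and recomputing test characteristics. A minimal sketch of that sweep, with hypothetical array names (in the study this would be computed within age/sex strata):

```python
import numpy as np

def accuracy_by_cutoff(f_hb, has_cancer, cutoffs):
    """Sensitivity, specificity and test positivity (a proxy for
    colonoscopy workload) at each candidate f-Hb cutoff (ug Hb/g).
    'has_cancer' is a boolean outcome array per participant."""
    out = []
    for c in cutoffs:
        pos = f_hb >= c
        sens = pos[has_cancer].mean()      # detected cancers
        spec = (~pos[~has_cancer]).mean()  # correctly negative
        out.append((c, sens, spec, pos.mean()))
    return out

# e.g. accuracy_by_cutoff(f_hb, cancer, cutoffs=[16, 20, 24])
```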
Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark
2015-02-01
Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability that it is < £30,000 per QALY is 0.97. Additional 'structural' uncertainty suggests, on balance, that the central or best estimate is, if anything, likely to be an overestimate. The health effects of changes in expenditure are greater when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. 
Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups, as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. Funding: the National Institute for Health Research-Medical Research Council Methodology Research Programme.
Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger
2018-05-10
Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data and thus poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., fewer than approximately 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population in which attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm reduced test-retest variability in 73% of the corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent the collection of biased data. This could have important implications for assessment practice by shortening experiments and improving estimates, especially in clinical settings.
Classification criteria and probability risk maps: limitations and perspectives.
Saisana, Michaela; Dubois, Gregoire; Chaloulakou, Archontoula; Spyrellis, Nikolas
2004-03-01
Delineation of polluted zones with respect to regulatory standards, accounting at the same time for the uncertainty of the estimated concentrations, relies on classification criteria that can lead to significantly different pollution risk maps, which, in turn, can depend on the regulatory standard itself. This paper reviews four popular classification criteria related to the violation of a probability threshold or a physical threshold, using annual (1996-2000) nitrogen dioxide concentrations from 40 air monitoring stations in Milan. The relative advantages and practical limitations of each criterion are discussed, and it is shown that some of the criteria are more appropriate for the problem at hand and that the choice of the criterion can be supported by the statistical distribution of the data and/or the regulatory standard. Finally, the polluted area is estimated over the different years and concentration thresholds using the appropriate risk maps as an additional source of uncertainty.
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unresolved the reliability of those methods for setting TMS doses. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of phosphene threshold estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds equally reliably with MOCS or REPT, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
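For the MOCS variant, threshold estimation reduces to fitting a psychometric function to the proportion of trials on which a phosphene was reported at each fixed stimulator intensity, and reading off the 50% point. A minimal sketch using a logistic function (an assumption; the paper's exact fitting choices may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, sigma):
    """Psychometric function: probability of reporting a phosphene."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

def mocs_threshold(intensities, n_seen, n_trials):
    """Method of Constant Stimuli: fit the logistic to the observed
    'seen' proportions; mu is the 50% phosphene threshold."""
    p = n_seen / n_trials
    (mu, sigma), _ = curve_fit(logistic, intensities, p,
                               p0=[np.median(intensities), 5.0])
    return mu
```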
NASA Technical Reports Server (NTRS)
Iversen, J. D.; White, B. R.; Pollack, J. B.; Greeley, R.
1976-01-01
Results are reported for wind-tunnel experiments performed to determine the threshold friction speed of particles with different densities. Experimentally determined threshold speeds are plotted as a function of particle diameter and in terms of threshold parameter vs particle friction Reynolds number. The curves are compared with those of previous experiments, and an A-B curve is plotted to show differences in threshold speed due to differences in size distributions and particle shapes. Effects of particle diameter are investigated, an expression for threshold speed is derived by considering the equilibrium forces acting on a single particle, and other approximately valid expressions are evaluated. It is shown that the assumption of universality of the A-B curve is in error at very low pressures for small particles and that only predictions which take account of both Reynolds number and effects of interparticle forces yield reasonable agreement with experimental data. Effects of nonerodible surface roughness are examined, and threshold speeds computed with allowance for this factor are compared with experimental values. Threshold friction speeds on Mars are then estimated for a surface pressure of 5 mbar, taking into account all the factors considered.
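The single-particle force balance mentioned in the abstract gives the familiar square-root scaling of threshold friction speed with particle weight. Below is a minimal sketch of that baseline relation with a constant coefficient A, which the paper shows breaks down for small particles at low pressure where Reynolds-number and interparticle-force corrections are required; the Mars gas density assumes roughly 5 mbar CO2 at about 220 K:

```python
import numpy as np

def u_star_threshold(d, rho_p, rho_f, g, A=0.1):
    """Bagnold-type threshold friction speed:
    u*_t = A * sqrt((rho_p - rho_f) * g * d / rho_f).
    Constant A is only a crude baseline for large friction
    Reynolds numbers and negligible interparticle forces."""
    return A * np.sqrt((rho_p - rho_f) * g * d / rho_f)

d = 100e-6                                          # 100-micron grains
print(u_star_threshold(d, 2650.0, 1.225, 9.81))     # Earth, quartz sand
print(u_star_threshold(d, 2650.0, 0.012, 3.71))     # Mars, ~5 mbar CO2
```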
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Archambault, L.; Starkschall, G.
2007-11-15
Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S
2007-11-01
Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
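The duty-cycle matching described here has a direct closed form once the external trace is available: the delivery gate threshold is the displacement quantile whose non-exceedance fraction equals the duty cycle fixed at simulation. A minimal sketch assuming the beam is on when displacement is below the threshold (e.g., gating near end-exhale); the paper determines the threshold iteratively, which is equivalent under a monotone duty-cycle/threshold relationship:

```python
import numpy as np

def delivery_gate_threshold(external_trace, duty_cycle):
    """Displacement level below which the external monitor signal
    spends exactly 'duty_cycle' of the time."""
    return np.quantile(np.asarray(external_trace), duty_cycle)

# Duty cycle from simulation: fraction of samples inside both the
# gating phase interval and the simulation gate threshold, e.g.
#   duty = np.mean(in_phase & (displacement <= sim_threshold))
#   thr_delivery = delivery_gate_threshold(external_trace, duty)
```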
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the lowest sound level that is audible to the person, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
Effect of skin-transmitted vibration enhancement on vibrotactile perception.
Tanaka, Yoshihiro; Ueda, Yuichiro; Sano, Akihito
2015-06-01
Vibration on skin elicited by the mechanical interaction of touch between the skin and an object propagates to skin far from the point of contact. This paper investigates the effect of skin-transmitted vibration on vibrotactile perception. To enhance the transmission of high-frequency vibration on the skin, stiff tape was attached to the skin so that the tape covered the bottom surface of the index finger from the periphery of the distal interphalangeal joint to the metacarpophalangeal joint. Two psychophysical experiments with high-frequency vibrotactile stimuli of 250 Hz were conducted. In the psychophysical experiments, discrimination and detection thresholds were estimated and compared between the tape and no-tape (normal bare finger) conditions. A method of limits was applied for the detection threshold estimation, and a discrimination task using a reference stimulus and six test stimuli with different amplitudes was applied for the discrimination threshold estimation. The stimulation was given to the bare fingertips of participants. Results showed that the detection threshold was enhanced by attaching the tape, and the discrimination threshold enhancement from attaching the tape was confirmed for participants who had relatively large discrimination thresholds with the normal bare finger. Skin-transmitted vibration was then measured with an accelerometer during the psychophysical experiments. Results showed that the skin-transmitted vibration when the tape was attached to the skin was larger than that for normal bare skin. There is a correlation between the increase in skin-transmitted vibration and the enhancement of the discrimination threshold.
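To make the detection-threshold procedure concrete, here is a minimal method-of-limits sketch; the data layout (alternating ascending/descending series of stimulus levels with yes/no responses) and function names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def method_of_limits_threshold(series):
    """Estimate a detection threshold from method-of-limits data.

    series: list of (levels, responses) pairs, one per ascending or
    descending run; responses are booleans (stimulus detected or not).
    The threshold is the mean of the level midpoints where the response
    first reverses in each run.
    """
    crossovers = []
    for levels, resp in series:
        levels = np.asarray(levels, float)
        resp = np.asarray(resp, bool)
        flips = np.flatnonzero(resp[1:] != resp[:-1])
        if flips.size:  # midpoint spanning the first response reversal
            i = flips[0]
            crossovers.append(0.5 * (levels[i] + levels[i + 1]))
    return float(np.mean(crossovers))
```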
Forutan, M; Ansari Mahyari, S; Sargolzaei, M
2015-02-01
Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods as days 1-30, 31-180, 181-365, 366-760 and full period (day 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04%, 0.62 to 3.51% and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car's length are presented in this paper, as well as the results achieved by using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors, which were placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were achieved by analyzing changes in the magnitude and in the absolute z-component of the magnetic field as well. The tests, which were performed in four different Earth directions, show differences in the values of estimated lengths. The magnitude-based results for cars driving from South to North were up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results are summarized in tables together with the errors of the estimated lengths. The maximal errors, relative to real lengths, were up to 22%. PMID:28771171
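As a rough illustration of the simple threshold-based method, the sketch below detects a vehicle on each of two AMR sensors a known distance apart, takes speed from the arrival-time difference, and converts occupancy time to length. The baseline estimate, the k·SD presence test, and a single clean vehicle pass are all simplifying assumptions of this sketch, not details from the paper.

```python
import numpy as np

def speed_and_length(sig1, sig2, fs, spacing_m, k=3.0):
    """Two-sensor threshold method: speed from the inter-sensor delay,
    length from speed times the occupancy time on the first sensor."""
    def presence(sig):
        sig = np.asarray(sig, float)
        base = np.median(sig)                  # quiet-road baseline
        noise = np.std(sig[: len(sig) // 10])  # assumed pre-vehicle samples
        return np.abs(sig - base) > k * noise
    p1, p2 = presence(sig1), presence(sig2)
    t1, t2 = np.argmax(p1) / fs, np.argmax(p2) / fs  # first crossings (s)
    speed = spacing_m / (t2 - t1)                    # m/s
    occupancy = np.count_nonzero(p1) / fs            # s over sensor 1
    return speed, speed * occupancy                  # (speed, length in m)
```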
Bayesian estimation of dose thresholds
NASA Technical Reports Server (NTRS)
Groer, P. G.; Carnes, B. A.
2003-01-01
An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
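The marginal-posterior step described here (numerical integration over the other model parameters) can be sketched generically. The grid layout and function names below are assumptions for illustration, and a real analysis would integrate over all of the Weibull model's nuisance parameters rather than the single one shown:

```python
import numpy as np

def marginal_threshold_posterior(log_lik, t_grid, b_grid, prior_t, prior_b):
    """Marginal posterior density of a dose threshold t: integrate the
    joint posterior over one nuisance parameter b by the trapezoid rule,
    then normalize to a density over t_grid."""
    post = np.empty(len(t_grid))
    for i, t in enumerate(t_grid):
        integrand = [np.exp(log_lik(t, b)) * prior_b(b) for b in b_grid]
        post[i] = prior_t(t) * np.trapz(integrand, b_grid)
    return post / np.trapz(post, t_grid)  # unit area over the t grid
```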
Reliability and validity of a short form household food security scale in a Caribbean community.
Gulliford, Martin C; Mahabir, Deepak; Rocke, Brian
2004-06-16
We evaluated the reliability and validity of the short form household food security scale in a different setting from the one in which it was developed. The scale was interview administered to 531 subjects from 286 households in north central Trinidad in Trinidad and Tobago, West Indies. We evaluated the six items by fitting item response theory models to estimate item thresholds, estimating agreement among respondents in the same households and estimating the slope index of income-related inequality (SII) after adjusting for age, sex and ethnicity. Item-score correlations ranged from 0.52 to 0.79 and Cronbach's alpha was 0.87. Item responses gave within-household correlation coefficients ranging from 0.70 to 0.78. Estimated item thresholds (standard errors) from the Rasch model ranged from -2.027 (0.063) for the 'balanced meal' item to 2.251 (0.116) for the 'hungry' item. The 'balanced meal' item had the lowest threshold in each ethnic group even though there was evidence of differential functioning for this item by ethnicity. Relative thresholds of other items were generally consistent with US data. Estimation of the SII, comparing those at the bottom with those at the top of the income scale, gave relative odds for an affirmative response of 3.77 (95% confidence interval 1.40 to 10.2) for the lowest severity item, and 20.8 (2.67 to 162.5) for highest severity item. Food insecurity was associated with reduced consumption of green vegetables after additionally adjusting for income and education (0.52, 0.28 to 0.96). The household food security scale gives reliable and valid responses in this setting. Differing relative item thresholds compared with US data do not require alteration to the cut-points for classification of 'food insecurity without hunger' or 'food insecurity with hunger'. The data provide further evidence that re-evaluation of the 'balanced meal' item is required.
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
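The fixed-threshold ATI itself is a simple accumulation, sketched below in Python. The frame layout, pixel area, and threshold value are assumptions for illustration; the rain-volume slope would come from the radar regression described above.

```python
import numpy as np

def area_time_integral(ir_frames, pixel_area_km2, dt_h, t_thresh_k):
    """Fixed-threshold ATI: total cloud-cluster area colder than the IR
    temperature threshold, accumulated over successive satellite
    snapshots (units: km^2 h)."""
    areas = [np.count_nonzero(frame <= t_thresh_k) * pixel_area_km2
             for frame in ir_frames]
    return float(np.sum(areas)) * dt_h

# Rainfall volume is then estimated as slope * ATI, with the slope
# fitted against radar-derived 'ground-truth' volumes.
```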
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their γ and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a 'background' exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear γ-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in γ and which may or may not assume a threshold are compatible with the data. The 'best' fit, that is, the one with the smallest χ² and largest tail probability, is with a 'linear γ:linear neutron' model which postulates a γ threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
Translucency thresholds for dental materials.
Salas, Marianne; Lucena, Cristina; Herrera, Luis Javier; Yebra, Ana; Della Bona, Alvaro; Pérez, María M
2018-05-12
To determine the translucency acceptability and perceptibility thresholds for dental resin composites using CIEDE2000 and CIELAB color difference formulas. A 30-observer panel performed perceptibility and acceptability judgments on 50 pairs of resin composite discs (diameter: 10 mm; thickness: 1 mm). Disc pair differences for the Translucency Parameter (ΔTP) were calculated using both color difference formulas (ΔTP00 ranged from 0.11 to 7.98, and ΔTPab ranged from 0.01 to 12.79). A Takagi-Sugeno-Kang (TSK) fuzzy approximation was used as the fitting procedure. From the resultant fitting curves, the 95% confidence intervals were estimated and the 50:50% translucency perceptibility and acceptability thresholds (TPT and TAT) were calculated. Differences between thresholds were statistically analyzed using Student t tests (α=0.05). CIEDE2000 50:50% TPT was 0.62 and TAT was 2.62. Corresponding CIELAB values were 1.33 and 4.43, respectively. Translucency perceptibility and acceptability thresholds were significantly different using both color difference formulas (p=0.01 for TPT and p=0.005 for TAT). The CIEDE2000 color difference formula provided a better data fit than the CIELAB formula. The visual translucency difference thresholds determined with the CIEDE2000 color difference formula can serve as reference values in the selection of resin composites and evaluation of their clinical performance. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
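For context, the CIELAB Translucency Parameter behind ΔTPab is the color difference of a single specimen measured over a black and over a white backing; a minimal sketch follows (the CIEDE2000 variant would substitute the CIEDE2000 difference formula for the Euclidean one used here). Function and variable names are illustrative.

```python
import math

def translucency_parameter(lab_black, lab_white):
    """CIELAB Translucency Parameter: Euclidean L*a*b* difference of one
    specimen measured over a black and over a white backing."""
    return math.sqrt(sum((b - w) ** 2 for b, w in zip(lab_black, lab_white)))

# A disc-pair translucency difference, as fitted against the observer
# judgments, is then abs(TP_specimen_1 - TP_specimen_2).
```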
Dotan, Raffy
2012-06-01
The multisession maximal lactate steady-state (MLSS) test is the gold standard for anaerobic threshold (AnT) estimation. However, it is highly impractical, requires a high fitness level, and suffers additional shortcomings. Existing single-session AnT-estimating tests are of compromised validity, reliability, and resolution. The presented reverse lactate threshold test (RLT) is a single-session, AnT-estimating test aimed at avoiding the pitfalls of existing tests. It is based on the novel concept of identifying blood lactate's maximal appearance-disappearance equilibrium by approaching the AnT from higher, rather than from lower, exercise intensities. Rowing, cycling, and running case data (4 recreational and competitive athletes, male and female, aged 17-39 y) are presented. Subjects performed the RLT test and, in a separate session, a single 30-min MLSS-type verification test at the RLT-determined intensity. The RLT and its MLSS verification exhibited exceptional agreement, at 0.5% discrepancy or better. The RLT's training sensitivity was demonstrated by a case of a 2.5-mo training regimen, following which the RLT's 15-W improvement was fully MLSS-verified. The RLT's test-retest reliability was examined in 10 trained and untrained subjects. Test 2 differed from test 1 by only 0.3%, with an intraclass correlation of 0.997. The data suggest that the RLT accurately and reliably estimates AnT (as represented by MLSS verification) with high resolution, in distinctly different sports, and that it is sensitive to training adaptations. Compared with MLSS, the single-session RLT is highly practical, and its lower fitness requirements make it applicable to athletes and untrained individuals alike. Further research is needed to establish RLT's validity and accuracy in larger samples.
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly
2016-12-01
To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation of HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r²=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
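The predictive equation has the form of an ordinary least-squares fit; a minimal sketch follows, assuming 1-D arrays of per-patient measurements. Coefficients are data-dependent, and the names are illustrative rather than the published equation.

```python
import numpy as np

def fit_hr_vt_equation(hr_dt, age, fev1, hr_vt):
    """Fit HR at ventilatory threshold ~ HR at dyspnoea threshold + age
    + FEV1 by least squares; returns [intercept, b_hr, b_age, b_fev1]."""
    hr_dt = np.asarray(hr_dt, float)
    X = np.column_stack([np.ones_like(hr_dt), hr_dt, age, fev1])
    beta, *_ = np.linalg.lstsq(X, hr_vt, rcond=None)
    return beta

# Prediction for a new patient: beta @ [1.0, hr_dt_new, age_new, fev1_new]
```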
Estimating Allee Dynamics before They Can Be Observed: Polar Bears as a Case Study
Molnár, Péter K.; Lewis, Mark A.; Derocher, Andrew E.
2014-01-01
Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species. PMID:24427306
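Given a parameterized per-capita growth function from such a mechanistic model, the Allee threshold is simply the root where growth crosses zero from below; a minimal sketch, with an assumed toy mate-finding growth function rather than the authors' matrix projection model:

```python
from scipy.optimize import brentq

def allee_threshold(growth_rate, n_lo, n_hi):
    """Density at which per-capita growth crosses zero (the Allee
    threshold), via bracketing root search; growth_rate must change
    sign on [n_lo, n_hi]."""
    return brentq(growth_rate, n_lo, n_hi)

# Toy example: mate-finding term r0*N/(N + theta) minus mortality m.
# allee_threshold(lambda n: 0.3 * n / (n + 50.0) - 0.1, 1e-6, 1e3)  # -> 25.0
```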
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
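The estimator itself is one line: the area-average rain rate is the threshold coefficient times the fractional area exceeding the threshold. A minimal sketch, with the coefficient S(τ) assumed to come from a climatological fit:

```python
import numpy as np

def threshold_method_rain(rain_rates, tau_mm_h, s_tau):
    """Area-average rain rate <R> ≈ S(tau) * F(tau), where F(tau) is the
    fraction of the area (here, of gauge readings) with rain rate
    exceeding the threshold tau."""
    f_tau = np.mean(np.asarray(rain_rates, float) > tau_mm_h)
    return s_tau * f_tau
```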
Numerosity but not texture-density discrimination correlates with math ability in children.
Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C
2016-08-01
Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-year-old) children in 3 tasks: numerosity of patterns of relatively sparse, segregatable items (24 dots); numerosity of very dense textured patterns (250 dots); and discrimination of direction of motion. Thresholds in all tasks improved with age, but at different rates, implying the action of different mechanisms: In particular, in young children, thresholds were lower for sparse than textured patterns (the opposite of adults), suggesting earlier maturation of numerosity mechanisms. Importantly, numerosity thresholds for sparse stimuli correlated strongly with math skills, even after controlling for the influence of age, gender and nonverbal IQ. However, neither motion-direction discrimination nor numerosity discrimination of texture patterns showed a significant correlation with math abilities. These results provide further evidence that numerosity and texture-density are perceived by independent neural mechanisms, which develop at different rates; and importantly, only numerosity mechanisms are related to math. As developmental dyscalculia is characterized by a profound deficit in discriminating numerosity, it is fundamental to understand the mechanism behind the discrimination. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick
2017-04-28
Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
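The balance diagnostic driving the 0.10 rule is the absolute standardized mean difference across matched groups; a minimal sketch, assuming covariates stored as observations-by-covariates NumPy arrays:

```python
import numpy as np

def standardized_differences(x_treated, x_control):
    """Absolute standardized mean difference per covariate column,
    using the averaged group variances in the denominator."""
    m1, m0 = x_treated.mean(axis=0), x_control.mean(axis=0)
    pooled_sd = np.sqrt((x_treated.var(axis=0, ddof=1)
                         + x_control.var(axis=0, ddof=1)) / 2.0)
    return np.abs(m1 - m0) / pooled_sd

# Double adjustment: enter exactly those covariates whose standardized
# difference exceeds 0.10 into the outcome (logistic) regression.
```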
Estimating daily climatologies for climate indices derived from climate model data and observations
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof
2015-01-01
Climate indices help to describe the past, present, and the future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
Gustafson, Samantha; Pittman, Andrea; Fanning, Robert
2013-06-01
This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. Threshold and RECD measures obtained with a personal earmold were significantly different from those obtained with a foam tip on repeated measures analyses of variance. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and RECD, equal and accurate estimates of real-ear sound pressure level are obtained.
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study intended to verify if the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below the LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above the LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared to the use of CR. Key points: It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD; however, data on the precision of those AOD measurements are rarely provided. We evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results indicated that including exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners, but the use of sub-threshold regressions may induce an underestimation of AOD compared to complete regressions. PMID:24501560
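The AOD computation itself is compact: extrapolate the linear VO2/speed regression to the supramaximal speed for the total energy demand, then subtract the oxygen actually consumed. A minimal sketch under assumed units (VO2 in ml·kg-1·min-1, readings every dt_min minutes); names are illustrative.

```python
import numpy as np

def accumulated_o2_deficit(speeds, vo2, supra_speed, vo2_supra, dt_min):
    """AOD sketch: fit the VO2/running-speed regression (sub-maximal
    points for STR, all points for CR), extrapolate to the supramaximal
    speed, integrate the demand over the test, subtract consumption."""
    slope, intercept = np.polyfit(speeds, vo2, 1)
    demand_rate = slope * supra_speed + intercept   # ml·kg-1·min-1
    duration = len(vo2_supra) * dt_min              # min
    consumed = np.sum(vo2_supra) * dt_min           # ml·kg-1
    return demand_rate * duration - consumed        # AOD, ml·kg-1
```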
Energy thresholds of discrete breathers in thermal equilibrium and relaxation processes.
Ming, Yi; Ling, Dong-Bo; Li, Hui-Min; Ding, Ze-Jun
2017-06-01
So far, only the energy thresholds of single discrete breathers in nonlinear Hamiltonian systems have been analytically obtained. In this work, the energy thresholds of discrete breathers in thermal equilibrium and the energy thresholds of long-lived discrete breathers which can remain after a long time relaxation are analytically estimated for nonlinear chains. These energy thresholds are size dependent. The energy thresholds of discrete breathers in thermal equilibrium are the same as the previous analytical results for single discrete breathers. The energy thresholds of long-lived discrete breathers in relaxation processes are different from the previous results for single discrete breathers but agree well with the published numerical results known to us. Because real systems are either in thermal equilibrium or in relaxation processes, the obtained results could be important for experimental detection of discrete breathers.
Rodbard, David
2012-10-01
We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and Gaussianity can be tested objectively and used to optimize the transformation. The method performs well with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
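Under the Gaussian assumption, the risk below (or above) any threshold is a normal-CDF lookup on the transformed scale; a minimal sketch, with the log transform chosen here purely for illustration:

```python
from math import log
from statistics import NormalDist

def risk_below(threshold, mean_t, sd_t, transform=log):
    """P(glucose < threshold), assuming the transformed glucose values
    are Gaussian with mean mean_t and SD sd_t (both computed on the
    transformed scale)."""
    z = (transform(threshold) - mean_t) / sd_t
    return NormalDist().cdf(z)

# Risk above a hyperglycemia threshold: 1.0 - risk_below(250, mu, sigma)
```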
Massof, Robert W
2014-10-01
A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig
2018-05-11
Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize two EAT volume estimates. The gold standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190 HU, -30 HU), (-190 HU, -15 HU), (-175 HU, -15 HU) and were analyzed by a semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from the NC dataset standard by a magnitude of 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm3, Δ = 0.6 cm3, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of (-190 HU, -3 HU) provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of (-190 HU, -30 HU). Copyright © 2018. Published by Elsevier B.V.
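The corrected-threshold volume computation reduces to counting voxels inside a shifted fat window; a minimal sketch, assuming a pericardium-masked HU array and a known voxel volume (the +27 HU offset shown is the shift that turns (-190, -30) into the corrected (-190, -3)):

```python
import numpy as np

def eat_volume_ml(hu, voxel_ml, lo=-190.0, hi=-30.0, delta_rd=0.0):
    """EAT volume: voxels whose HU falls in the fat window, with the
    upper bound shifted by the mean contrast radiodensity offset."""
    hu = np.asarray(hu, float)
    mask = (hu >= lo) & (hu <= hi + delta_rd)
    return float(np.count_nonzero(mask)) * voxel_ml

# Non-contrast scan: eat_volume_ml(hu, v)                 # window (-190, -30)
# Contrast scan:     eat_volume_ml(hu, v, delta_rd=27.0)  # window (-190, -3)
```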
Perceptual color difference metric including a CSF based on the perception threshold
NASA Astrophysics Data System (ADS)
Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine
2008-01-01
The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, applying adapted tools, and so on. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior for the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The obtained results are quite different from those of the standard approaches, as the chromatic CSFs exhibit band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by the s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
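The underlying optimization is easy to sketch: evaluate profit as gross income minus control cost over a grid of candidate thresholds and keep the maximizer. The yield and cost functions below are assumed inputs (in practice, the estimated reduced-form equations):

```python
import numpy as np

def optimal_threshold(thresholds, yield_fn, cost_fn, price):
    """Profit-maximizing economic threshold by direct grid search."""
    profits = np.array([price * yield_fn(t) - cost_fn(t)
                        for t in thresholds])
    best = int(np.argmax(profits))
    return thresholds[best], profits[best]
```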
Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation
Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin
2012-01-01
Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721
Flood return level analysis of Peaks over Threshold series under changing climate
NASA Astrophysics Data System (ADS)
Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.
2016-12-01
Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis with the stationarity assumption has been challenged by changing environments. A method that takes the nonstationary context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. For POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson assumption. Flood return levels were extrapolated in a nonstationary context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed. The difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
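In the stationary case the POT return level has a closed form once the exceedance model is fixed; a minimal sketch, assuming GPD-distributed exceedance magnitudes and an annual exceedance rate taken from either the Poisson or the NB arrival model (the arrival choice changes the rate's estimate and uncertainty, not the formula):

```python
from math import log

def pot_return_level(u, sigma, xi, rate_per_year, T_years):
    """T-year return level for a POT model with threshold u and
    GPD(sigma, xi) exceedances arriving rate_per_year times per year."""
    if abs(xi) < 1e-9:                 # xi -> 0: exponential exceedances
        return u + sigma * log(rate_per_year * T_years)
    return u + (sigma / xi) * ((rate_per_year * T_years) ** xi - 1.0)
```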
The hockey-stick method to estimate evening dim light melatonin onset (DLMO) in humans.
Danilenko, Konstantin V; Verevkin, Evgeniy G; Antyufeev, Viktor S; Wirz-Justice, Anna; Cajochen, Christian
2014-04-01
The onset of melatonin secretion in the evening is the most reliable and most widely used index of circadian timing in humans. Saliva (or plasma) is usually sampled every 0.5-1 hours under dim-light conditions in the evening 5-6 hours before usual bedtime to assess the dim-light melatonin onset (DLMO). For many years, attempts have been made to find a reliable objective determination of melatonin onset time either by fixed or dynamic threshold approaches. The here-developed hockey-stick algorithm, used as an interactive computer-based approach, fits the evening melatonin profile by a piecewise linear-parabolic function represented as a straight line switching to the branch of a parabola. The switch point is considered to reliably estimate melatonin rise time. We applied the hockey-stick method to 109 half-hourly melatonin profiles to assess the DLMOs and compared these estimates to visual ratings from three experts in the field. The DLMOs of 103 profiles were considered to be clearly quantifiable. The hockey-stick DLMO estimates were on average 4 minutes earlier than the experts' estimates, with a range of -27 to +13 minutes; in 47% of the cases the difference fell within ±5 minutes, in 98% within -20 to +13 minutes. The raters' and hockey-stick estimates showed poor accordance with DLMOs defined by threshold methods. Thus, the hockey-stick algorithm is a reliable objective method to estimate melatonin rise time, which does not depend on a threshold value and is free from errors arising from differences in subjective circadian phase estimates. The method is available as a computerized program that can be easily used in research settings and clinical practice either for salivary or plasma melatonin values.
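The hockey-stick fit can be sketched compactly: model the evening profile as a straight line that switches to a parabolic branch at an unknown time t0, fit by least squares for each candidate t0, and take the best-fitting switch point as the melatonin rise time. This is an illustrative reconstruction under those assumptions, not the authors' released program:

```python
import numpy as np

def hockey_stick_onset(t, y):
    """Fit y(t) = a + b*t + c*max(0, t - t0)^2 over a grid of candidate
    switch points t0; return the t0 with the smallest squared error."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    best_sse, best_t0 = np.inf, None
    for t0 in t[1:-1]:                      # interior samples only
        X = np.column_stack([np.ones_like(t), t,
                             np.maximum(0.0, t - t0) ** 2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if sse < best_sse:
            best_sse, best_t0 = sse, t0
    return best_t0
```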
Watson, Paul J; Latif, R Khalid; Rowbotham, David J
2005-11-01
The expression and report of pain is influenced by social environment and culture. Previous studies have suggested ethnically determined differences in the report of pain threshold, intensity and affect. The influence of ethnic differences between White British and South Asians has remained unexplored. Twenty age-matched male volunteers in each group underwent evaluation. Cold and warm perception and cold and heat thresholds were assessed using an ascending method of limits. Magnitude estimation of pain unpleasantness and pain intensity was investigated with thermal stimuli of 46, 47, 48 and 49 degrees C. Subjects also completed a pain anxiety questionnaire. Data were analysed using t-tests, Mann-Whitney tests and repeated-measures analysis of variance as appropriate. There were no differences in cold and warm perception between the two groups. There was a statistically significant difference between the two groups for heat pain threshold (P=0.006), and heat pain intensity demonstrated a significant effect for ethnicity (F=13.84, P=0.001). Although no group differences emerged for cold pain threshold and heat unpleasantness, South Asians demonstrated lower cold pain thresholds and reported more unpleasantness at all temperatures, but this was not statistically significant. Our study shows that ethnicity plays an important role in heat pain threshold and pain report; South Asian males demonstrated lower pain thresholds and higher pain report when compared with matched White British males. There were no differences in pain anxiety between the two groups, and no correlations were identified between pain and pain anxiety. Haemodynamic measures and anthropometry did not explain group differences.
Estimating sensitivity and specificity for technology assessment based on observer studies.
Nishikawa, Robert M; Pesce, Lorenzo L
2013-07-01
The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data from a question that directly asked the radiologists their biopsy recommendations, which was used as the "truth," because it is the actual recall decision, thus it is their subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates using thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
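The comparison made here is easy to reproduce: threshold the confidence ratings at every cutpoint and score sensitivity/specificity against the reader's explicit recommendation. A minimal sketch with assumed array inputs:

```python
import numpy as np

def sens_spec_by_cutpoint(ratings, recommend_biopsy):
    """Sensitivity and specificity at each possible rating cutpoint,
    scored against the reader's own biopsy recommendation ('truth')."""
    r = np.asarray(ratings, float)
    truth = np.asarray(recommend_biopsy, bool)
    results = []
    for c in np.unique(r):
        positive = r >= c
        sens = float(np.mean(positive[truth]))
        spec = float(np.mean(~positive[~truth]))
        results.append((float(c), sens, spec))
    return results
```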
Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.
2017-01-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocities values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
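One plausible reading of the STP search, sketched under strong simplifying assumptions (a 2-D motion field sampled on a positions-by-time grid, a single propagating wavefront): take the lateral position of maximum motion at each time instant and regress position on time. This is an illustration, not the authors' implementation.

```python
import numpy as np

def shear_wave_speed_stp(motion, x_m, t_s, amp_thresh=None):
    """Group speed from a motion field of shape (n_positions, n_times):
    regress the spatial-peak location on time; the slope is the speed
    (m/s). If amp_thresh is set, low-amplitude samples are zeroed first,
    mimicking the STTH amplitude filter."""
    m = np.array(motion, float)
    x_m = np.asarray(x_m, float)
    if amp_thresh is not None:
        m[m < amp_thresh] = 0.0
    peak_x = x_m[np.argmax(m, axis=0)]     # peak position per time step
    speed, _ = np.polyfit(t_s, peak_x, 1)
    return speed
```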
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers of wind tunnel strain-gage balance data was developed. The new threshold is defined as the product between the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
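The threshold definition reduces to a one-line computation; a minimal sketch (assumed units: gage output limit in microV/V, sensitivity in microV/V per unit load, so the threshold comes out in load units):

```python
def residual_threshold(primary_gage_sensitivity, output_limit=2.5):
    """Load residual outlier threshold: empirical output limit divided
    by |primary gage sensitivity|. Use output_limit=0.5 when comparing
    repeat load points, per the reduced limit described above."""
    return output_limit / abs(primary_gage_sensitivity)
```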
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann
2013-01-01
Background There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2–5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS–Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2–5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4 percentage (CD4%) <25%. Methods and Findings ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm3 (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1%–6.5%) (no ART) to 2.1% (95% CI: 1.3%–3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%–3.5%) and 2.2% (95% CI: 1.4%–3.5%) after 3 y, respectively. The analysis was limited by loss to follow-up and the unavailability of WHO staging data. Conclusions The results indicate no mortality difference for up to 3 y between ART initiation irrespective of CD4 value and ART initiation at a threshold of CD4 count <750 cells/mm3 or CD4% <25%, but there are overall higher point estimates for mortality when ART is initiated at lower CD4 values. PMID:24260029
Lovvorn, James R.; De La Cruz, Susan; Takekawa, John Y.; Shaskey, Laura E.; Richman, Samantha E.
2013-01-01
Planning for marine conservation often requires estimates of the amount of habitat needed to support assemblages of interacting species. During winter in subtidal San Pablo Bay, California, the 3 main diving duck species are lesser scaup Aythya affinis (LESC), greater scaup A. marila (GRSC), and surf scoter Melanitta perspicillata (SUSC), which all feed almost entirely on the bivalve Corbula amurensis. Decreased body mass and fat, increased foraging effort, and major departures of these birds appeared to result from food limitation. Broad overlap in prey size, water depth, and location suggested that the 3 species responded similarly to availability of the same prey. However, an energetics model that accounts for differing body size, locomotor mode, and dive behavior indicated that each species will become limited at different stages of prey depletion in the order SUSC, then GRSC, then LESC. Depending on year, 35 to 66% of the energy in Corbula standing stocks was below estimated threshold densities for profitable foraging. Ectothermic predators, especially flounders and sturgeons, could reduce excess carrying capacity for different duck species by 4 to 10%. A substantial quantity of prey above profitability thresholds was not exploited before most ducks left San Pablo Bay. Such pre-depletion departure has been attributed in other taxa to foraging aggression. However, in these diving ducks that showed no overt aggression, this pattern may result from high costs of locating all adequate prey patches, resulting reliance on existing flocks to find food, and propensity to stay near dense flocks to avoid avian predation. For interacting species assemblages, modeling profitability thresholds can indicate the species most vulnerable to food declines. However, estimates of total habitat needed require better understanding of factors affecting the amount of prey above thresholds that is not depleted before the predators move elsewhere.
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of these observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series over each time interval during which the threshold is exceeded. The distribution of burst sizes is two-component, with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has an exponential functional form. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables changes between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently, how the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277. [2] Chapman, S. C., D. A. Stainforth, N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
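A minimal sketch of the burst construction described above, in Python; the synthetic series is a stand-in for the AE index, not real data:

```python
import numpy as np

def burst_sizes(series, threshold):
    """Return the sizes of bursts in `series`: for each contiguous interval
    where the series exceeds `threshold`, sum the excess above the threshold."""
    sizes, current, in_burst = [], 0.0, False
    for x in series:
        if x > threshold:
            current += x - threshold
            in_burst = True
        elif in_burst:
            sizes.append(current)
            current, in_burst = 0.0, False
    if in_burst:
        sizes.append(current)  # close a burst still open at the end of the record
    return np.array(sizes)

# Synthetic stand-in for an AE-like time series (nT):
rng = np.random.default_rng(0)
ae = np.abs(rng.normal(300.0, 400.0, 100_000))
print(burst_sizes(ae, threshold=1000.0).mean())  # mean burst size above 1000 nT
```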
Smith, Jennifer L; Sturrock, Hugh J W; Olives, Casey; Solomon, Anthony W; Brooker, Simon J
2013-01-01
Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates the use of a simulated approach in addressing operational research questions for trachoma but also other NTDs.
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of thermal thresholds is used clinically for the evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, the coefficient of variation (CV), the coefficient of repeatability (CR), the intraclass correlation coefficient (ICC), the mean difference between sessions (S1-S2diff), the standard error of measurement (SEM) and the minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates ranged from 0.74°C to 1.06°C for warm thresholds and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and between 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
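The reliability indices named above have standard formulas; a sketch assuming SEM = SD*sqrt(1 - ICC), MDC = 1.96*sqrt(2)*SEM and CR = 1.96*SD of the between-session differences (the paper's exact variants may differ):

```python
import numpy as np

def reliability_summary(session1, session2):
    """Test-retest summaries for two sessions of threshold measurements (degC)."""
    s1, s2 = np.asarray(session1, float), np.asarray(session2, float)
    diff = s1 - s2
    sd = np.std(np.concatenate([s1, s2]), ddof=1)
    # A Pearson correlation stands in for ICC(2,1) to keep the sketch short;
    # a variance-components ICC would be more faithful to the paper.
    icc = np.corrcoef(s1, s2)[0, 1]
    sem = sd * np.sqrt(1.0 - icc)
    return {"S1-S2diff": diff.mean(),
            "CR": 1.96 * np.std(diff, ddof=1),
            "SEM": sem,
            "MDC": 1.96 * np.sqrt(2.0) * sem}

print(reliability_summary([1.2, 0.8, 1.5, 1.1], [1.0, 0.9, 1.4, 1.2]))
```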
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2018-01-01
The ranking trait is used as a selection criterion for competition horses to estimate racing performance. In the literature, the most common approaches to estimating breeding values are linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach is able to account for the race effect (the competitive level of the horses that participate in the same race), suggesting better prediction accuracy of breeding values for the ranking trait. The aim of this study was to compare the predictive ability of linear, threshold and Thurstonian approaches for the genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database used contained 4065 ranking records from 966 horses, and the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability of around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for the threshold, 0.58 for the linear and 0.60 for the Thurstonian approach. Although no significant differences were found between models within approaches, the best genetic model included the rider and rider-horse random effects for the threshold approach, only rider and environmental permanent effects for the linear approach, and all random effects for the Thurstonian approach. The absolute correlations of predicted breeding values among models were highest between the threshold and Thurstonian approaches: 0.90, 0.91 and 0.88 for all animals, the top 20% and the top 5% best animals, respectively. For rank correlations these figures were 0.85, 0.84 and 0.86. The lowest values were those between the linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for routine genetic evaluations of ranking in endurance horses.
Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent
2005-01-01
Background The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time by which the survival data of patients dying from the specific cancer are mostly covered, leaving less than 2.25% uncovered, which is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed the lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of patients who died of their disease were verified to follow different lognormal distributions for 42 of the 49 cancer sites. The threshold years validated for statistical cure varied for different cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites were found to match the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained, because the SEER data do not provide sufficiently long follow-up. Conclusion The present study suggests that a certain threshold year must be reached before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, which highlights the need for long-term follow-up of these patients. PMID:15904508
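If the "less than 2.25% uncovered" criterion corresponds to the upper two-sigma tail of the fitted lognormal, the threshold year follows directly from the fitted parameters; that reading, and the function below, are our assumptions:

```python
import numpy as np
from scipy.stats import norm

def threshold_year(log_times=None, mu=None, sigma=None, uncovered=0.0225):
    """Follow-up time that covers all but `uncovered` (2.25%) of the fitted
    lognormal survival distribution of patients who died of their cancer.
    mu, sigma are the mean and SD of log survival time (log years)."""
    if mu is None:
        log_times = np.asarray(log_times, float)
        mu, sigma = log_times.mean(), log_times.std(ddof=1)
    # 1 - 0.0225 quantile of a normal is mu + ~2.0*sigma on the log scale.
    return float(np.exp(norm.ppf(1.0 - uncovered, loc=mu, scale=sigma)))

# e.g. a fit with mu = 0.5, sigma = 0.6 (log years) gives ~5.5 years:
print(threshold_year(mu=0.5, sigma=0.6))
```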
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine the validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. This was a validation study comparing actigraphy against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from the mean bias with 95% confidence limits, the Pearson product-moment correlation and the associated standard error of the estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5min), sleep efficiency (1.8%) and wake after sleep onset (-4.1min), whereas the Low threshold had the smallest bias (7.5min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5min). The standard error of the estimate was similar across all thresholds: total sleep time ∼25min, sleep efficiency ∼4.5%, wake after sleep onset ∼21min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
Baker, Simon; Priest, Patricia; Jackson, Rod
2000-01-01
Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90-169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15% the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to lower blood pressure in primary care. PMID:10710577
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error-prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate that greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. The highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa in which multiple age cohorts coexist sympatrically. Where applicable, this method of aging individual fish is relatively quick to implement and can avoid the ager interpretation bias common in scale-based aging.
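A sketch of the mixture-model approach on hypothetical fork lengths; defining the age-discriminating threshold as the equal-posterior crossing point between components is our assumption, not necessarily the optimality criterion used by the authors:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical fork lengths (mm) for two juvenile age cohorts, not real data.
rng = np.random.default_rng(1)
lengths = np.concatenate([rng.normal(55, 6, 300),
                          rng.normal(90, 8, 150)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths)
means = gmm.means_.ravel()
sds = np.sqrt(gmm.covariances_.ravel())
order = np.argsort(means)
(m0, m1), (s0, s1), (w0, w1) = means[order], sds[order], gmm.weights_[order]

# Threshold: the length between the component means where the weighted
# component densities are equal (posterior probability of each age = 0.5).
f = lambda x: w0 * norm.pdf(x, m0, s0) - w1 * norm.pdf(x, m1, s1)
threshold = brentq(f, m0, m1)
print(f"age-0/age-1 length threshold ~ {threshold:.1f} mm")
```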
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
NASA Astrophysics Data System (ADS)
Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao
2018-06-01
Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in landscape conditions, the optimum threshold varied by station, ranging from 1.5 to 4.0 °C; 19 stations, 17 stations and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C respectively, together accounting for 90% of all stations. Compared with using a single temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT or an air temperature below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating the precipitation phase in other regions.
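A minimal sketch of the AT method and one plausible reading of the Ep index; the paper's exact definition of Ep is not given here, so the second function is an assumption:

```python
import numpy as np

def discriminate_snowfall(daily_air_temp, threshold=2.5):
    """AT method: class a precipitation day as snowfall when the daily air
    temperature is at or below the threshold (2.5 degC was the basin-wide
    optimum reported here; per-station optima ranged 1.5-4.0 degC)."""
    return np.asarray(daily_air_temp) <= threshold

def error_percentage(predicted_snow_days, observed_snow_days):
    """Ep as we read it: percentage error in the number of discriminated
    snowfall days relative to the observed number."""
    p, o = np.sum(predicted_snow_days), np.sum(observed_snow_days)
    return 100.0 * abs(p - o) / o

temps = [-3.1, 0.4, 2.2, 5.0]           # daily air temperatures (degC)
obs = [True, True, True, False]         # observed phase (snow?)
print(error_percentage(discriminate_snowfall(temps), obs))  # 0.0
```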
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
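A sketch of the SDT quantities involved: d' from Yes-No hit and false-alarm rates, and the sensitivity threshold under an assumed power-function psychometric form (the form and parameter names are ours, not necessarily those of the qYN/qFC methods):

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Yes-No sensitivity from SDT: d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def sensitivity_threshold(alpha, beta, target=1.0):
    """Invert an assumed psychometric form d'(x) = (x/alpha)**beta for the
    signal intensity at which d' reaches the target level (here d' = 1)."""
    return alpha * target ** (1.0 / beta)

print(d_prime(0.84, 0.16))             # ~2.0
print(sensitivity_threshold(0.05, 2))  # alpha itself when target = 1
```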
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts' manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
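Of the three metrics, the Dice similarity coefficient illustrates the threshold optimization most directly; a sketch with toy data (the beta-mixture modelling and EM-derived gold standard are omitted):

```python
import numpy as np

def dice(seg, gold):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between binary masks."""
    a, b = np.asarray(seg, bool), np.asarray(gold, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def best_threshold(prob_map, gold, thresholds=np.linspace(0.01, 0.99, 99)):
    """Decision threshold on a probabilistic segmentation that maximises
    Dice against the (composite) gold standard."""
    scores = [dice(prob_map >= t, gold) for t in thresholds]
    i = int(np.argmax(scores))
    return thresholds[i], scores[i]

# Toy probabilistic segmentation and ground truth:
rng = np.random.default_rng(0)
gold = rng.random((64, 64)) > 0.7
prob = np.clip(gold * 0.6 + rng.random((64, 64)) * 0.4, 0, 1)
print(best_threshold(prob, gold))
```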
Estimation of risks by chemicals produced during laser pyrolysis of tissues
NASA Astrophysics Data System (ADS)
Weber, Lothar W.; Spleiss, Martin
1995-01-01
The use of laser systems in minimally invasive surgery results in the formation of a laser aerosol containing volatile organic compounds of possible health risk. Based on the currently identified chemical substances, an overview of the possible associated risks to human health is given. The class of different identified alkylnitriles seems to be a laser-specific toxicological problem. Other groups of chemicals belong to the Maillard reaction type, the fatty acid pyrolysis type, or thermally activated chemolysis. The possible exposure ranges of identified substances are discussed in relation to the available threshold limit values. A rough estimation yields an exposure range of less than 1/100 for almost all substances with given human threshold limit values, without regard to possible interactions. For most identified alkylnitriles, alkenes, and heterocycles, no threshold limit values are given, for lack of practical need until now. Pyrolysis of organs anaesthetized with isoflurane gave no indication of additional pyrolysis products formed by fragment interactions with the resulting VOCs. Measurements of pyrolysis gases detected small amounts of NO, with additional NO2 formation under plasma conditions.
England, John F.; Salas, José D.; Jarrett, Robert D.
2003-01-01
The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
Rosen, Sophia; Davidov, Ori
2012-07-20
Multivariate outcomes are often measured longitudinally. For example, in hearing loss studies, hearing thresholds for each subject are measured repeatedly over time at several frequencies. Thus, each patient is associated with a multivariate longitudinal outcome. The multivariate mixed-effects model is a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, it is known that hearing thresholds, at every frequency, increase with age. Moreover, this age-related threshold elevation is monotone in frequency, that is, the higher the frequency, the higher, on average, is the rate of threshold elevation. This means that there is a natural ordering among the different frequencies in the rate of hearing loss. In practice, this amounts to imposing a set of constraints on the different frequencies' regression coefficients modeling the mean effect of time and age at entry to the study on hearing thresholds. The aforementioned constraints should be accounted for in the analysis. The result is a multivariate longitudinal model with restricted parameters. We propose estimation and testing procedures for such models. We show that ignoring the constraints may lead to misleading inferences regarding the direction and the magnitude of various effects. Moreover, simulations show that incorporating the constraints substantially improves the mean squared error of the estimates and the power of the tests. We used this methodology to analyze a real hearing loss study. Copyright © 2012 John Wiley & Sons, Ltd.
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
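A sketch of the thresholding step; collapsing the manually selected per-slice thresholds into a single global value is our simplification of the semi-automated scheme:

```python
import numpy as np

def percent_density(volume, breast_mask, dense_threshold):
    """PD of a reconstructed DBT volume: percentage of voxels inside the
    breast mask (background and pectoral muscle excluded) whose intensity
    meets the dense-tissue threshold."""
    inside = volume[breast_mask]
    return 100.0 * np.count_nonzero(inside >= dense_threshold) / inside.size

# Toy 3D volume with a mask covering half the voxels:
rng = np.random.default_rng(0)
vol = rng.random((40, 128, 128))
mask = np.zeros_like(vol, dtype=bool)
mask[:, :, :64] = True
print(percent_density(vol, mask, dense_threshold=0.8))  # ~20%
```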
Psychophysical estimation of the effects of aging on direction-of-heading judgments
NASA Astrophysics Data System (ADS)
Raghuram, Aparna; Lakshminarayanan, Vasudevan
2011-11-01
We conducted psychophysical experiments on direction-of-heading judgments with older and younger subjects. Subjects estimated heading directions for translation perpendicular to the vertical (frontoparallel) plane; we found that heading judgments were affected by age. Increasing the random dot density in the stimulus from 24 to 400 dots did not significantly improve thresholds, and older subjects began performing worse at the highest density condition of 400 dots. The speed of the radial motion was important: headings were more difficult to judge with slower radial motion than with faster radial motion, for which the focus of expansion was easier to locate owing to the larger displacement of dots. Gender differences indicated that older women had a higher threshold than older men, although this was only significant for the faster simulated radial speed. A general trend of women having a higher threshold than men was noticed.
Zhao, Tuo; Liu, Han
2016-01-01
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
NASA Astrophysics Data System (ADS)
Balaguer-Puig, Matilde; Marqués-Mateu, Ángel; Lerma, José Luis; Ibáñez-Asensio, Sara
2017-10-01
The quantitative estimation of changes in terrain surfaces caused by water erosion can be carried out from precise descriptions of surfaces given by digital elevation models (DEMs). Some stages of water erosion research are conducted in the laboratory using rainfall simulators and soil boxes with areas of less than 1 m2. Under these conditions, erosive processes can lead to very small surface variations, and high precision DEMs are needed to account for differences measured in millimetres. In this paper, we used a photogrammetric Structure from Motion (SfM) technique to build DEMs of a 0.5 m2 soil box to monitor several simulated rainfall episodes in the laboratory. The technique of DEM of difference (DoD) was then applied using GIS tools to compute estimates of volumetric changes between each pair of rainfall episodes. The aim was to classify the soil surface into three classes: erosion areas, deposition areas, and unchanged or neutral areas, and to quantify the volume of soil that was eroded and deposited. We used a thresholding criterion based on the estimated error of the difference of DEMs, which in turn was obtained from the root mean square errors of the individual DEMs. Experimental tests showed that the choice of different threshold values in the DoD can lead to volume differences as large as 60% when compared to the direct volumetric difference; the choice of this threshold is therefore a key point in the method. In parallel to the photogrammetric work, we collected sediments from each rain episode and obtained a series of corresponding measured sediment yields. Computed and measured sediment yields were significantly correlated, especially when considering the accumulated value of the five simulations; the computed sediment yield was 13% greater than the measured sediment yield. The procedure presented in this paper proved to be suitable for the determination of sediment yields in rainfall-driven soil erosion experiments conducted in the laboratory.
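A sketch of the DoD computation with an error-propagated level of detection; the 1.96 multiplier and the form lod = t*sqrt(rmse1^2 + rmse2^2) are common choices in the DoD literature rather than the paper's stated procedure:

```python
import numpy as np

def dod_change(dem_before, dem_after, rmse_before, rmse_after, cell_area, t=1.96):
    """DEM of difference with a minimum level of detection propagated from
    the individual DEM errors.  Cells whose change is below the LoD are
    treated as neutral; negative changes are erosion, positive deposition."""
    dod = dem_after - dem_before
    lod = t * np.hypot(rmse_before, rmse_after)
    erosion = -dod[dod < -lod].sum() * cell_area    # m^3 removed
    deposition = dod[dod > lod].sum() * cell_area   # m^3 added
    return erosion, deposition, erosion - deposition  # net loss of soil

# Toy DEMs of a small soil box (heights in metres, 5 mm cells):
rng = np.random.default_rng(0)
before = rng.random((100, 100)) * 0.002
after = before + rng.normal(-0.0005, 0.001, before.shape)
print(dod_change(before, after, 0.0004, 0.0004, cell_area=0.005**2))
```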
Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.
2013-01-01
Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479
Reasoning in psychosis: risky but not necessarily hasty.
Moritz, Steffen; Scheu, Florian; Andreou, Christina; Pfueller, Ute; Weisbrod, Matthias; Roesch-Ely, Daniela
2016-01-01
A liberal acceptance (LA) threshold for hypotheses has been put forward to explain the well-replicated "jumping to conclusions" (JTC) bias in psychosis, particularly in patients with paranoid symptoms. According to this account, schizophrenia patients rest their decisions on lower subjective probability estimates. The initial formulation of the LA account also predicts an absence of the JTC bias under high task ambiguity (i.e., if more than one response option surpasses the subjective acceptance threshold). Schizophrenia patients (n = 62) with current or former delusions and healthy controls (n = 30) were compared on six scenarios of a variant of the beads task paradigm. Decision-making was assessed under low and high task ambiguity. Along with decision judgments (optional), participants were required to provide probability estimates for each option in order to determine decision thresholds (i.e., the probability the individual deems sufficient for a decision). In line with the LA account, schizophrenia patients showed a lowered decision threshold compared to controls (82% vs. 93%) which predicted both more errors and less draws to decisions. Group differences on thresholds were comparable across conditions. At the same time, patients did not show hasty decision-making, reflecting overall lowered probability estimates in patients. Results confirm core predictions derived from the LA account. Our results may (partly) explain why hasty decision-making is sometimes aggravated and sometimes abolished in psychosis. The proneness to make risky decisions may contribute to the pathogenesis of psychosis. A revised LA account is put forward.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter, EnKF, a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity differ. It should be noted that besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and it should be performed wisely during the early assimilation cycles. The proposed scheme of adaptive thresholding outperforms the other methods for subsurface characterization of the underlying benchmarks.
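The thresholding functions named above have standard elementwise closed forms (in the elementwise case, lasso coincides with soft thresholding); a sketch using the usual SCAD rule with a = 3.7:

```python
import numpy as np

def hard_threshold(x, lam):
    """Keep entries whose magnitude exceeds lam, zero the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    """Shrink all entries toward zero by lam and zero the small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad_threshold(x, lam, a=3.7):
    """SCAD rule (Fan & Li): soft thresholding near lam, a smooth transition
    up to a*lam, and no shrinkage beyond a*lam, avoiding bias on large entries."""
    ax = np.abs(x)
    return np.where(ax <= 2.0 * lam,
                    soft_threshold(x, lam),
                    np.where(ax <= a * lam,
                             ((a - 1.0) * x - np.sign(x) * a * lam) / (a - 2.0),
                             x))

# Small spurious covariance terms vanish while the strong -0.6 entry is kept:
cov = np.array([[1.0, 0.04, -0.6], [0.04, 1.0, 0.02], [-0.6, 0.02, 1.0]])
print(scad_threshold(cov, lam=0.1))
```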
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared spectrum (SMSP) jamming is effective in countering linear frequency modulation (LFM) radar. Based on the difference between the time-frequency distributions of the jamming and the echo, a jamming suppression method based on the generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Then, the time-frequency image and the related gray scale image are obtained based on the GST. Finally, the Tsallis cross entropy is used to compute the optimized segmentation threshold, and the jamming suppression filter is constructed based on this threshold. The simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP.
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
Precise and efficient noise variance estimation is very important when the wavelet transform is used to analyze signals and extract signal features. To address the problem that the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study adopts a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Building on this noise variance estimate, a novel improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and from the noise variance estimation algorithm and the improved threshold function together, a novel wavelet threshold de-noising method is derived. The method is tested and validated using random signals and bench-test data from an electro-mechanical transmission system. The results indicate that the proposed de-noising method performs well on these signals: it effectively eliminates the interference of transient signals in the voltage, current, and oil pressure channels while favorably preserving the dynamic characteristics of the signals.
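The abstract does not give the paper's exact threshold function, so the sketch below uses one common hard/soft compromise: an exponential shrinkage term that tends to the soft rule as `alpha` goes to 0 and to the hard rule as `alpha` grows, together with the usual MAD-based noise estimate and universal threshold. All parameter values here are assumptions, not the authors' choices.

```python
# Hedged sketch of a hard/soft compromise threshold function,
# plus a MAD-based noise estimate and the universal threshold.
import numpy as np

def improved_threshold(w, lam, alpha=2.0):
    # shrinkage decays away from lam: alpha -> 0 gives soft, alpha -> inf gives hard
    shrink = lam * np.exp(-alpha * (np.abs(w) - lam))
    return np.where(np.abs(w) >= lam, np.sign(w) * (np.abs(w) - shrink), 0.0)

rng = np.random.default_rng(1)
coeffs = rng.standard_normal(1024)                 # stand-in wavelet coefficients
sigma = np.median(np.abs(coeffs)) / 0.6745         # robust noise estimate (MAD)
lam = sigma * np.sqrt(2 * np.log(coeffs.size))     # universal threshold
denoised = improved_threshold(coeffs, lam)
```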
Meta-analysis of diagnostic accuracy studies in mental health
Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J
2015-01-01
Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
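The threshold effect described in the Results can be made concrete with a toy calculation (invented questionnaire scores, not data from the review): raising the positivity cut-off lowers sensitivity and raises specificity.

```python
# Toy illustration of the sensitivity/specificity trade-off across cut-offs.
import numpy as np

rng = np.random.default_rng(42)
score_cases = rng.normal(12, 3, 500)      # hypothetical scores, with condition
score_controls = rng.normal(7, 3, 500)    # hypothetical scores, without condition

for cutoff in (6, 8, 10, 12):
    sens = np.mean(score_cases >= cutoff)       # P(test+ | condition)
    spec = np.mean(score_controls < cutoff)     # P(test- | no condition)
    print(f"cutoff {cutoff:>2}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```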
Pfiffner, Flurin; Kompis, Martin; Stieger, Christof
2009-10-01
To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review. Tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds as well as speech understanding using German 2-digit numbers and monosyllabic words were measured and analyzed. Correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with the BC threshold of the better ear (correlation coefficients, r2 = 0.237 to 0.419; p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
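For context, the EM step for match probabilities can be sketched in a few lines under the classic Fellegi-Sunter conditional-independence model. This is a generic illustration of the technique the abstract names, not the authors' Bloom-filter pipeline or their threshold-estimation extension; the agreement matrix and starting values are placeholders.

```python
# Fellegi-Sunter style EM for m- and u-probabilities (sketch).
import numpy as np

def em_match_probs(gamma, p=0.1, m0=0.9, u0=0.1, iters=50):
    """gamma: (n_pairs, n_fields) 0/1 agreement matrix."""
    gamma = np.asarray(gamma, dtype=float)
    m = np.full(gamma.shape[1], m0)
    u = np.full(gamma.shape[1], u0)
    for _ in range(iters):
        # E-step: posterior probability that each pair is a true match
        lm = p * np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lu = (1 - p) * np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        w = lm / (lm + lu)
        # M-step: re-estimate the match proportion and per-field m/u rates
        p = w.mean()
        m = (w[:, None] * gamma).sum(axis=0) / w.sum()
        u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
    return p, m, u, w

# Toy usage: 3 fields, 6 candidate pairs (1 = fields agree, 0 = disagree).
gamma = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1]]
p, m, u, w = em_match_probs(gamma)
```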
A robust threshold-based cloud mask for the HRV channel of MSG SEVIRI
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-03-01
A robust threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km²) of the METEOSAT SEVIRI instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km²), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures that cannot be detected by the low-resolution channels. We find that it is advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for threshold-based cloud detection, and the suitability of the HRV channel in particular, are investigated. Case studies demonstrate the behaviour of the mask under various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test dataset, depending on the region considered. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction.
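A minimal sketch of the relative-threshold idea described above: flag a pixel as cloudy when its HRV reflectance exceeds the clear-sky composite by a regionally tuned offset. The function and the offset value are illustrative assumptions, not the operational algorithm.

```python
# Relative-threshold cloud test against a clear-sky composite (sketch).
import numpy as np

def hrv_cloud_mask(reflectance, clear_sky_composite, offset):
    """reflectance, clear_sky_composite: 2-D arrays; offset: scalar or
    2-D array of regionally adapted thresholds (reflectance units)."""
    return reflectance > clear_sky_composite + offset

rng = np.random.default_rng(3)
composite = 0.10 + 0.05 * rng.random((100, 100))   # toy clear-sky reflectance
scene = composite + rng.choice([0.0, 0.3], (100, 100), p=[0.8, 0.2])
mask = hrv_cloud_mask(scene, composite, offset=0.15)
print(f"cloud fraction: {mask.mean():.2f}")
```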
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties of all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
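The grid search can be pictured as follows: for each candidate threshold, fit a continuous bent-line model under an asymmetric squared (expectile) loss and keep the candidate with the smallest loss. The sketch below is a toy illustration of that idea, not the cthreshER implementation; the IRLS inner loop and grid bounds are assumptions.

```python
# Toy grid search for a continuous threshold expectile model:
# y ~ b0 + b1*x + b2*max(x - t, 0), fitted at expectile level tau.
import numpy as np

def expectile_fit(X, y, tau, iters=50):
    w = np.full(len(y), 0.5)
    for _ in range(iters):                       # IRLS for the expectile
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        r = y - X @ beta
        w = np.where(r < 0, 1 - tau, tau)        # asymmetric weights
    return beta, np.sum(w * r**2)

def threshold_grid_search(x, y, tau=0.5):
    grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), 50)
    fits = []
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        fits.append((expectile_fit(X, y, tau)[1], t))
    return min(fits)[1]                          # t with the smallest loss

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 400)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 6.0, 0) + rng.normal(0, 0.5, 400)
print(f"estimated threshold: {threshold_grid_search(x, y):.2f}")  # near 6
```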
Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J
2018-03-28
To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data with ST estimated using the waist-mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds, and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months, capturing this potential power is severely impacted by meteorological conditions in the form of icing. The expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, the determination of wind turbine site locations, and the long-term energy estimates used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset, or using only the MERRA dataset for all variables. For each season, the percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results show that the optimized relative humidity threshold for the northern turbine is higher than for the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary-layer similarity theory predicts the power losses due to icing more closely than the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65% while the observed power loss is 6.22% (average difference of 1.57%). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43% and 6.16%, respectively (average difference of 1.73%). The three-year average, however, does not clearly capture the season-to-season variability. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than the fixed power loss estimates commonly used in the wind energy industry.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data can accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density has an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold are additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density and a larger sampling size and height threshold are required to obtain accurate corn LAI estimates, compared with height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and the estimation of vegetation biophysical parameters using LiDAR data.
Spatial distribution of threshold wind speeds for dust outbreaks in northeast Asia
NASA Astrophysics Data System (ADS)
Kimura, Reiji; Shinoda, Masato
2010-01-01
Asian windblown dust events cause human and animal health effects and agricultural damage in dust source areas such as China and Mongolia and cause "yellow sand" events in Japan and Korea. It is desirable to develop an early warning system to help prevent such damage. We used our observations at a Mongolian station together with data from previous studies to model the spatial distribution of threshold wind speeds for dust events in northeast Asia (35°-45°N and 100°-115°E). Using a map of Normalized Difference Vegetation Index (NDVI), we estimated spatial distributions of vegetation cover, roughness length, threshold friction velocity, and threshold wind speed. We also recognized a relationship between NDVI in the dust season and maximum NDVI in the previous year. Thus, it may be possible to predict the threshold wind speed in the next dust season using the maximum NDVI in the previous year.
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458
Zhou, Ning
2017-03-01
The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The nonlinear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
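To make the pipeline concrete, here is a minimal sketch of NEO-based detection. The paper's automatic threshold comes from an empirical gradient technique that the abstract does not specify, so the common scaled-mean rule thr = C·mean(ψ) stands in for it; the constant C and the toy signal are assumptions.

```python
# NEO spike detection with a scaled-mean threshold (sketch).
import numpy as np

def neo(x):
    """psi[n] = x[n]^2 - x[n-1]*x[n+1]; zero at the edges."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1]**2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, C=8.0):
    psi = neo(x)
    thr = C * psi.mean()          # stand-in for the paper's optimized threshold
    return np.flatnonzero(psi > thr)

rng = np.random.default_rng(7)
signal = rng.normal(0, 1, 5000)
signal[[1000, 2500, 4000]] += 15.0    # three artificial spikes
print(detect_spikes(signal))
```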
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of zero values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of zero values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds, and skewness in the distribution of data along the gradient, produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses, which, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko
2018-05-31
The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with a cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as a default in the absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones even when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge, and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
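The key computation, the posterior probability that the CDR exceeds the emergency threshold, can be illustrated with a deliberately simplified single-level Poisson-Gamma model (the paper's model is hierarchical with cluster-level parameters). The prior values, death count, and exposure below are invented for illustration.

```python
# With a Gamma(a0, b0) prior on the death rate and Poisson deaths, the
# posterior is Gamma(a0 + deaths, b0 + exposure); the exceedance
# probability is its upper tail at the emergency threshold.
from scipy import stats

a0, b0 = 0.1, 10.0                 # assumed weakly informative prior
deaths = 4                         # deaths observed in the survey (toy value)
exposure = 30000.0                 # person-days at risk (toy value)
threshold = 1.0 / 10000.0          # emergency CDR: 1 death / 10,000 / day

posterior = stats.gamma(a=a0 + deaths, scale=1.0 / (b0 + exposure))
print(f"P(CDR > threshold) = {posterior.sf(threshold):.3f}")
```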
A cocktail-party listening experiment with children
NASA Astrophysics Data System (ADS)
Wightman, Frederic; Callahan, Michael; Kistler, Doris
2003-04-01
In an experiment modeled after one reported recently by Brungart and Simpson [J. Acoust. Soc. Am. 112, 2985-2995 (2002)], 38 children (ages 4-16) and 10 adults responded to a monaural target speech signal in the presence of one or two distracter speech signals. The target speaker was a male and the distracter speakers were females. When two distracters were present, they were in different ears. Performance at several different target-ear S/Ns was measured, and psychometric functions were fitted to estimate threshold, the 50% performance level. The youngest children required approximately 20 dB higher S/N than adults to achieve threshold with a single distracter. This difference disappeared by age 16. The impact of adding the contralateral distracter, which is thought to contribute only informational masking, was roughly constant across age, however. Adult thresholds increased about 11 dB and the thresholds for the youngest children increased about 10 dB. This was surprising given previous experiments that showed much larger informational masking effects in young children. Also inconsistent with previous results is the lack of individual differences: nearly all listeners showed almost the same contralateral distracter effect. [Work supported by NICHD.]
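Threshold here is the 50% point of a fitted psychometric function. A toy sketch of that fit follows (invented data; the logistic form is an assumption, as the abstract does not name the function used).

```python
# Fit a logistic psychometric function and read off the 50% point.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

snr = np.array([-20.0, -15.0, -10.0, -5.0, 0.0, 5.0])
pc = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])   # proportion correct

(midpoint, slope), _ = curve_fit(logistic, snr, pc, p0=(-8.0, 0.5))
print(f"threshold (50% point): {midpoint:.1f} dB S/N")
```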
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold increases as the standard deviation of the modulated spectrum decreases, in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; accordingly, the highest SBS threshold is achieved at this setting. This standard deviation can serve as a good quantitative index for evaluating the power-scaling potential of a fiber amplifier system and as a design guideline for better suppressing SBS.
At what costs will screening with CT colonography be competitive? A cost-effectiveness approach.
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Zauber, Ann G; Boer, Rob; Wilschut, Janneke; Habbema, J Dik F
2009-03-01
The costs of computed tomographic colonography (CTC) are not yet established for screening use. In our study, we estimated the threshold costs at which CTC screening would be a cost-effective alternative to colonoscopy for colorectal cancer (CRC) screening in the general population. We used the MISCAN-colon microsimulation model to estimate the costs and life-years gained of screening persons aged 50-80 years for 4 screening strategies: (i) optical colonoscopy; and CTC with referral to optical colonoscopy of (ii) any suspected polyp; (iii) a suspected polyp ≥6 mm; and (iv) a suspected polyp ≥10 mm. For each of the 4 strategies, screen intervals of 5, 10, 15 and 20 years were considered. Subsequently, for each CTC strategy and interval, the threshold costs of CTC were calculated. We performed a sensitivity analysis to assess the effect of uncertain model parameters on the threshold costs. With equal costs ($662), optical colonoscopy dominated CTC screening. For CTC to gain similar life-years as colonoscopy screening every 10 years, it would have to be offered every 5 years with referral of polyps ≥6 mm. For this strategy to be as cost-effective as colonoscopy screening, its costs must not exceed $285, or 43% of colonoscopy costs (range in sensitivity analysis: 39-47%). With 25% higher adherence than colonoscopy, CTC threshold costs could be 71% of colonoscopy costs. Our estimate of 43% is considerably lower than previous estimates in the literature, because previous studies compared CTC screening only to 10-yearly colonoscopy, whereas we compared it to different intervals of colonoscopy screening.
Validity of Lactate Thresholds in Inline Speed Skating.
Hecksteden, Anne; Heinze, Tobias; Faude, Oliver; Kindermann, Wilfried; Meyer, Tim
2015-09-01
Lactate thresholds are commonly used as estimates of the highest workload at which lactate production and elimination are in equilibrium (maximum lactate steady state [MLSS]). However, because of the high static load on propulsive muscles, lactate kinetics in inline speed skating may differ significantly from other endurance exercise modes. Therefore, the discipline-specific validity of lactate thresholds has to be verified. Sixteen competitive inline speed skaters (age: 30 ± 10 years; training per week: 10 ± 4 hours) completed an exhaustive stepwise incremental exercise test (start 24 km·h⁻¹, step duration 3 minutes, increment 2 km·h⁻¹) to determine the individual anaerobic threshold (IAT) and the workload corresponding to a blood lactate concentration of 4 mmol·L⁻¹ (LT4), and 2-5 continuous load tests of (up to) 30 minutes to determine MLSS. The IAT and LT4 correlated significantly with MLSS, and the mean differences were almost negligible (MLSS 29.5 ± 2.5 km·h⁻¹; IAT 29.2 ± 2.0 km·h⁻¹; LT4 29.6 ± 2.3 km·h⁻¹; p > 0.1 for all differences). However, the variability of differences was considerable, resulting in 95% limits of agreement in the upper range of values known from other endurance disciplines (2.6 km·h⁻¹ [8.8%] for IAT and 3.1 km·h⁻¹ [10.3%] for LT4). Consequently, IAT and LT4 may be considered valid estimates of the MLSS in inline speed skating, but verification by means of a constant load test should be considered in cases of doubt or when optimal accuracy is needed (e.g., in elite athletes or scientific studies).
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting a single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than that for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in the minimum area required to support more than one species imply that, for islands in the Gulf of California, relative extinction risks are higher for coexistence of multiple species of rattlesnakes, and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of gray-scale images to each of the three color channels of the color image, neglecting the correlation among the three channels. In this paper, a metric for assessing color image quality is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluations.
Clayton, Hilary M.
2015-01-01
The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on a straight line and a circle at walk and trot, which rely exclusively on a single, dorsal hoof marker. The advantage of such marker placement is its robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected based on displacement, velocity or acceleration signals of the dorsal hoof marker, depending on the algorithm, using (i) defined thresholds associated with the derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on the differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms. While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that the use of velocity thresholds for foot-on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting the thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (on average up to 16 ms per condition), greater than the effect on foot-on estimates or on foot-off estimates in the forelimbs (on average up to ±7 ms per condition). PMID:26157641
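A minimal sketch of the threshold-based variant described above: foot-on is taken as the instant the hoof-marker speed falls below a fraction of its maximum, foot-off as the instant it rises back above it. The 10% fraction and the toy trajectory are assumptions, not the study's tuned values.

```python
# Velocity-threshold stance event detection from a single hoof marker (sketch).
import numpy as np

def stance_events(speed, fs, frac=0.10):
    """speed: hoof-marker speed (m/s) sampled at fs Hz."""
    thr = frac * speed.max()
    slow = speed < thr
    onsets = np.flatnonzero(~slow[:-1] & slow[1:]) + 1   # foot-on candidates
    offsets = np.flatnonzero(slow[:-1] & ~slow[1:]) + 1  # foot-off candidates
    return onsets / fs, offsets / fs

fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
speed = np.clip(3.0 * np.sin(2 * np.pi * 1.5 * t), 0.0, None)  # toy gait cycles
foot_on, foot_off = stance_events(speed, fs)
print(foot_on, foot_off)
```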
Fast simulation of packet loss rates in a shared buffer communications switch
NASA Technical Reports Server (NTRS)
Chang, Cheng-Shang; Heidelberger, Philip; Shahabuddin, Perwez
1993-01-01
This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as either being of high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.
NASA Astrophysics Data System (ADS)
Kleinherenbrink, Marcel; Riva, Riccardo; Frederikse, Thomas
2018-03-01
Tide gauge (TG) records are affected by vertical land motion (VLM), causing them to observe relative instead of geocentric sea level. VLM can be estimated from global navigation satellite system (GNSS) time series, but only a few TGs are equipped with a GNSS receiver. Hence, (multiple) neighboring GNSS stations can be used to estimate VLM at the TG. This study compares eight approaches to estimating VLM trends at 570 TG stations from GNSS by taking into account all GNSS trends with an uncertainty smaller than 1 mm yr⁻¹ within 50 km. The range between the methods is comparable with the formal uncertainties of the GNSS trends. Taking the median of the surrounding GNSS trends shows the best agreement with differenced altimetry-tide gauge (ALT-TG) trends. An attempt is also made to improve VLM trends from ALT-TG time series. Using only highly correlated along-track altimetry and TG time series reduces the SD of ALT-TG time series by up to 10%. As a result, there are spatially coherent changes in the trends, but the reduction in the root mean square (RMS) of differences between ALT-TG and GNSS trends is insignificant. However, setting correlation thresholds also acts as a filter to remove problematic TG time series. This results in sets of ALT-TG VLM trends at 344-663 TG locations, depending on the correlation threshold. Compared to other studies, we decrease the RMS of differences between GNSS and ALT-TG trends (from 1.47 to 1.22 mm yr⁻¹) while increasing the number of locations (from 109 to 155). Depending on the method, the mean difference between ALT-TG and GNSS trends varies between 0.1 and 0.2 mm yr⁻¹. We reduce the mean of the differences by taking into account the effect of elastic deformation due to present-day mass redistribution. At varying ALT-TG correlation thresholds, we provide new sets of trends for 759 to 939 different TG stations. If both GNSS and ALT-TG trend estimates are available, we recommend using the GNSS trend estimates, because residual ocean signals might correlate over long distances. However, if large discrepancies (> 3 mm yr⁻¹) between the two methods are present, local VLM differences between the TG and the GNSS station are likely the culprit, and it is therefore better to take the ALT-TG trend estimate. GNSS estimates for which only a single GNSS station and no ALT-TG estimate are available might still require some inspection before they are used in sea level studies.
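The best-performing rule reported here is easy to sketch: take the median of all GNSS VLM trends within 50 km of the tide gauge whose formal uncertainty is below 1 mm yr⁻¹. The code below is a hedged illustration with invented station data, using a haversine great-circle distance.

```python
# Median of nearby, well-constrained GNSS trends as the TG's VLM (sketch).
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def vlm_at_tg(tg_lat, tg_lon, gnss):
    """gnss: array of (lat, lon, trend_mm_yr, sigma_mm_yr) rows."""
    d = haversine_km(tg_lat, tg_lon, gnss[:, 0], gnss[:, 1])
    ok = (d <= 50.0) & (gnss[:, 3] < 1.0)    # within 50 km, sigma < 1 mm/yr
    return np.median(gnss[ok, 2]) if ok.any() else np.nan

gnss = np.array([[52.0, 4.3, -1.2, 0.4],     # toy station list
                 [52.1, 4.5, -0.8, 0.7],
                 [52.4, 4.9, -2.5, 0.9]])
print(vlm_at_tg(52.05, 4.40, gnss))
```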
The impact of cochlear fine structure on hearing thresholds and DPOAE levels
NASA Astrophysics Data System (ADS)
Lee, Jungmee; Long, Glenis; Talmadge, Carrick L.
2004-05-01
Although otoacoustic emissions (OAEs) are used as clinical and research tools, the correlation between OAEs and behavioral estimates of hearing status is not large. In normal-hearing individuals, the level of OAEs can vary by as much as 30 dB when the frequency is changed by less than 5%. These pseudoperiodic variations of OAE level with frequency are known as fine structure. Hearing thresholds measured with high frequency resolution reveal a similar (up to 15 dB) fine structure. We examine the impact of OAE and threshold fine structures on the prediction of auditory thresholds from OAE levels. Distortion product otoacoustic emissions (DPOAEs) were measured with sweeping primary tones. Psychoacoustic detection thresholds were measured using pure tones, sweep tones, FM tones, and narrow-band noise. Sweep DPOAE and narrow-band threshold estimates are less influenced by cochlear fine structure and should lead to a higher correlation between OAE levels and psychoacoustic thresholds. [Research supported by PSC CUNY, NIDCD, the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education, and The Ministry of Education in Korea.]
Assessment of Maximum Aerobic Capacity and Anaerobic Threshold of Elite Ballet Dancers.
Wyon, Matthew A; Allen, Nick; Cloak, Ross; Beck, Sarah; Davies, Paul; Clarke, Frances
2016-09-01
An athlete's cardiorespiratory profile, maximal aerobic capacity, and anaerobic threshold are affected by training regimen and competition demands. The present study aimed to ascertain whether there are company rank differences in maximal aerobic capacity and anaerobic threshold in elite classical ballet dancers. Seventy-four volunteers (M 34, F 40) were recruited from two full-time professional classical ballet companies. All participants completed a continuous incremental treadmill protocol with a 1-km/hr speed increase at the end of each 1-min stage until termination criteria had been achieved (e.g., voluntary cessation, respiratory exchange ratio <1.15, HR ±5 bpm of estimated HRmax). Peak VO2 (5-breath smoothed) was recorded and the anaerobic threshold calculated using the ventilatory curve and ventilatory equivalents methods. Statistical analysis reported between-subject effects for gender (F(1,67)=35.18, p<0.001) and rank (F(1,67)=8.67, p<0.001); post hoc tests reported soloists (39.5±5.15 mL/kg/min) as having significantly lower VO2 peak than artists (45.9±5.75 mL/kg/min, p<0.001) and principal dancers (48.07±3.24 mL/kg/min, p<0.001). Significant differences in anaerobic threshold were reported for age (F(1,67)=7.68, p=0.008) and rank (F(1,67)=3.56, p=0.034); post hoc tests reported artists (75.8±5.45%) as having significantly lower anaerobic threshold than soloists (80.9±5.71%, p<0.01) and principals (84.1±4.84%, p<0.001). The observed differences in VO2 peak and anaerobic threshold between the ranks in ballet companies are probably due to the different rehearsal and performance demands.
Dental age estimation: the role of probability estimates at the 10 year threshold.
Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham
2014-08-01
The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age were compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over-10-year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
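The probability calculation described can be reproduced in a few lines: for each tooth, P(age > 10) = 1 − Φ((10 − mean age at stage) / SD), i.e. Excel's 1 − NORMDIST(10, mean, sd, TRUE), and the per-tooth probabilities are averaged. The stage means and SDs below are invented for illustration only.

```python
# Per-tooth normal-CDF probabilities averaged into one subject-level estimate.
from statistics import NormalDist

teeth = [(9.4, 0.8), (10.3, 1.1), (9.9, 0.9), (10.6, 1.2)]  # (mean, sd) per tooth stage

p_over_10 = [1 - NormalDist(mu, sd).cdf(10.0) for mu, sd in teeth]
p_subject = sum(p_over_10) / len(p_over_10)
print(f"P(subject older than 10 y) = {p_subject:.2f}")
```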
Holzgreve, Adrien; Brendel, Matthias; Gu, Song; Carlsen, Janette; Mille, Erik; Böning, Guido; Mastrella, Giorgia; Unterrainer, Marcus; Gildehaus, Franz J; Rominger, Axel; Bartenstein, Peter; Kälin, Roland E; Glass, Rainer; Albert, Nathalie L
2016-01-01
Noninvasive tumor growth monitoring is of particular interest for the evaluation of experimental glioma therapies. This study investigates the potential of positron emission tomography (PET) using O-(2-(18)F-fluoroethyl)-L-tyrosine ([(18)F]-FET) to determine tumor growth in a murine glioblastoma (GBM) model, including estimation of the biological tumor volume (BTV), which has hitherto not been investigated in the pre-clinical context. Fifteen GBM-bearing mice (GL261) and six control mice (shams) were investigated over 5 weeks by PET, followed by autoradiographic and histological assessments. [(18)F]-FET PET was quantitated by calculation of maximum and mean standardized uptake values within a universal volume-of-interest (VOI) corrected for healthy background (SUVmax/BG, SUVmean/BG). A partial volume effect correction (PVEC) was applied in comparison to ex vivo autoradiography. BTVs obtained by predefined thresholds for VOI definition (SUV/BG: ≥1.4; ≥1.6; ≥1.8; ≥2.0) were compared to the histologically assessed tumor volume (n = 8). Finally, individual "optimal" thresholds for BTV definition best reflecting the histology were determined. In GBM mice, SUVmax/BG and SUVmean/BG clearly increased with time, albeit with high inter-animal variability. No relevant [(18)F]-FET uptake was observed in shams. PVEC recovered the signal loss of the SUVmean/BG assessment relative to autoradiography. BTV as estimated by predefined thresholds differed strongly from the histology volume. Strikingly, the individual "optimal" thresholds for BTV assessment correlated highly with SUVmax/BG (ρ = 0.97, p < 0.001), allowing SUVmax/BG-based calculation of individual thresholds. The method was verified by a subsequent validation study (n = 15, ρ = 0.88, p < 0.01), leading to considerably higher agreement of BTV estimations with histology than the predefined thresholds. [(18)F]-FET PET with standard SUV measurements is feasible for glioma imaging in the GBM mouse model. PVEC is beneficial for improving the accuracy of [(18)F]-FET PET SUV quantification. Although SUVmax/BG and SUVmean/BG increase during the disease course, these parameters do not correlate with the respective tumor size. For the first time, we propose a histology-verified method allowing appropriate individual BTV estimation for volumetric in vivo monitoring of tumor growth with [(18)F]-FET PET, and we show that standardized thresholds from routine clinical practice seem to be inappropriate for BTV estimation in the GBM mouse model.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic-signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
Tactile Acuity Charts: A Reliable Measure of Spatial Acuity
Bruns, Patrick; Camargo, Carlos J.; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R.; Röder, Brigitte
2014-01-01
For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds. PMID:24504346
Estimating phonation threshold pressure.
Fisher, K V; Swank, P R
1997-10-01
Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.
Lactate threshold by muscle electrical impedance in professional rowers
NASA Astrophysics Data System (ADS)
Jotta, B.; Coutinho, A. B. B.; Pino, A. V.; Souza, M. N.
2017-04-01
Lactate threshold (LT) is one of the physiological parameters usually used in rowing sport training prescription because it indicates the transition from aerobic to anaerobic metabolism. Assessment of LT is classically based on a series of blood lactate concentrations obtained during progressive exercise tests and thus has an invasive aspect. The feasibility of noninvasive LT estimation through bioelectrical impedance spectroscopy (BIS) data collected in thigh muscles during rowing ergometer exercise tests was investigated. Nineteen professional rowers, age 19 (mean) ± 4.8 (standard deviation) yr, height 187.3 ± 6.6 cm, body mass 83 ± 7.7 kg, and training experience of 7 ± 4 yr, were evaluated in a rowing ergometer progressive test with paired measures of blood lactate concentration and BIS in thigh muscles. Bioelectrical impedance data were obtained by using a bipolar method of spectroscopy based on the current response to a voltage step. An electrical model was used to interpret the BIS data and to derive parameters that were investigated to estimate LT noninvasively. From the serial blood lactate measurements, LT was also determined through the Dmax method (LTDmax). The zero crossing of the second derivative of the kinetics of the electrode capacitance (Ce), one of the BIS parameters, was used to estimate LT. The agreement between the LT estimates through BIS (LTBIS) and through the Dmax method (LTDmax) was evaluated using Bland-Altman plots, leading to a mean difference between the estimates of just 0.07 W and a Pearson correlation coefficient r = 0.85. This result supports the utilization of the proposed method based on BIS parameters for noninvasively estimating the lactate threshold in rowing.
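A sketch of the zero-crossing rule in Python, assuming the Ce parameter has already been extracted from the BIS model at each power step (variable names are hypothetical, and Ce should be smoothed before numerical differentiation):

import numpy as np

def lt_from_ce(power_w, ce):
    # LT estimate: the power at which the second derivative of the
    # Ce-vs-power kinetics crosses zero, located by linear interpolation
    # between the two samples bracketing the sign change.
    d2 = np.gradient(np.gradient(ce, power_w), power_w)
    idx = np.where(np.diff(np.sign(d2)) != 0)[0]
    if idx.size == 0:
        return None                      # no crossing found
    i = idx[0]
    x0, x1, y0, y1 = power_w[i], power_w[i + 1], d2[i], d2[i + 1]
    return x0 - y0 * (x1 - x0) / (y1 - y0)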
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for the proportions of both agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
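The full model is hierarchical and Bayesian, but the core change-point idea can be sketched with a non-hierarchical profile-likelihood search in Python; the hinge parameterization and names below are illustrative assumptions, not the paper's specification.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def profile_change_point(x, y, grid):
    # x: landscape proportion in [0, 1]; y: detection (1) / nondetection (0),
    # with both classes present. Occurrence is modeled as flat below a change
    # point c and logistic in the hinge term max(x - c, 0) above it; c is
    # chosen by profiling the (negative) log likelihood over a grid.
    best_c, best_nll = None, np.inf
    for c in grid:
        X = np.maximum(x - c, 0.0).reshape(-1, 1)
        m = LogisticRegression(C=1e6).fit(X, y)   # near-unpenalized fit
        nll = log_loss(y, m.predict_proba(X)[:, 1], normalize=False)
        if nll < best_nll:
            best_c, best_nll = c, nll
    return best_c

# e.g. profile_change_point(prop_urban, detected, np.linspace(0.02, 0.5, 49))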
Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph
2013-01-01
There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g m(exp -3) detected at the surface, or an approximately 0.5-1.0 mm h(exp -1) melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.
NASA Astrophysics Data System (ADS)
Parravicini, Paola; Cislaghi, Matteo; Condemi, Leonardo
2017-04-01
ARPA Lombardia is the Environmental Protection Agency of Lombardy, a wide region in the north of Italy. ARPA is in charge of river monitoring for both civil protection and water balance purposes. It cooperates with the Civil Protection Agency of Lombardy (RL-PC) in flood forecasting and early warning. The early warning system is based on rainfall and discharge thresholds: when a threshold is forecast to be exceeded, RL-PC issues an alert graded from yellow to red. Conventional threshold evaluation is based on events with a fixed return period. However, the impacts of events with the same return period may differ along the river course, owing to the specific characteristics of the affected areas. A new approach is introduced. It defines different scenarios, corresponding to different flood impacts. A discharge threshold is then associated with each scenario, and the return period of the scenario is computed backwards. Flood scenarios are defined in accordance with National Civil Protection guidelines, which describe the expected flood impact and associate a colour with each scenario, from green (no relevant effects) to red (major floods). A range of discharges is associated with each scenario, since they cause the same flood impact; the threshold is set as the discharge corresponding to the transition between two scenarios. A wide range of event-based information is used to estimate the thresholds. As a first guess, the thresholds are estimated from hydraulic model outputs and from the people or infrastructure flooded according to the simulations. The model estimates are then validated against real-event knowledge: local Civil Protection emergency plans usually contain very detailed descriptions of local impacts at known river levels or discharges, RL-PC collects flooding information reported by the population, newspapers often report flood events on the web, and data from the river monitoring network provide evidence of the levels and discharges that actually occurred. The methodology assigns a return period to each scenario. The return period may vary along the river course according to the discharges associated with the scenario. The values of the return period can reveal the areas characterized by higher risk and can be an important basis for civil protection emergency planning and river monitoring. For example, considering the Lambro River, the red scenario (major flood) has a return period of 50 years in the northern rural part of the catchment. Where the river crosses the city of Milan, the return period drops to 4 years. Downstream it rises to more than 100 years where the river flows through the agricultural areas in the southern part of the catchment. In addition, the knowledge gained from event-based analysis allows evaluation of the monitoring network's compliance with early warning requirements and represents the starting point for further development of the network itself.
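The back-computation of a scenario's return period can be illustrated with a Gumbel fit to annual maximum discharges; this is a generic hydrological sketch in Python, not the agency's operational procedure.

import numpy as np
from scipy.stats import gumbel_r

def scenario_return_period(annual_max_q, q_threshold):
    # Fit a Gumbel (EV1) distribution to annual maximum discharges and
    # return the expected recurrence interval (years) of flows at or
    # above the scenario-transition threshold q_threshold.
    loc, scale = gumbel_r.fit(annual_max_q)
    p_exceed = 1.0 - gumbel_r.cdf(q_threshold, loc, scale)
    return 1.0 / p_exceed

# A threshold exceeded with annual probability 0.25 has a 4-year return
# period, as for the Lambro River red scenario in Milan described above.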
The development of rating of perceived exertion-based tests of physical working capacity.
Mielke, Michelle; Housh, Terry J; Malek, Moh H; Beck, Travis W; Schmidt, Richard J; Johnson, Glen O
2008-01-01
The purpose of the present study was to use ratings of perceived exertion (RPE) from the Borg (6-20) and OMNI-Leg (0-10) scales to determine the Physical Working Capacity at the Borg and OMNI thresholds (PWC(BORG) and PWC(OMNI)). PWC(BORG) and PWC(OMNI) were compared with other fatigue thresholds determined from the measurement of heart rate (the Physical Working Capacity at the Heart Rate Threshold, PWC(HRT)) and oxygen consumption (the Physical Working Capacity at the Oxygen Consumption Threshold, PWC(VO2)), as well as the ventilatory threshold (VT). Fifteen male and female volunteers (mean age +/- SD = 22 +/- 1 years) performed an incremental test to exhaustion on an electronically braked ergometer for the determination of VO2 peak and VT. The subjects also performed 4 randomly ordered workbouts to exhaustion at different power outputs (ranging from 60 to 206 W) for the determination of PWC(BORG), PWC(OMNI), PWC(HRT), and PWC(VO2). The results indicated that there were no significant mean differences among the fatigue thresholds: PWC(BORG) (mean +/- SD = 133 +/- 37 W; 67 +/- 8% of VO2 peak), PWC(OMNI) (137 +/- 44 W; 68 +/- 9% of VO2 peak), PWC(HRT) (135 +/- 36 W; 68 +/- 8% of VO2 peak), PWC(VO2) (145 +/- 41 W; 72 +/- 7% of VO2 peak) and VT (131 +/- 45 W; 66 +/- 8% of VO2 peak). The results of this study indicated that the mathematical model used to estimate PWC(HRT) and PWC(VO2) can be applied to ratings of perceived exertion to determine PWC(BORG) and PWC(OMNI) during cycle ergometry. Salient features of the PWC(BORG) and PWC(OMNI) tests are that they are simple to administer and require the use of only an RPE scale, a stopwatch, and a cycle ergometer. Furthermore, the power outputs at the PWC(BORG) and PWC(OMNI) may be useful to estimate the VT noninvasively and without the need for expired gas analysis.
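One common reading of this family of PWC fatigue-threshold models: fit, for each workbout, the slope of the response (here RPE) over time, then regress those slopes against power output; the threshold is the x-intercept where the predicted slope is zero. A Python sketch under that reading (the data layout is hypothetical):

import numpy as np

def pwc_threshold(bouts):
    # bouts: iterable of (power_watts, minutes_array, rpe_array) per workbout.
    powers, slopes = [], []
    for p, t, rpe in bouts:
        slopes.append(np.polyfit(t, rpe, 1)[0])  # RPE drift within the bout
        powers.append(p)
    m, b = np.polyfit(powers, slopes, 1)         # drift as a function of power
    return -b / m                                # power with zero predicted drift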
Fusaro, Mario V; Nielsen, Nathan D; Nielsen, Alexandra; Fontaine, Magali J; Hess, John R; Reed, Robert M; DeLisle, Sylvain; Netzer, Giora
2017-02-01
Red blood cell transfusion related to select surgical procedures accounts for approximately 2.8 million transfusions in the United States yearly and occurs commonly after hip fracture surgeries. Randomized controlled trials have demonstrated lack of clinical benefit with higher versus lower transfusion thresholds in postoperative hip fracture repair patients with cardiac disease or risk factors for cardiac disease. The economic implications of a higher versus lower hemoglobin (Hb) threshold have not yet been investigated. A decision tree analysis was constructed to estimate differences in healthcare costs and charges between a Hb transfusion threshold strategy of 8 g/dL versus 10 g/dL from the perspective of both Centers for Medicare and Medicaid Services (CMS) as well as hospitals. Secondary outcome measures included differences in transfusion-related adverse events. Among the 133,697 Medicare beneficiaries undergoing hip fracture repair in 2012, we estimated that 45,457 patients would be anemic and at risk for transfusion. CMS would save an estimated $11.3 million to $24.3 million in payments, while hospitals would reduce charges by an estimated $52.7 million to $93.6 million if the restrictive transfusion strategy were to be implemented nationally. Additionally, rates of transfusion-associated circulatory overload, transfusion-related acute lung injury, acute transfusion reactions, length of stay, and mortality would be reduced. This model suggests that the uniform adoption of a restrictive transfusion strategy among patients with cardiac disease and risk factors for cardiac disease undergoing hip fracture repair would result in significant reductions in clinically important outcomes with significant cost savings. © 2016 AABB.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E
2011-06-01
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, and has therefore been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using the threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process is repeated until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing the experimental thresholds with those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (+/- 2% and +/- 5%, for volumes >= 5 ml and < 5 ml, respectively). For the RC data as well, the comparison showed differences (up to 8%) within the assigned error (+/- 6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy was within +/- 10% for volumes in the range 20-110 ml. Preliminary application to patients demonstrated the suitability of the method in a clinical setting. MC-guided delineation of tumour volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumours of irregular shape and/or nonuniform distribution of the background activity are still in progress.
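The iterative loop (i)-(iii) can be sketched in a few lines of Python (the published implementation is in MATLAB); rc (recovery coefficient vs. volume) and thr (calibrated threshold vs. SBR and volume) below are hypothetical stand-ins for the measured calibration curves, and the background is crudely taken as everything outside the mask, where a real implementation would use a background ROI.

import numpy as np

def rithm(image, voxel_ml, rc, thr, tol=0.01, max_iter=50):
    # image: 3D activity array; rc(v) and thr(sbr, v) encode the
    # threshold-volume and recovery-coefficient calibrations.
    t_frac, v = 0.5, None                 # initial rough threshold
    for _ in range(max_iter):
        mask = image >= t_frac * image.max()
        v_new = mask.sum() * voxel_ml                                # (iii)
        sbr = (image[mask].mean() / rc(v_new)) / image[~mask].mean() # (i)
        t_frac = thr(sbr, v_new)                                     # (ii)
        if v is not None and abs(v_new - v) <= tol * v:
            break                          # converged
        v = v_new
    return mask, v_new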
Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.
Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew
2018-01-01
Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Diffusion amid random overlapping obstacles: Similarities, invariants, approximations
Novak, Igor L.; Gao, Fei; Kraikivski, Pavel; Slepchenko, Boris M.
2011-01-01
Efficient and accurate numerical techniques are used to examine similarities of effective diffusion in a void between random overlapping obstacles: essential invariance of effective diffusion coefficients (Deff) with respect to obstacle shapes and applicability of a two-parameter power law over nearly entire range of excluded volume fractions (ϕ), except for a small vicinity of a percolation threshold. It is shown that while neither of the properties is exact, deviations from them are remarkably small. This allows for quick estimation of void percolation thresholds and approximate reconstruction of Deff (ϕ) for obstacles of any given shape. In 3D, the similarities of effective diffusion yield a simple multiplication “rule” that provides a fast means of estimating Deff for a mixture of overlapping obstacles of different shapes with comparable sizes. PMID:21513372
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selected higher frequencies after salicylate injection, and the extent of the shifts was related to the salicylate dose level.
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the question of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. To capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the monetary authority's targeted unemployment rate, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their view of the health of the general economy.
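The regime switch can be made concrete with a stylized threshold Taylor rule in Python; the coefficient values are illustrative only, not the paper's estimates, and the fixed u_star ignores the latent AR dynamics of the threshold described above.

def taylor_rate(inflation, output_gap, unemployment, u_star,
                infl_target=2.0, r_star=2.0):
    # Coefficients on the inflation gap and output gap switch by regime:
    # weaker inflation response and stronger output response in recessions,
    # as the abstract reports (illustrative magnitudes).
    if unemployment > u_star:          # recession regime
        a_pi, a_y = 0.3, 1.0
    else:                              # expansion regime
        a_pi, a_y = 0.5, 0.5
    return r_star + inflation + a_pi * (inflation - infl_target) + a_y * output_gap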
Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes.
Sales, Marcelo M; Campbell, Carmen Sílvia G; Morais, Pâmella K; Ernesto, Carlos; Soares-Caldeira, Lúcio F; Russo, Paulo; Motta, Daisy F; Moreira, Sérgio R; Nakamura, Fábio Y; Simões, Herbert G
2011-01-12
While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation, and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV for estimating the lactate threshold (LT) and ventilatory threshold (VT) in individuals with type 2 diabetes (T2D) has not yet been investigated. The aim was to analyze the possibility of identifying the heart rate variability threshold (HRVT) from the responses of parasympathetic indicators during an incremental exercise test in subjects with type 2 diabetes (T2D) and nondiabetic individuals (ND). Nine T2D subjects (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg/m2) and ten ND subjects (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg/m2) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate, and expired gas concentrations were measured at the end of each stage. The HRVT was identified from the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1) over the last 60 s of each incremental stage, referred to as HRVT-RMSSD and HRVT-SD1, respectively. No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD, and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. Identification of the HRVT by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid for estimating the LT and VT in both T2D and ND.
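Both autonomic indicators are simple functions of the R-R interval series; a Python sketch follows (note the analytic identity SD1 = RMSSD / sqrt(2)).

import numpy as np

def rmssd(rr_ms):
    # Root mean square of successive differences of R-R intervals (ms).
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(d ** 2))

def sd1(rr_ms):
    # Poincare-plot short-term variability; equals RMSSD / sqrt(2).
    return rmssd(rr_ms) / np.sqrt(2.0)

The HRVT is then commonly located as the lowest exercise intensity at which these indices stop decreasing across stages; the exact stabilization criterion varies between studies.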
Validated Automatic Brain Extraction of Head CT Images
Muschelli, John; Ullman, Natalie L.; Mould, W. Andrew; Vespa, Paul; Hanley, Daniel F.; Crainiceanu, Ciprian M.
2015-01-01
Background X-ray Computed Tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction – the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. Aim To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and data smoothing are studied and reported. Methods All images were thresholded using a 0-100 Hounsfield unit (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ = 1 mm3) and re-thresholded to 0-100 HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35, and any holes in the brain mask were filled. For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers from the MISTIE (Minimally Invasive Surgery plus recombinant-tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test. The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICV_automated / ICV_manual. To estimate the performance in a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. ICV of each scan was estimated, and an intraclass correlation (ICC) was estimated using a one-way ANOVA. Results Smoothing images improves brain extraction results using BET for all measures except specificity (all p < 0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884, median difference: 0.0014, p < 0.001), accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p < 0.001), and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p < 0.001) and lower specificity (0.9981 vs. 0.9982; median difference: −0.0001, p < 0.001). These measures are all very high indicating that a range of FI values may produce visually indistinguishable brain extractions. Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates the ICV estimates are similar for automated and manual segmentation.
In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%) and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. Conclusion BET performs well at brain extraction on thresholded, 1mm3 smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided. PMID:25862260
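A sketch of the preprocessing around BET in Python; BET itself is the external FSL tool and is not reproduced, and reading "thresholded to 0-100 HU" as clamping out-of-range voxels to zero is our assumption.

import numpy as np
from scipy.ndimage import gaussian_filter

def threshold_hu(vol):
    # Keep voxels in the 0-100 HU range, zero out everything else.
    return np.where((vol >= 0) & (vol <= 100), vol, 0.0)

def preprocess_ct(vol_hu, voxel_mm):
    # Threshold, smooth with a 3D Gaussian of sigma = 1 mm per axis
    # (converted to voxel units), then re-threshold, as described above.
    img = threshold_hu(np.asarray(vol_hu, dtype=float))
    img = gaussian_filter(img, sigma=1.0 / np.asarray(voxel_mm))
    return threshold_hu(img)

def icv_ratio(mask_auto, mask_manual):
    # Voxel dimensions cancel when both masks come from the same scan.
    return mask_auto.sum() / mask_manual.sum()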
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
Do Shale Pore Throats Have a Threshold Diameter for Oil Storage?
Zou, Caineng; Jin, Xu; Zhu, Rukai; Gong, Guangming; Sun, Liang; Dai, Jinxing; Meng, Depeng; Wang, Xiaoqi; Li, Jianming; Wu, Songtao; Liu, Xiaodan; Wu, Juntao; Jiang, Lei
2015-01-01
In this work, a nanoporous template with a controllable channel diameter was used to simulate the oil storage ability of shale pore throats. On the basis of the wetting behaviours at the nanoscale solid-liquid interfaces, the seepage of oil in nano-channels of different diameters was examined to accurately and systematically determine the effect of the pore diameter on the oil storage capacity. The results indicated that the lower threshold for oil storage was a pore throat of 20 nm, under certain conditions. This proposed pore size threshold provides novel, evidence-based criteria for estimating the geological reserves, recoverable reserves and economically recoverable reserves of shale oil. This new understanding of shale oil processes could revolutionize the related industries. PMID:26314637
Leimbach, Friederike; Georgiev, Dejan; Litvak, Vladimir; Antoniades, Chrystalina; Limousin, Patricia; Jahanshahi, Marjan; Bogacz, Rafal
2018-06-01
During a decision process, the evidence supporting alternative options is integrated over time, and the choice is made when the accumulated evidence for one of the options reaches a decision threshold. Humans and animals have an ability to control the decision threshold, that is, the amount of evidence that needs to be gathered to commit to a choice, and it has been proposed that the subthalamic nucleus (STN) is important for this control. Recent behavioral and neurophysiological data suggest that, in some circumstances, the decision threshold decreases with time during choice trials, allowing overcoming of indecision during difficult choices. Here we asked whether this within-trial decrease of the decision threshold is mediated by the STN and if it is affected by disrupting information processing in the STN through deep brain stimulation (DBS). We assessed 13 patients with Parkinson disease receiving bilateral STN DBS six or more months after the surgery, 11 age-matched controls, and 12 young healthy controls. All participants completed a series of decision trials, in which the evidence was presented in discrete time points, which allowed more direct estimation of the decision threshold. The participants differed widely in the slope of their decision threshold, ranging from constant threshold within a trial to steeply decreasing. However, the slope of the decision threshold did not depend on whether STN DBS was switched on or off and did not differ between the patients and controls. Furthermore, there was no difference in accuracy and RT between the patients in the on and off stimulation conditions and healthy controls. Previous studies that have reported modulation of the decision threshold by STN DBS or unilateral subthalamotomy in Parkinson disease have involved either fast decision-making under conflict or time pressure or in anticipation of high reward. Our findings suggest that, in the absence of reward, decision conflict, or time pressure for decision-making, the STN does not play a critical role in modulating the within-trial decrease of decision thresholds during the choice process.
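The within-trial threshold decrease at issue can be made concrete with a simple accumulator simulation in Python; the linear collapse and parameter values are illustrative assumptions, not the study's fitted model.

import numpy as np

def simulate_trial(drift, b0, collapse, noise=1.0, dt=0.01, t_max=5.0, seed=None):
    # Evidence x accumulates with the given drift; a choice is made when
    # |x| reaches the collapsing bound b(t) = max(b0 - collapse * t, 0).
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(b0 - collapse * t, 0.0)
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t      # choice, response time
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return 0, t_max                              # no decision within t_max

With collapse > 0, difficult (low-drift) trials still terminate, which is the behavioral signature of overcoming indecision that the passage above describes.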
Non-invasive indices for the estimation of the anaerobic threshold of oarsmen.
Erdogan, A; Cetin, C; Karatosun, H; Baydar, M L
2010-01-01
This study compared four common non-invasive indices with an invasive index for determining the anaerobic threshold (AT) in 22 adult male rowers using a Concept2 rowing ergometer. A criterion-standard progressive incremental test (invasive method) measured blood lactate concentrations to determine the 4 mmol/l threshold (La4-AT) and the Dmax AT (Dm-AT). This was compared with three indices obtained by analysis of respiratory gases and one based on the heart rate (HR) deflection point (HRDP), all of which used the Conconi test (non-invasive methods). In the Conconi test, the HRDP was determined whilst continuously increasing the power output (PO) by 25 W/min and measuring respiratory gases and HR. The La4-AT and Dm-AT values differed slightly with respect to oxygen uptake, PO, and HR; however, the AT values significantly correlated with each other and with the four non-invasive methods. In conclusion, the non-invasive indices were comparable with the invasive index and could, therefore, be used in the assessment of AT during rowing ergometer use. In this population of elite rowers, the Conconi threshold (Con-AT), based on the measurement of the HRDP, tended to be the most adequate way of estimating AT for training regulation purposes.
Kohli, Preeti; Storck, Kristina A.; Schlosser, Rodney J.
2016-01-01
Differences in testing modalities and cut-points used to define olfactory dysfunction contribute to the wide variability in estimating the prevalence of olfactory dysfunction in chronic rhinosinusitis (CRS). The aim of this study is to report the prevalence of olfactory impairment using each component of the Sniffin’ Sticks test (threshold, discrimination, identification, and total score) with age-adjusted and ideal cut-points from normative populations. Patients meeting diagnostic criteria for CRS were enrolled from rhinology clinics at a tertiary academic center. Olfaction was assessed using the Sniffin’ Sticks test. The study population consisted of 110 patients. The prevalence of normosmia, hyposmia, and anosmia using total Sniffin’ Sticks score was 41.8%, 20.0%, and 38.2% using age-appropriate cut-points and 20.9%, 40.9%, and 38.2% using ideal cut-points. Olfactory impairment estimates for each dimension mirrored these findings, with threshold yielding the highest values. Threshold, discrimination, and identification were also found to be significantly correlated to each other (P < 0.001). In addition, computed tomography scores, asthma, allergy, and diabetes were found to be associated with olfactory dysfunction. In conclusion, the prevalence of olfactory dysfunction is dependent upon olfactory dimension and if age-adjusted cut-points are used. The method of olfactory testing should be chosen based upon specific clinical and research goals. PMID:27469973
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
Li, Jing; Blakeley, Daniel; Smith?, Robert J.
2011-01-01
The basic reproductive ratio, R0, is one of the fundamental concepts in mathematical biology. It is a threshold parameter, intended to quantify the spread of disease by estimating the average number of secondary infections in a wholly susceptible population, giving an indication of the invasion strength of an epidemic: if R0 < 1, the disease dies out, whereas if R0 > 1, the disease persists. R0 has been widely used as a measure of disease strength to estimate the effectiveness of control measures and to form the backbone of disease-management policy. However, in almost every aspect that matters, R0 is flawed. Diseases can persist with R0 < 1, while diseases with R0 > 1 can die out. We show that the same model of malaria gives many different values of R0, depending on the method used, with the sole common property that they have a threshold at 1. We also survey estimated values of R0 for a variety of diseases, and examine some of the alternatives that have been proposed. If R0 is to be used, it must be accompanied by caveats about the method of calculation, underlying model assumptions and evidence that it is actually a threshold. Otherwise, the concept is meaningless. PMID:21860658
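One standard way to compute R0, among the several methods the abstract alludes to, is the next-generation matrix; a Python sketch of the general recipe and the simple SIR special case:

import numpy as np

def r0_next_generation(F, V):
    # R0 as the spectral radius of F V^{-1}, where F holds new-infection
    # rates and V transition rates among infected compartments, both
    # linearized at the disease-free equilibrium.
    return max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# SIR special case: F = [[beta]], V = [[gamma]] gives R0 = beta / gamma,
# with the epidemic threshold at R0 = 1, the one property shared by the
# otherwise divergent R0 definitions discussed above.
print(r0_next_generation(np.array([[0.3]]), np.array([[0.1]])))  # -> 3.0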
Bosmans, Judith E; Coupé, Veerle M H; Knottnerus, Bart J; Geerlings, Suzanne E; Moll van Charante, Eric P; Ter Riet, Gerben
2017-01-01
Uncomplicated Urinary Tract Infections (UTIs) are common in primary care resulting in substantial costs. Since antimicrobial resistance against antibiotics for UTIs is rising, accurate diagnosis is needed in settings with low rates of multidrug-resistant bacteria. To compare the cost-effectiveness of different strategies to diagnose UTIs in women who contacted their general practitioner (GP) with painful and/or frequent micturition between 2006 and 2008 in and around Amsterdam, The Netherlands. This is a model-based cost-effectiveness analysis using data from 196 women who underwent four tests: history, urine stick, sediment, dipslide, and the gold standard, a urine culture. Decision trees were constructed reflecting 15 diagnostic strategies comprising different parallel and sequential combinations of the four tests. Using the decision trees, for each strategy the costs and the proportion of women with a correct positive or negative diagnosis were estimated. Probabilistic sensitivity analysis was used to estimate uncertainty surrounding costs and effects. Uncertainty was presented using cost-effectiveness planes and acceptability curves. Most sequential testing strategies resulted in higher proportions of correctly classified women and lower costs than parallel testing strategies. For different willingness to pay thresholds, the most cost-effective strategies were: 1) performing a dipstick after a positive history for thresholds below €10 per additional correctly classified patient, 2) performing both a history and dipstick for thresholds between €10 and €17 per additional correctly classified patient, 3) performing a dipstick if history was negative, followed by a sediment if the dipstick was negative for thresholds between €17 and €118 per additional correctly classified patient, 4) performing a dipstick if history was negative, followed by a dipslide if the dipstick was negative for thresholds above €118 per additional correctly classified patient. Depending on decision makers' willingness to pay for one additional correctly classified woman, the strategy consisting of performing a history and dipstick simultaneously (ceiling ratios between €10 and €17) or performing a sediment if history and subsequent dipstick are negative (ceiling ratios between €17 and €118) are the most cost-effective strategies to diagnose a UTI.
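The willingness-to-pay comparison above is a net-benefit calculation; a Python sketch with hypothetical per-strategy costs and classification probabilities (not the study's figures):

def best_strategy(strategies, wtp_eur):
    # strategies: name -> (expected cost in euros, P(correct classification)).
    # Net benefit = wtp * P(correct) - cost; the optimal strategy can switch
    # as the willingness to pay per additional correctly classified woman rises.
    net = {name: wtp_eur * p - c for name, (c, p) in strategies.items()}
    return max(net, key=net.get)

example = {'history+dipstick': (4.0, 0.78),            # hypothetical values
           'dipstick then sediment': (6.5, 0.82)}
print(best_strategy(example, wtp_eur=12), best_strategy(example, wtp_eur=120))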
Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold
NASA Technical Reports Server (NTRS)
Srivastava, M. K.; Temkin, A.
1991-01-01
The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered in form, in contrast to the previous angle-independent model. A second energy regime is also identified, wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) ≈ 0.0003 and E(b) ≈ 0.25 eV.
Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2011-06-01
A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. The modelling partitions the observed daily rainfall data into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of southern Italy. The results show that a two-harmonic Fourier law fits the time-varying intensity of the rainfall occurrence process well, with no statistically significant evidence of changes in the validation period for the different threshold values.
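A sketch of the occurrence model in Python: a two-harmonic Fourier intensity and simulation of threshold-exceedance times by Ogata thinning. Parameter values are illustrative and must keep the intensity nonnegative.

import numpy as np

def intensity(t, a0, a, b, period=365.25):
    # Two-harmonic Fourier intensity of the non-homogeneous Poisson process.
    w = 2.0 * np.pi / period
    return a0 + sum(a[k] * np.cos((k + 1) * w * t) +
                    b[k] * np.sin((k + 1) * w * t) for k in range(2))

def simulate_occurrences(a0, a, b, t_end, rng=None):
    # Ogata thinning: propose events at the upper bound lam_max, accept
    # each with probability intensity(t)/lam_max; returns times in days.
    rng = rng or np.random.default_rng()
    lam_max = a0 + sum(np.hypot(a[k], b[k]) for k in range(2))
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(out)
        if rng.random() < intensity(t, a0, a, b) / lam_max:
            out.append(t)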
Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B
2015-11-19
There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m(-3) value, while Health Canada recommends 200 Bq m(-3). Our objective was to describe how different thresholds characterize ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m(-3) were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high-vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m(-3) were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m(-3) for the province.
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
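A compressed numpy sketch of the POET construction; for simplicity it uses a constant threshold tau, where the paper's estimator uses adaptive, entry-dependent thresholds.

import numpy as np

def poet(X, K, tau):
    # X: n x p data matrix (rows = observations). Split the sample
    # covariance into a rank-K principal-component part plus a residual,
    # soft-threshold the residual's off-diagonal entries, and recombine.
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    top = np.argsort(vals)[::-1][:K]
    low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    R = S - low_rank
    soft = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    R_sparse = soft - np.diag(np.diag(soft)) + np.diag(np.diag(R))  # keep diagonal
    return low_rank + R_sparse

Setting K = 0 recovers pure thresholding of the sample covariance, and tau = 0 with K factors recovers the factor-based estimator, which is how the abstract's special cases arise.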
Climate Change, Population Immunity, and Hyperendemicity in the Transmission Threshold of Dengue
Oki, Mika; Yamamoto, Taro
2012-01-01
Background It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist, a state known as hyperendemicity, and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. Methods and Findings We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. Conclusions The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas. PMID:23144746
Climate change, population immunity, and hyperendemicity in the transmission threshold of dengue.
Oki, Mika; Yamamoto, Taro
2012-01-01
It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist, a state known as hyperendemicity, and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas.
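The central quantity here, a mosquito-density threshold above which an epidemic becomes possible, can be illustrated with a classical Ross-Macdonald-style calculation. The sketch below is not the authors' simulation model (which tracks seroprevalence dynamically); it only shows how temperature-sensitive parameters and population immunity move the threshold in opposite directions. All parameter names and values are invented for illustration.

```python
import numpy as np

def threshold_mpp(a, b, c, mu, tau, r, susceptible=1.0):
    """Ross-Macdonald-style threshold mosquito density (MPP), a stand-in
    for the paper's simulation model.  R0 = m a^2 b c exp(-mu*tau)/(r*mu);
    an outbreak requires R0 * susceptible > 1, so solve for m at equality.
    a: bites per mosquito per day; b, c: transmission probabilities;
    mu: mosquito mortality rate; tau: extrinsic incubation period (days);
    r: human recovery rate; susceptible: fraction of humans not immune."""
    r0_per_mosquito = a**2 * b * c * np.exp(-mu * tau) / (r * mu)
    return 1.0 / (r0_per_mosquito * susceptible)

# Warmer temperatures shorten the extrinsic incubation period tau and so
# lower the threshold; population immunity (susceptible < 1) raises it.
print(threshold_mpp(a=0.5, b=0.5, c=0.5, mu=0.1, tau=10, r=1/7, susceptible=0.6))
```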
NASA Astrophysics Data System (ADS)
Kim, Y.; Du, J.; Kimball, J. S.
2017-12-01
The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similar calibrated 37 GHz, vertically-polarized (V-pol) brightness temperatures (Tb) from SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3% and 84.3% for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25 km) is insufficient to distinguish finer scale FT heterogeneity. In this study, we tested alternative finer scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid cell-wise FT thresholds calibrated using ERA-Interim reanalysis based SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for the respective PM (AM) retrievals from the 3.125-km and 6-km Tb records, relative to in situ SAT measurement based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced grid Tb records were attributed to several factors, including different noise contributions from underlying Tb processing and spatial mismatches between Tb retrievals and SAT calibrated FT thresholds.
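The seasonal threshold classification described above reduces, at each grid cell, to comparing daily Tb against a calibrated cell-wise threshold. A minimal numpy sketch with hypothetical function names, using a brute-force calibration against SAT-derived frozen/thawed labels (the actual algorithm's seasonal refinements are omitted):

```python
import numpy as np

def calibrate_threshold(tb, sat, candidates=np.arange(250.0, 270.0, 0.5)):
    """For one grid cell: pick the Tb threshold (K) that best reproduces
    the frozen/thawed state implied by surface air temperature
    (SAT <= 0 C treated as frozen).  tb, sat: 1-D daily time series."""
    frozen_ref = sat <= 0.0
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        acc = np.mean((tb < t) == frozen_ref)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def classify_ft(tb_daily, tb_threshold):
    """Daily FT map: a cell is labelled frozen (1) whenever its brightness
    temperature falls below its calibrated threshold.
    tb_daily: (days, rows, cols) array; tb_threshold: (rows, cols) array."""
    return np.where(tb_daily < tb_threshold[None, :, :], 1, 0)
```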
Gerhardsson, Lars; Balogh, Istvan; Hambert, Per-Arne; Hjortsberg, Ulf; Karlsson, Jan-Erik
2005-01-01
The aim of the present study was to compare the development of vibration white fingers (VWF) in workers in relation to different ways of estimating exposure, and their relationship to the standard ISO 5349, annex A. Nineteen vibration-exposed (grinding machines) male workers completed a questionnaire followed by a structured interview including questions regarding their estimated hand-held vibration exposure. Neurophysiological tests such as fractionated nerve conduction velocity in hands and arms, vibrotactile perception thresholds and temperature thresholds were determined. The workers' subjective estimate of the mean daily exposure time to vibrating tools was 192 min (range 18-480 min). The mean exposure time calculated from the consumption of grinding wheels was 42 min (range 18-60 min); the subjective estimates thus represented approximately a four-fold overestimation (Wilcoxon's signed ranks test, p<0.001). Thus, objective measurements of exposure time related to the standard ISO 5349, in this case based on the consumption of grinding wheels, will in most cases give a better basis for adequate risk assessment than self-reported exposure.
Guo, J; Booth, M; Jenkins, J; Wang, H; Tanner, M
1998-12-01
The World Bank Loan Project for schistosomiasis in China commenced field activities in 1992. In this paper, we describe disease control strategies for different levels of endemicity, and estimate unit costs and total expenditure of screening, treatment (cattle and humans) and snail control for 8 provinces where Schistosoma japonicum infection is endemic. Overall, we estimate that more than 21 million US dollars were spent on field activities during the first three years of the project. Mollusciciding (43% of the total expenditure) and screening (28% of the total) are estimated to have been the most expensive field activities. However, despite the expense of screening, a simple model predicts that selective chemotherapy could have been cheaper than mass chemotherapy in areas where infection prevalence was higher than 15%, which was the threshold for mass chemotherapy intervention. It is concluded that considerable cost savings could be made in the future by narrowing the scope of snail control activities, redefining the threshold infection prevalence for mass chemotherapy, defining smaller administrative units, and developing rapid assessment tools.
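The cost comparison behind that prediction is simple arithmetic: mass treatment costs one treatment per person, while selective treatment costs one screen per person plus treatments for the positives. A sketch with invented unit costs (the project's actual figures are not reproduced here):

```python
def cost_per_capita(prevalence, screen_cost, treat_cost):
    """Break-even arithmetic for mass vs. selective chemotherapy.
    Mass: everyone treated.  Selective: everyone screened, positives treated.
    Unit costs are illustrative, not the study's estimates."""
    mass = treat_cost
    selective = screen_cost + prevalence * treat_cost
    return mass, selective

# Selective chemotherapy is cheaper whenever
#   screen_cost < (1 - prevalence) * treat_cost,
# so the break-even prevalence depends only on the cost ratio.
for p in (0.10, 0.15, 0.30):
    print(p, cost_per_capita(p, screen_cost=0.4, treat_cost=1.0))
```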
Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.
2010-01-01
Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field-scale (apparent) estimated reaction rates and isotopic fractionations and local-scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample-based estimates of "apparent" parameters with "true" (intrinsic) values. For this aquifer, non-Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2 threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.
A novel approach to estimation of the time to biomarker threshold: applications to HIV.
Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc
2016-11-01
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
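The core of the estimator is the patient-specific expected trajectory from a linear mixed model; the expected time to threshold is where that trajectory crosses the clinical cutoff. The sketch below shows that idea only, with hypothetical names; it omits the persistence criterion (two consecutive sub-threshold measurements) and the residual-variance correction that the paper's full expression includes.

```python
import numpy as np

def expected_time_to_threshold(beta, b_i, threshold):
    """Given a linear mixed model, the patient-specific expected trajectory is
    E[y_i(t)] = (beta0 + b0_i) + (beta1 + b1_i) * t, and the expected time at
    which it reaches the threshold solves E[y_i(t)] = threshold.
    beta: fixed effects (intercept, slope); b_i: patient random effects."""
    intercept = beta[0] + b_i[0]
    slope = beta[1] + b_i[1]
    if slope >= 0:
        return np.inf          # trajectory never declines to the threshold
    return (threshold - intercept) / slope

# e.g. a square-root-scale CD4 trajectory declining from 26.5 toward 20:
print(expected_time_to_threshold(beta=(25.0, -0.8), b_i=(1.5, -0.1), threshold=20.0))
```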
Estimation of Effect Thresholds for the Development of Water Quality Criteria
Biological and ecological effect thresholds can be used for determining safe levels of nontraditional stressors. The U.S. EPA Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (WQC) [36] uses a risk assessment approach to estimate effect thresholds...
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
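The probability statement at the heart of this model has a closed form: for a non-homogeneous Poisson process, the number of exceedances in an interval is Poisson with mean equal to the integrated rate, and a Weibull-type rate integrates in closed form. A self-contained sketch (parameter values invented):

```python
import math

def prob_exceedances(k, t1, t2, alpha, beta):
    """Probability that a noise threshold is exceeded exactly k times in
    (t1, t2] under a non-homogeneous Poisson process with Weibull-type rate
    lambda(t) = (beta/alpha) * (t/alpha)**(beta - 1), whose integral over
    (t1, t2] is Lambda = (t2/alpha)**beta - (t1/alpha)**beta."""
    lam = (t2 / alpha) ** beta - (t1 / alpha) ** beta
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability of at most 2 exceedances in the next 10 time units:
p = sum(prob_exceedances(k, 0.0, 10.0, alpha=5.0, beta=1.3) for k in range(3))
print(p)
```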
48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.
Code of Federal Regulations, 2012 CFR
2012-10-01
Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold but not the simplified acquisition threshold (Section 529.401-70, Federal Acquisition Regulations System).
Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A
2017-02-01
1. The objectives of the current study were to investigate the effect of incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome (AS) on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS, and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods, restricted maximum likelihood and a Bayesian approach via Gibbs sampling, were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error (RMSE) for direct heritabilities was also calculated. 3. The results showed that when incidence rates of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations decreased with increasing incidence rates of ascites in the population, from 0.678 and 0.587 at the 5% level of ascites to 0.393 and -0.260 at 50% occurrence for linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model, because it resulted in higher heritability estimates compared with the linear model, and BW5 could be one of the main components of selection goals.
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four datasets of medians (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 reservoirs in Texas, USA. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a-TP and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to limitations of statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar
2011-11-01
Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes, which are of potential interest for exploring concentration-response relationships, have been relatively unexplored. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan and concentrations of the pollutants fine particulate matter (PM2.5), CO, NO2 and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg/m3 using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared to the standard linear relationship; e.g., in the time-series analysis, an interquartile range increase (9.2 μg/m3) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
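The profile-likelihood idea is easy to state in code: for each candidate threshold, refit the health model with the pollutant entering only through its excess above the threshold, then keep the candidate with the highest log-likelihood. A minimal Poisson-regression stand-in follows (the paper uses generalized additive and conditional logistic models; all names here are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def profile_threshold(y, x, covars, grid):
    """Profile-likelihood threshold search in a Poisson regression.
    y: daily event counts; x: pollutant series; covars: (n, k) confounders;
    grid: candidate thresholds.  For each tau the pollutant enters as the
    excess max(x - tau, 0); tau maximizing the log-likelihood is returned."""
    best = (None, -np.inf)
    for tau in grid:
        excess = np.maximum(x - tau, 0.0)
        X = sm.add_constant(np.column_stack([excess, covars]))
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        if fit.llf > best[1]:
            best = (tau, fit.llf)
    return best  # (threshold estimate, profiled log-likelihood)
```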
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer.
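The decomposition step can be illustrated with a linearized two-material model: the calibration phantoms give an effective 2x2 attenuation matrix, and inverting it per pixel turns the two log-signals into glandular and adipose thicknesses. A toy sketch with an invented calibration matrix (real systems fit nonlinear calibration functions, so this is illustrative only):

```python
import numpy as np

def basis_decomposition(logL, logH, M):
    """Linearized dual-energy basis decomposition: the low- and high-energy
    log-signals are modelled as a 2x2 linear map M of the glandular and
    adipose thicknesses; inverting M yields the thicknesses, and
    density = glandular / (glandular + adipose)."""
    t = np.linalg.solve(M, np.stack([logL, logH]))  # (glandular, adipose)
    t_g, t_a = np.maximum(t, 0.0)                   # clip negative thicknesses
    return t_g / np.maximum(t_g + t_a, 1e-9)

# M[i, j]: effective attenuation of basis material j at energy i, as would
# come from the glandular/adipose calibration phantoms (values invented).
M = np.array([[0.80, 0.55],
              [0.45, 0.35]])
print(basis_decomposition(logL=1.2, logH=0.7, M=M))
```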
Protection of Space Vehicles from Micrometeoroid/Orbital Debris (MMOD) Damages
NASA Technical Reports Server (NTRS)
Barr, Stephanie
2007-01-01
As the environment that puts space vehicles at risk can never be eliminated, space vehicles must implement protection against the MMOD environment. In general, this protection has been implemented on a risk-estimate basis, largely focused on estimates of impactor size and flux. However, there is some uncertainty in applying these methods, developed from data gathered in Earth orbit, to excursions beyond it. This paper discusses thresholds and processes used in the past and suggests additional refinements or methods that could be used for future space endeavors.
Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration
2007-09-01
[Abstract not recoverable; only figure-caption fragments remain. They describe a master event of March 17, 2005 whose seismic signals, like those of detected events, are followed by infrasound arrivals; correlation-coefficient traces showing significant array gain, with a detected event co-located with the master event recording the same time differences; and estimation of the detection threshold reduction for a range of highly repeating seismic sources using arrays of different configurations.]
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict thresholds accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction.
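The measurement step, finding the AP turning point from the third derivative of the membrane potential, is straightforward to sketch numerically. Function and parameter names below are hypothetical, and the constant offset the authors optimize is not included:

```python
import numpy as np

def ap_turning_points(v, dt, spike_idx, window=200):
    """For each detected action potential, look back over a short window and
    take the largest peak of the third derivative of the membrane potential
    as the AP turning point (the threshold estimate, up to a constant offset).
    v: membrane potential trace; dt: sample interval (s);
    spike_idx: sample indices of AP peaks; window: look-back in samples."""
    d3 = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)
    thresholds = []
    for i in spike_idx:
        lo = max(0, i - window)
        j = lo + int(np.argmax(d3[lo:i]))  # turning point preceding the peak
        thresholds.append(v[j])
    return np.array(thresholds)
```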
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic), the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and due to PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
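The surrogate-free estimate can be mimicked with any two-component mixture fit: the hypoxic fraction is the weight of the higher-uptake component. A Gaussian mixture is used below purely as a stand-in for whatever component distributions the authors fitted (scikit-learn assumed available; the convergence caveats noted in the abstract apply here too):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def hypoxic_fraction_mle(voxel_values):
    """Fit a two-component mixture (normoxic vs. hypoxic) to tumour voxel
    activity values and report the weight of the higher-mean component
    as the hypoxic fraction."""
    x = np.asarray(voxel_values).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
    hypoxic = int(np.argmax(gm.means_.ravel()))  # higher-uptake component
    return gm.weights_[hypoxic]
```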
Kakinuma, Kaoru; Sasaki, Takehiro; Jamsran, Undarmaa; Okuro, Toshiya; Takeuchi, Kazuhiko
2014-10-01
Applying the threshold concept to rangeland management is an important challenge in semi-arid and arid regions. Threshold recognition and prediction are necessary to enable local pastoralists to prevent the occurrence of an undesirable state that would result from unsustainable grazing pressure, but this requires a better understanding of the pastoralists' perception of vegetation threshold changes. We estimated plant species cover in survey plots along grazing gradients in steppe and desert-steppe areas of Mongolia. We also conducted interviews with local pastoralists and asked them to evaluate whether the plots were suitable for grazing. Floristic composition changed nonlinearly along the grazing gradient in both the desert-steppe and steppe areas. Pastoralists observed the floristic composition changes along the grazing gradients, but their evaluations of grazing suitability did not always decrease along the grazing gradients, both of which included areas in a post-threshold state. These results indicate that local pastoralists and scientists may have different perceptions of vegetation states, even though both groups used plant species and coverage as indicators in their evaluations. Therefore, in future studies of rangeland management, researchers and pastoralists should exchange their knowledge and perceptions to successfully apply the threshold concept to rangeland management.
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
Nanosecond laser pulses for mimicking thermal effects on nanostructured tungsten-based materials
NASA Astrophysics Data System (ADS)
Besozzi, E.; Maffini, A.; Dellasega, D.; Russo, V.; Facibeni, A.; Pazzaglia, A.; Beghi, M. G.; Passoni, M.
2018-03-01
In this work, we exploit nanosecond laser irradiation as a compact solution for investigating the thermomechanical behavior of tungsten materials under extreme thermal loads at the laboratory scale. Heat flux factor thresholds for various thermal effects, such as melting, cracking and recrystallization, are determined under both single and multishot experiments. The use of nanosecond lasers for mimicking thermal effects induced on W by fusion-relevant thermal loads is thus validated by direct comparison of the thresholds obtained in this work and the ones reported in the literature for electron beams and millisecond laser irradiation. Numerical simulations of temperature and thermal stress performed on a 2D thermomechanical code are used to predict the heat flux factor thresholds of the different thermal effects. We also investigate the thermal effect thresholds of various nanostructured W coatings. These coatings are produced by pulsed laser deposition, mimicking W coatings in tokamaks and W redeposited layers. All the coatings show lower damage thresholds with respect to bulk W. In general, thresholds decrease as the porosity degree of the materials increases. We thus propose a model to predict these thresholds for coatings with various morphologies, simply based on their porosity degree, which can be directly estimated by measuring the variation of the coating mass density with respect to that of the bulk.
Krumm, Bianca; Klump, Georg; Köppl, Christine; Langemann, Ulrike
2017-09-27
We measured the auditory sensitivity of the barn owl (Tyto alba), using a behavioural Go/NoGo paradigm in two different age groups, one younger than 2 years (n = 4) and another more than 13 years of age (n = 3). In addition, we obtained thresholds from one individual aged 23 years, three times during its lifetime. For computing audiograms, we presented test frequencies of between 0.5 and 12 kHz, covering the hearing range of the barn owl. Average thresholds in quiet were below 0 dB sound pressure level (SPL) for frequencies between 1 and 10 kHz. The lowest mean threshold was -12.6 dB SPL at 8 kHz. Thresholds were the highest at 12 kHz, with a mean of 31.7 dB SPL. Test frequency had a significant effect on auditory threshold but age group had no significant effect. There was no significant interaction between age group and test frequency. Repeated threshold estimates over 21 years from a single individual showed only a slight increase in thresholds. We discuss the auditory sensitivity of barn owls with respect to other species and suggest that birds, which generally show a remarkable capacity for regeneration of hair cells in the basilar papilla, are naturally protected from presbycusis. © 2017 The Author(s).
Koka, Kanthaiah; Saoji, Aniket A; Attias, Joseph; Litvak, Leonid M
2017-01-01
Although cochlear implants (CI) traditionally have been used to treat individuals with bilateral profound sensorineural hearing loss, a recent trend is to implant individuals with residual low-frequency hearing. Notably, many of these individuals demonstrate an air-bone gap (ABG) in low-frequency, pure-tone thresholds following implantation. An ABG is the difference between audiometric thresholds measured using air conduction (AC) and bone conduction (BC) stimulation. Although behavioral AC thresholds are straightforward to assess, BC thresholds can be difficult to measure in individuals with severe-to-profound hearing loss because of vibrotactile responses to high-level, low-frequency stimulation and the potential contribution of hearing in the contralateral ear. Because of these technical barriers to measuring behavioral BC thresholds in implanted patients with residual hearing, it would be helpful to have an objective method for determining ABG. This study evaluated an innovative technique for measuring electrocochleographic (ECochG) responses using the cochlear microphonic (CM) response to assess AC and BC thresholds in implanted patients with residual hearing. Results showed high correlations between CM thresholds and behavioral audiograms for AC and BC conditions, thereby demonstrating the feasibility of using ECochG as an objective tool for quantifying ABG in CI recipients.
Physiological and behavioral effects of tilt-induced body fluid shifts
NASA Technical Reports Server (NTRS)
Parker, D. E.; Tjernstrom, O.; Ivarsson, A.; Gulledge, W. L.; Poston, R. L.
1983-01-01
This paper addresses the 'fluid shift theory' of space motion sickness. The primary purpose of the research was the development of procedures to assess individual differences in response to rostral body fluid shifts on earth. Experiment I examined inner ear fluid pressure changes during head-down tilt in intact human beings. Tilt produced reliable changes. Differences among subjects and between ears within the same subject were observed. Experiment II examined auditory threshold changes during tilt. Tilt elicited increased auditory thresholds, suggesting that sensory depression may result from increased inner ear fluid pressure. Additional observations on rotation magnitude estimation during head-down tilt, which indicate that rostral fluid shifts may depress semicircular canal activity, are briefly described. The results of this research suggest that the inner ear pressure and auditory threshold shift procedures could be used to assess individual differences among astronauts prior to space flight. Results from the terrestrial observations could be related to reported incidence/severity of motion sickness in space and used to evaluate the fluid shift theory of space motion sickness.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
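The classification step described above maps directly onto a standard logistic regression: fit P(rain rate > threshold | covariates) on training pixels, then use the fitted probabilities to estimate the fractional rainy area. A minimal statsmodels sketch (variable names hypothetical; the partial-likelihood treatment of dependence described in the abstract is not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

def fit_exceedance_model(exceed, covariates):
    """Logistic regression for the conditional probability that pixel rain
    rate exceeds a fixed threshold, given related covariates.
    exceed: 0/1 indicator per pixel; covariates: (n, k) array."""
    X = sm.add_constant(covariates)
    return sm.Logit(exceed, X).fit(disp=0)

# The fitted model gives P(rain > threshold | covariates) per pixel; the
# mean of these probabilities over an area estimates the fractional rainy
# area, which the abstract notes is highly correlated with the
# area-averaged rain rate.
```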
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, M.P.; Rouvray, D.H.
The propensity of hydrocarbons to form soot in a diffusion flame is correlated here for the first time against various topological indices. Two of the indices, the hydrogen deficiency index and the Balaban distance sum connectivity index, were found to be especially valuable for correlational purposes. For a total of 98 hydrocarbon fuel molecules of differing types, regression analyses yielded good correlations between the threshold soot indices (TSIs) for diffusion flames and these two indices. An equation which can be used to estimate TSI values in fuel molecules is presented.
Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro
2011-01-01
The aim of this paper is to classify land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a CCD-based digital colour camera. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour into three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is automatically obtained as a combination of three thresholding strategies: (a) Otsu's method, (b) the Isodata algorithm, and (c) fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are among the main findings of the paper, which allow an estimation of the damage and a prediction of the oat production.
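Two of the three thresholding strategies are classical histogram methods and fit in a few lines; the fuzzy-entropy component is omitted here for brevity, and simple averaging stands in for the paper's fusion rule. A numpy sketch:

```python
import numpy as np

def otsu(gray, bins=256):
    """Otsu's threshold: maximize the between-class variance of the histogram."""
    hist, edges = np.histogram(gray, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    best_t, best_var = mids[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * mids[:i]).sum() / w0   # class means below/above the cut
        m1 = (w[i:] * mids[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = mids[i], var
    return best_t

def isodata(gray, tol=0.5):
    """Isodata (Ridler-Calvard): iterate t = mean of the two class means.
    Assumes both classes stay non-empty during iteration."""
    t = gray.mean()
    while True:
        t_new = 0.5 * (gray[gray <= t].mean() + gray[gray > t].mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def fused_threshold(channel):
    """Stand-in fusion: average the available estimates (fuzzy term omitted)."""
    return 0.5 * (otsu(channel) + isodata(channel))
```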
Further studies to extend and test the area-time-integral technique applied to satellite data
NASA Technical Reports Server (NTRS)
Smith, Paul L.; Vonderhaar, Thomas H.
1993-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'variable-threshold approach'. In the former approach, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are most closely related to the corresponding rainfall amounts. Results thus far have indicated that a strong correlation exists between the rain volumes and the satellite ATI values, but the optimum threshold for this relationship seems to differ from one geographic location to another. The difference is probably related to differences in the basic precipitation mechanisms that dominate in the different regions. The average rainfall rate associated with each cloudy pixel is also found to vary across the spectrum of ATI values. Work on the second, or 'variable-threshold', approach for determining the satellite ATI values was essentially suspended during this period due to exhaustion of project funds. Most of the ATI work thus far has dealt with cloud clusters from the Lagrangian or 'floating-target' point of view. For many purposes, however, the Eulerian or 'fixed-target' perspective is more appropriate. For a very large target area encompassing entire cluster life histories, the rain volume-ATI relationship would obviously be the same in either case. The important question for the Eulerian perspective is how small the fixed area can be made while maintaining consistency in that relationship.
Wang, Z; Gu, J; Jiang, X J
2017-04-20
Objective: To examine the relationship between the auditory steady-state response (ASSR) threshold and the C-level and behavioral T-level in cochlear implants in prelingually deaf children. Method: One hundred and twelve children with Nucleus CI24R(CA) cochlear implants were divided into a residual-hearing group and a no-residual-hearing group on the basis of the results of ASSR before operation. The differences between the two groups in C-level and behavioral T-level one year after operation were compared. Result: There were differences in C-level and behavioral T-level between the residual-hearing group and the no-residual-hearing group (P<0.05 or P<0.01). Conclusion: According to the results of ASSR before operation, we can estimate the effect of cochlear implants, providing a reference for selecting the ear to operate on, and a reasonable expectation for physicians and the patients' parents. Copyright © by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc
2016-04-01
In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared as a powerful method at a first exploratory step, to detect ecological thresholds at large spatial scale. The thresholds that were identified here must be reinforced by the separate analysis of other aquatic communities and may be used then to set protective environmental standards after consideration of natural variability among lakes.
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased despite sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
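The estimation problem here (threshold crossings hidden between sampling times) can be illustrated with the simplest hidden-process model: a two-state continuous-time Markov chain observed at irregular intervals, whose two rates yield exactly the three target quantities (expected duration, return period, long-term probability of violation). The sketch below is a maximum-likelihood stand-in for the paper's Bayesian fit; all names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def transition_matrix(a, b, dt):
    """Two-state CTMC (0 = compliant, 1 = violating): exact transition
    probabilities over a gap dt, for entry rate a (0->1) and exit rate b (1->0)."""
    e = np.exp(-(a + b) * dt)
    return np.array([[b + a * e, a - a * e],
                     [b - b * e, a + b * e]]) / (a + b)

def fit_hidden_rates(states, gaps):
    """Estimate (a, b) from violation indicators observed at irregular gaps.
    Then: mean duration = 1/b, return period = 1/a + 1/b, and the long-term
    probability of violation = a / (a + b)."""
    def nll(log_rates):
        a, b = np.exp(log_rates)           # work on log scale for positivity
        ll = 0.0
        for s0, s1, dt in zip(states[:-1], states[1:], gaps):
            ll += np.log(transition_matrix(a, b, dt)[s0, s1])
        return -ll
    res = minimize(nll, x0=np.log([0.1, 1.0]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    return {"duration": 1 / b, "return_period": 1 / a + 1 / b,
            "p_violation": a / (a + b)}
```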
Time Perception and Depressive Realism: Judgment Type, Psychophysical Functions and Bias
Kornbrot, Diana E.; Msetfi, Rachel M.; Grimwood, Melvyn J.
2013-01-01
The effect of mild depression on time estimation and production was investigated. Participants made both magnitude estimation and magnitude production judgments for five time intervals (specified in seconds) from 3 sec to 65 sec. The parameters of the best fitting psychophysical function (power law exponent, intercept, and threshold) were determined individually for each participant in every condition. There were no significant effects of mood (high BDI, low BDI) or judgment (estimation, production) on the mean exponent, n = .98, 95% confidence interval (.96–1.04) or on the threshold. However, the intercept showed a ‘depressive realism’ effect, where high BDI participants had a smaller deviation from accuracy and a smaller difference between estimation and judgment than low BDI participants. Accuracy bias was assessed using three measures of accuracy: difference, defined as psychological time minus physical time, ratio, defined as psychological time divided by physical time, and a new logarithmic accuracy measure defined as ln (ratio). The ln (ratio) measure was shown to have approximately normal residuals when subjected to a mixed ANOVA with mood as a between groups explanatory factor and judgment and time category as repeated measures explanatory factors. The residuals of the other two accuracy measures flagrantly violated normality. The mixed ANOVAs of accuracy also showed a strong depressive realism effect, just like the intercepts of the psychophysical functions. There was also a strong negative correlation between estimation and production judgments. Taken together these findings support a clock model of time estimation, combined with additional cognitive mechanisms to account for the depressive realism effect. The findings also suggest strong methodological recommendations.
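Both the psychophysical function and the paper's preferred accuracy measure are short computations. A sketch that fits psi = a * t^n by log-log regression (the threshold parameter of the full model is omitted) and returns the ln(ratio) accuracy measure; the data values below are invented:

```python
import numpy as np

def fit_power_law(t_physical, t_psych):
    """Fit psi = a * t**n in log-log space and return the exponent n, the
    intercept a, and the log accuracy measure ln(psychological/physical)."""
    n, log_a = np.polyfit(np.log(t_physical), np.log(t_psych), 1)
    ln_ratio = np.log(np.asarray(t_psych) / np.asarray(t_physical))
    return n, np.exp(log_a), ln_ratio

# Five intervals from 3 to 65 s, judged nearly veridically (n close to 1):
t = np.array([3.0, 8.0, 20.0, 40.0, 65.0])
judged = np.array([3.2, 8.1, 19.0, 41.5, 63.0])
print(fit_power_law(t, judged))
```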
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, Michael G.; Mendell, Mark J.; Sohn, Michael D.
Through mass-balance modeling of various ventilation scenarios that might satisfy the ASHRAE 62.1 Indoor Air Quality (IAQ) Procedure, we estimate indoor concentrations of contaminants of concern (COCs) in California “big box” stores, compare estimates to available thresholds, and for selected scenarios estimate differences in energy consumption. Findings are intended to inform decisions on adding performance-based approaches to ventilation rate (VR) standards for commercial buildings. Using multi-zone mass-balance models and available contaminant source rates, we estimated concentrations of 34 COCs for multiple ventilation scenarios: VRmin (0.04 cfm/ft2), VRmax (0.24 cfm/ft2), and VRmid (0.14 cfm/ft2). We compared COC concentrations with available health, olfactory, and irritant thresholds. We estimated building energy consumption at different VRs using a previously developed EnergyPlus model. VRmax did control all contaminants adequately, but VRmin did not, and VRmid did so only marginally. Air cleaning and local ventilation near strong sources both showed promise. Higher VRs increased indoor concentrations of outdoor air pollutants. Lowering VRs in big box stores in California from VRmax to VRmid would reduce total energy use by an estimated 6.6% and energy costs by 2.5%. Reducing the required VRs in California’s big box stores could reduce energy use and costs, but poses challenges for health and comfort of occupants. Source removal, air cleaning, and local ventilation may be needed at reduced VRs, and even at current recommended VRs. Also, alternative ventilation strategies taking climate and season into account in ventilation schedules may provide greater energy cost savings than constant ventilation rates, while improving IAQ.
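The screening calculation underneath such scenario comparisons is a steady-state mass balance: indoor concentration equals the outdoor contribution plus source strength divided by airflow. A minimal single-zone sketch (all values invented; the study's multi-zone detail is collapsed to one zone):

```python
def steady_state_concentration(c_out, emission, volume, ach, filter_eff=0.0):
    """Single-zone steady-state mass balance for one contaminant.
    c_out: outdoor concentration (mg/m3); emission: indoor source (mg/h);
    volume: zone volume (m3); ach: air-change rate (1/h);
    filter_eff: optional removal efficiency on the supply air."""
    q = ach * volume                              # ventilation airflow, m3/h
    return (q * (1 - filter_eff) * c_out + emission) / q

# Halving the ventilation rate doubles the indoor excess over outdoors,
# which is why indoor-sourced COCs drive the minimum acceptable VR:
for ach in (1.0, 0.5):
    c = steady_state_concentration(c_out=0.01, emission=50.0,
                                   volume=10000.0, ach=ach)
    print(ach, round(c, 4))
```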
Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.
Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W
2018-05-18
Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of the digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges resulted in different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
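The simplest of the reported sensitivity checks, porosity versus segmentation threshold, can be reproduced in a few lines: sweep candidate grayscale thresholds and record the pore fraction each one implies. A synthetic sketch (the random volume stands in for a real FIB/SEM grayscale stack):

```python
import numpy as np

def porosity_vs_threshold(volume, thresholds):
    """Segment the grayscale volume at each candidate threshold (voxels
    below it counted as pore) and report the resulting porosity.  A steep
    slope around the chosen threshold flags a threshold-sensitive estimate."""
    return [(t, float(np.mean(volume < t))) for t in thresholds]

rng = np.random.default_rng(0)
vol = rng.normal(120, 30, size=(64, 64, 64))      # synthetic stand-in stack
for t, phi in porosity_vs_threshold(vol, thresholds=[90, 100, 110]):
    print(t, round(phi, 3))
```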
Gülay, Arda; Smets, Barth F
2015-09-01
Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon
2017-01-01
[Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
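A minimal sketch of the modelling idea, assuming simulated data in place of the study's treadmill measurements: fit HRLT on HRT with a 70/30 split and report the standard error of estimation on the held-out 30%.

```python
# Sketch of the study's regression design on simulated data (n=220, ~11 bpm
# residual noise to echo the reported SEE); not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 220
hrt = rng.normal(160, 12, n)                   # heart rate threshold (bpm)
hrlt = 0.9 * hrt + 10 + rng.normal(0, 11, n)   # synthetic HRLT

idx = rng.permutation(n)
train, test = idx[:154], idx[154:]             # ~70/30 split
slope, intercept = np.polyfit(hrt[train], hrlt[train], 1)

pred = slope * hrt[test] + intercept
see = np.sqrt(np.mean((hrlt[test] - pred) ** 2))
print(f"HRLT ~ {slope:.2f}*HRT + {intercept:.1f}, SEE ~ {see:.1f} bpm")
```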
Sazykina, Tatiana G
2018-02-01
Model predictions of population response to chronic ionizing radiation (endpoint 'morbidity') were made for 11 species of warm-blooded animals differing in body mass and lifespan, from mice to elephants. Predictions were also made for 3 bird species (duck, pigeon, and house sparrow). Calculations were based on analytical solutions of the mathematical model simulating a population response to low-LET ionizing radiation in an ecosystem with a limiting resource (Sazykina, Kryshev, 2016). Model parameters for different species were taken from biological and radioecological databases; allometric relationships were employed for estimating some parameter values. As a threshold of decreased health status in exposed populations ('health threshold'), a 10% reduction in the self-repairing capacity of organisms was suggested, associated with a decline in the ability to sustain environmental stresses. Results of the modeling demonstrate a general increase of population vulnerability to ionizing radiation in animal species of larger size and longevity. Populations of small widespread species (mice, house sparrow; body mass 20-50 g), which are characterized by intensive metabolism and short lifespan, have calculated 'health thresholds' at dose rates of about 6.5-7.5 mGy day⁻¹. Widespread animals with body mass 200-500 g (rat, common pigeon) demonstrate 'health threshold' values at 4-5 mGy day⁻¹. For populations of animals with body mass 2-5 kg (rabbit, fox, raccoon), the indicators of a 10% health decrease are in the range 2-3.4 mGy day⁻¹. For animals with body mass 40-100 kg (wolf, sheep, wild boar), thresholds are within 0.5-0.8 mGy day⁻¹; for herbivorous animals with body mass 200-300 kg (deer, horse), 0.5-0.6 mGy day⁻¹. The lowest health threshold, 0.1 mGy day⁻¹, was estimated for the elephant (body mass around 5000 kg). According to the model results, the differences in population sensitivities of warm-blooded animal species to ionizing radiation generally depend on the metabolic rate and longevity of organisms, as well as on the individual radiosensitivity of biological tissues. The results of the 'health threshold' calculations are formulated as a graded scale of wildlife sensitivities to chronic radiation stress, ranging from potentially vulnerable to more resistant species. Further studies are needed to expand the scale of population sensitivities to radiation to other groups of wildlife: cold-blooded species, invertebrates, and plants. Copyright © 2017 Elsevier Ltd. All rights reserved.
Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.
2018-01-01
Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that the annual apparent survival rate for adult females averaged over 5 years was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when this 5-year average exceeded a threshold value, suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.
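The persistence-threshold idea can be sketched with a simple stochastic projection: simulate a female-only population across candidate survival rates and locate where 100-year persistence first reaches 95%. All demographic numbers below (initial size, recruitment, carrying capacity) are hypothetical, not the bear population's estimates.

```python
# Stochastic persistence sketch with hypothetical demography (not the bear data).
import numpy as np

rng = np.random.default_rng(2)

def persistence_prob(adult_survival, n_reps=300, years=100, n0=300,
                     recruits_per_female=0.10, carrying_capacity=1000):
    persisted = 0
    for _ in range(n_reps):
        n = n0
        for _ in range(years):
            survivors = rng.binomial(n, adult_survival)
            recruits = rng.poisson(recruits_per_female * survivors)
            n = min(survivors + recruits, carrying_capacity)
            if n == 0:
                break
        persisted += n > 0
    return persisted / n_reps

for s in (0.82, 0.86, 0.90, 0.94):
    print(f"mean survival {s:.2f}: P(persist 100 yr) = {persistence_prob(s):.2f}")
```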
Determination of Cost-Effectiveness Threshold for Health Care Interventions in Malaysia.
Lim, Yen Wei; Shafie, Asrul Akmal; Chua, Gin Nie; Ahmad Hassali, Mohammed Azmi
2017-09-01
One major challenge in prioritizing health care using cost-effectiveness (CE) information is when alternatives are more expensive but more effective than existing technology. In such a situation, an external criterion in the form of a CE threshold that reflects the willingness to pay (WTP) per quality-adjusted life-year is necessary. To determine a CE threshold for health care interventions in Malaysia. A cross-sectional, contingent valuation study was conducted using a stratified multistage cluster random sampling technique in four states in Malaysia. One thousand thirteen respondents were interviewed in person for their socioeconomic background, quality of life, and WTP for a hypothetical scenario. The CE thresholds established using the nonparametric Turnbull method ranged from MYR12,810 to MYR22,840 (~US $4,000-US $7,000), whereas those estimated with the parametric interval regression model were between MYR19,929 and MYR28,470 (~US $6,200-US $8,900). Key factors that affected the CE thresholds were education level, estimated monthly household income, and the description of health state scenarios. These findings suggest that there is no single WTP value for a quality-adjusted life-year. The CE threshold estimated for Malaysia was found to be lower than the threshold value recommended by the World Health Organization. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
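For readers unfamiliar with the nonparametric Turnbull method mentioned above, the sketch below computes the standard lower-bound mean WTP from dichotomous-choice data, with monotonicity enforced by a simple pooling step. The bid levels and acceptance counts are invented for illustration; only the total of 1013 respondents echoes the study.

```python
# Turnbull-style lower-bound mean WTP from hypothetical dichotomous-choice data.
import numpy as np

bids = np.array([5_000, 10_000, 20_000, 30_000, 50_000])  # MYR/QALY (made up)
n_asked = np.array([200, 200, 200, 200, 213])
n_yes = np.array([170, 140, 95, 50, 20])

p_yes = n_yes / n_asked
# Enforce a monotone non-increasing acceptance curve (simplified pooling).
for i in range(1, len(p_yes)):
    p_yes[i] = min(p_yes[i], p_yes[i - 1])

# Lower bound: each interval's probability mass valued at its lower endpoint.
survival = np.concatenate(([1.0], p_yes))            # P(WTP >= bid)
mass = -np.diff(np.concatenate((survival, [0.0])))   # mass per interval
points = np.concatenate(([0.0], bids))               # lower endpoints
print(f"Turnbull lower-bound mean WTP ~ MYR {np.sum(points * mass):,.0f}")
```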
Development of a precipitation-area curve for warning criteria of short-duration flash flood
NASA Astrophysics Data System (ADS)
Bae, Deg-Hyo; Lee, Moon-Hwan; Moon, Sung-Keun
2018-01-01
This paper presents quantitative criteria for flash flood warning that can be used to rapidly assess flash flood occurrence based only on rainfall estimates. The study was conducted for 200 small mountainous sub-catchments of the Han River basin in South Korea, which has recently suffered many flash flood events. The quantitative criteria are calculated based on flash flood guidance (FFG), which is defined as the depth of rainfall of a given duration required to cause frequent flooding (1-2-year return period) at the outlet of a small stream basin, and is estimated using threshold runoff (TR) and antecedent soil moisture conditions in all sub-basins. The soil moisture conditions were estimated during the flooding season, i.e., July, August and September, over the years 2002-2009 using the Sejong University Rainfall Runoff (SURR) model. A ROC (receiver operating characteristic) analysis was used to obtain optimum rainfall values, and a generalized precipitation-area (P-A) curve was developed for flash flood warning thresholds. The threshold function was derived as a P-A curve because a short-duration precipitation threshold is more closely related to basin area than to any other variable. In brief, the P-A curve suggests generalized flash flood warning thresholds of 42, 32 and 20 mm h⁻¹ for sub-basins with areas of 22-40, 40-100 and >100 km², respectively. The proposed P-A curve was validated against observed flash flood events in different sub-basins; flash flood occurrences were captured for 9 out of 12 events. This result can be used instead of FFG to identify brief flash floods (less than 1 h), and it can provide warning information to decision-makers or citizens that is relatively simple, clear and immediate.
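Because the generalized thresholds are stated explicitly (42, 32 and 20 mm h⁻¹ by basin-area class), the P-A warning rule reduces to a lookup, sketched below; the behaviour below 22 km² is an assumption, since the curve was derived for larger basins.

```python
# Lookup of the paper's generalized P-A warning thresholds by basin area.
def flash_flood_threshold_mm_per_h(area_km2: float) -> float:
    if area_km2 < 22:
        raise ValueError("P-A curve was derived for basins of 22 km2 and larger")
    if area_km2 <= 40:
        return 42.0
    if area_km2 <= 100:
        return 32.0
    return 20.0

print(flash_flood_threshold_mm_per_h(65))   # -> 32.0
```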
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array, after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(x - α) or ln(α - x) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
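One plausible reading of the adjusted-gap statistic is sketched below, under the assumption that "adjusted for the expected frequency" means weighting each gap by the fitted normal density at its midpoint. The critical values would come from the paper's tables, so the code only reports the largest adjusted gap and its midpoint as the candidate threshold.

```python
# Hedged sketch of an adjusted-gap threshold search on synthetic data.
import numpy as np
from scipy.stats import norm

def adjusted_gap_threshold(values):
    z = np.sort((values - values.mean()) / values.std(ddof=1))
    gaps = np.diff(z)
    mids = 0.5 * (z[:-1] + z[1:])
    adjusted = gaps * norm.pdf(mids)   # weight each gap by expected frequency
    k = np.argmax(adjusted)
    return mids[k], adjusted[k]        # candidate threshold (standardized), score

rng = np.random.default_rng(3)
data = np.log(np.concatenate([rng.lognormal(1.0, 0.3, 200),   # background
                              rng.lognormal(2.5, 0.2, 10)]))  # anomalous tail
print(adjusted_gap_threshold(data))
```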
I feel good! Gender differences and reporting heterogeneity in self-assessed health.
Schneider, Udo; Pfarr, Christian; Schneider, Brit S; Ulrich, Volker
2012-06-01
For empirical analysis and policy-oriented recommendations, the precise measurement of individual health or well-being is essential. The difficulty is that the answer may depend on individual reporting behaviour. Moreover, if an individual's health perception varies with certain attitudes of the respondent, reporting heterogeneity may lead to index or cut-point shifts of the health distribution, causing estimation problems. An index shift is a parallel shift in the thresholds of the underlying distribution of health categories. In contrast, a cut-point shift means that the relative position of the thresholds changes, implying different response behaviour. Our paper aims to detect how socioeconomic determinants and health experiences influence the individual valuation of health. We analyse the reporting behaviour of individuals on their self-assessed health status, a five-point categorical variable. Using German panel data, we control for observed heterogeneity in the categorical health variable as well as unobserved individual heterogeneity in the panel estimation. In the empirical analysis, we find strong evidence for cut-point shifts. Our estimation results show different impacts of socioeconomic and health-related variables on the five categories of self-assessed health. Moreover, the answering behaviour varies between female and male respondents, pointing to gender-specific perception and assessment of health. Hence, in case of reporting heterogeneity, using self-assessed measures in empirical studies may be misleading and the information needs to be handled with care.
Location Estimation of Urban Images Based on Geographical Neighborhoods
NASA Astrophysics Data System (ADS)
Huang, Jie; Lo, Sio-Long
2018-04-01
Estimating the location of an image is a challenging computer vision problem, and the recent decade has witnessed increasing research efforts towards the solution of this problem. In this paper, we propose a new approach to the location estimation of images taken in urban environments. Experiments are conducted to quantitatively compare the estimation accuracy of our approach, against three representative approaches in the existing literature, using a recently published dataset of over 150 thousand Google Street View images and 259 user uploaded images as queries. According to the experimental results, our approach outperforms three baseline approaches and shows its robustness across different distance thresholds.
Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F
2017-04-01
A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding values for hoof lesion resistance corresponded to sires with a higher prevalence of healthy daughters. The rank correlations between estimated breeding values ranged from 0.96 to 0.99 when predicted using either one of the 2 cohorts and from 0.94 to 0.99 when predicted using either a threshold or a linear model. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
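The threshold-versus-linear contrast above rests on the standard conversion between observed-scale and liability-scale heritability (the Dempster-Lerner transformation). A minimal sketch, with a hypothetical prevalence rather than the study's lesion frequencies:

```python
# Dempster-Lerner conversion: h2(liability) = h2(observed) * p(1-p) / z^2,
# where z is the normal density at the liability threshold for prevalence p.
from math import exp, pi, sqrt
from statistics import NormalDist

def observed_to_liability(h2_observed: float, prevalence: float) -> float:
    t = NormalDist().inv_cdf(1 - prevalence)   # liability-scale threshold
    z = exp(-t * t / 2) / sqrt(2 * pi)         # normal density at threshold
    return h2_observed * prevalence * (1 - prevalence) / z**2

# Hypothetical: a lesion with 20% prevalence and observed-scale h2 of 0.05.
print(f"{observed_to_liability(0.05, 0.20):.3f}")   # ~0.10 on liability scale
```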
Local health care expenditure plans and their opportunity costs.
Karlsberg Schaffer, Sarah; Sussex, Jon; Devlin, Nancy; Walker, Andrew
2015-09-01
In the UK, approval decisions by Health Technology Assessment bodies are made using a cost per quality-adjusted life year (QALY) threshold, the value of which is based on little empirical evidence. We test the feasibility of estimating the "true" value of the threshold in NHS Scotland using information on marginal services (those planned to receive significant (dis)investment). We also explore how the NHS makes spending decisions and the role of cost per QALY evidence in this process. We identify marginal services using NHS Board-level responses to the 2012/13 Budget Scrutiny issued by the Scottish Government, supplemented with information on prioritisation processes derived from interviews with Finance Directors. We search the literature for cost-effectiveness evidence relating to marginal services. The cost-effectiveness estimates of marginal services vary hugely and thus it was not possible to obtain a reliable estimate of the threshold. This is unsurprising given the finding that cost-effectiveness evidence is rarely used to justify expenditure plans, which are driven by a range of other factors. Our results highlight the differences in objectives between HTA bodies and local health service decision makers. We also demonstrate that, even if it were desirable, the use of cost-effectiveness evidence at local level would be highly challenging without extensive investment in health economics resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
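A hedged sketch of the continuous, trial-dependent idea: let the threshold of a 2AFC psychometric function decay exponentially over trials and fit all trials jointly by maximum likelihood. The learning-curve form, parameter values, and data are illustrative assumptions, not the authors' exact model.

```python
# Joint ML fit of a trial-dependent psychometric function to simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_trials = 800
trial = np.arange(n_trials)
stimulus = rng.uniform(0.1, 2.0, n_trials)

def p_correct(params, trial, stimulus):
    a0, a_inf, tau, slope = params
    threshold = a_inf + (a0 - a_inf) * np.exp(-trial / tau)  # learning curve
    return 0.5 + 0.5 / (1 + np.exp(-(stimulus - threshold) * slope))  # 2AFC

true = (1.5, 0.5, 200.0, 6.0)
resp = rng.random(n_trials) < p_correct(true, trial, stimulus)

def negloglik(params):
    p = np.clip(p_correct(params, trial, stimulus), 1e-6, 1 - 1e-6)
    return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

fit = minimize(negloglik, x0=(1.0, 0.8, 100.0, 3.0), method="Nelder-Mead")
print(fit.x)   # recovered (a0, a_inf, tau, slope)
```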
Levine, Michael; Stellpflug, Sam; Pizon, Anthony F; Traub, Stephen; Vohra, Rais; Wiegand, Timothy; Traub, Nicole; Tashman, David; Desai, Shoma; Chang, Jamie; Nathwani, Dhruv; Thomas, Stephen
2017-07-01
Acetaminophen toxicity is common in clinical practice. In recent years, several European countries have lowered the treatment threshold, which has resulted in an increased number of patients being treated with questionable clinical benefit. The primary objective of this study is to estimate the cost and associated burden to the United States (U.S.) healthcare system if such a change were adopted in the U.S. This study is a retrospective review of all patients aged 14 years or older who were admitted to one of eight hospitals located throughout the U.S. with acetaminophen exposure during a five-and-a-half-year span, from 1 January 2008 to 30 June 2013. Patients who would be treated under the revised nomogram, but not the current nomogram, were included. The cost of such treatment was extrapolated to the national level. 139 subjects were identified who would be treated under the revised nomogram but not the current nomogram. Extrapolating these numbers nationally, an additional 4507 (95% CI 3641-8751) Americans would be treated annually for acetaminophen toxicity. The cost of lowering the treatment threshold is estimated to be $45 million (95% CI $36,400,000-$87,500,000) annually. Adopting the revised treatment threshold in the U.S. would result in a significant cost, yet provide an unclear clinical benefit.
Sri Lankan FRAX model and country-specific intervention thresholds.
Lekamwasam, Sarath
2013-01-01
There is wide variation in the fracture probabilities estimated by Asian FRAX models, although the outputs of the South Asian models are concordant. Clinicians can choose either fixed or age-specific intervention thresholds when making treatment decisions in postmenopausal women; the cost-effectiveness of such an approach, however, needs to be addressed. This study examined suitable fracture probability intervention thresholds (ITs) for Sri Lanka, based on the Sri Lankan FRAX model. Fracture probabilities were estimated and compared using all Asian FRAX models for a postmenopausal woman with a BMI of 25 kg/m² and no clinical risk factors apart from a fragility fracture. Age-specific ITs were estimated based on the Sri Lankan FRAX model using the method followed by the National Osteoporosis Guideline Group in the UK. Using the age-specific ITs as the reference standard, suitable fixed ITs were also estimated. Fracture probabilities estimated by the different Asian FRAX models varied widely. The Japanese and Taiwanese models gave higher fracture probabilities, while the Chinese, Philippine, and Indonesian models gave lower ones. Outputs of the remaining FRAX models were generally similar. Age-specific ITs of major osteoporotic fracture probabilities (MOFP) based on the Sri Lankan FRAX model varied from 2.6 to 18% between 50 and 90 years. ITs of hip fracture probabilities (HFP) varied from 0.4 to 6.5% between 50 and 90 years. In finding fixed ITs, a MOFP of 11% and an HFP of 3.5% gave the lowest misclassification and highest agreement. The Sri Lankan FRAX model behaves similarly to other Asian FRAX models such as the Indian, Singapore-Indian, Thai, and South Korean models. Clinicians may use either the fixed or the age-specific ITs in making therapeutic decisions in postmenopausal women. The economic aspects of such decisions, however, need to be considered.
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier, which estimates the most likely class of a given example. The second component consists of open set logic, which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion: a candidate label is first estimated by the base classifier, and the true membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
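The feed-forward structure described above is easy to state in code: a base classifier proposes a label, and per-class posterior thresholds either accept it or declare the example unknown. The thresholds below are fixed by hand; the cited algorithm tunes them iteratively on held-out data.

```python
# Sketch of feed-forward open-set rejection on class posterior probabilities.
import numpy as np

def open_set_predict(posteriors: np.ndarray, thresholds: np.ndarray):
    """posteriors: (n_samples, n_classes); returns class index or -1 (unknown)."""
    candidate = posteriors.argmax(axis=1)            # base-classifier label
    accept = posteriors.max(axis=1) >= thresholds[candidate]
    return np.where(accept, candidate, -1)

posteriors = np.array([[0.92, 0.05, 0.03],
                       [0.40, 0.35, 0.25]])
thresholds = np.array([0.8, 0.8, 0.8])               # hand-picked, per class
print(open_set_predict(posteriors, thresholds))      # -> [ 0 -1]
```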
Guo, Xiasheng; Li, Qian; Zhang, Zhe; Zhang, Dong; Tu, Juan
2013-08-01
The inertial cavitation (IC) activity of ultrasound contrast agents (UCAs) plays an important role in the development and improvement of ultrasound diagnostic and therapeutic applications. However, various diagnostic and therapeutic applications have different requirements for IC characteristics. Here, through IC dose quantifications based on passive cavitation detection, IC thresholds were measured for two commercialized UCAs, albumin-shelled KangRun® and lipid-shelled SonoVue® microbubbles, at varied UCA volume concentrations (viz., 0.125 and 0.25 vol.%) and acoustic pulse lengths (viz., 5, 10, 20, 50, and 100 cycles). Shell elastic and viscous coefficients of the UCAs were estimated by fitting measured acoustic attenuation spectra with Sarkar's model. The influences of sonication conditions (viz., acoustic pulse length) and UCA shell properties on the IC threshold were discussed based on numerical simulations. Both experimental measurements and numerical simulations indicate that the IC thresholds of UCAs decrease with increasing UCA volume concentration and acoustic pulse length. The shell interfacial tension and dilatational viscosity estimated for SonoVue (0.7 ± 0.11 N/m, 6.5 ± 1.01 × 10⁻⁸ kg/s) are smaller than those of KangRun (1.05 ± 0.18 N/m, 1.66 ± 0.38 × 10⁻⁷ kg/s); this might result in the lower IC threshold for SonoVue. The current results will be helpful for selecting and utilizing commercialized UCAs for specific clinical applications, while minimizing undesired IC-induced bioeffects.
The impact of climate change on ozone-related mortality in Sydney.
Physick, William; Cope, Martin; Lee, Sunhee
2014-01-13
Coupled global, regional and chemical transport models are now being used with relative-risk functions to determine the impact of climate change on human health. Studies have been carried out for global and regional scales, and in our paper we examine the impact of climate change on ozone-related mortality at the local scale across an urban metropolis (Sydney, Australia). Using three coupled models, with a grid spacing of 3 km for the chemical transport model (CTM), and a mortality relative risk function of 1.0006 per 1 ppb increase in daily maximum 1-hour ozone concentration, we evaluated the change in ozone concentrations and mortality between decades 1996-2005 and 2051-2060. The global model was run with the A2 emissions scenario. As there is currently uncertainty regarding a threshold concentration below which ozone does not impact on mortality, we calculated mortality estimates for the three daily maximum 1-hr ozone concentration thresholds of 0, 25 and 40 ppb. The mortality increase for 2051-2060 ranges from 2.3% for a 0 ppb threshold to 27.3% for a 40 ppb threshold, although the numerical increases differ little. Our modeling approach is able to identify the variation in ozone-related mortality changes at a suburban scale, estimating that climate change could lead to an additional 55 to 65 deaths across Sydney in the decade 2051-2060. Interestingly, the largest increases do not correspond spatially to the largest ozone increases or the densest population centres. The distribution pattern of changes does not seem to vary with threshold value, while the magnitude only varies slightly.
Point estimation following two-stage adaptive threshold enrichment clinical trials.
Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel
2018-05-31
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
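A minimal sketch of objective threshold selection in this spirit: scan candidate intensities and keep the one maximizing a skill score that penalizes both false positives and false negatives (here the true skill statistic, one of several reasonable choices). The storm data are synthetic.

```python
# Objective intensity-threshold selection by maximizing a skill score.
import numpy as np

rng = np.random.default_rng(5)
intensity = rng.gamma(2.0, 10.0, 300)   # synthetic peak 60-min intensity, mm/h
debris_flow = rng.random(300) < 1 / (1 + np.exp(-(intensity - 25) / 4))

best = None
for thr in np.linspace(5, 60, 111):
    pred = intensity >= thr
    tp = np.sum(pred & debris_flow)
    fn = np.sum(~pred & debris_flow)
    fp = np.sum(pred & ~debris_flow)
    tn = np.sum(~pred & ~debris_flow)
    tss = tp / (tp + fn) - fp / (fp + tn)    # hit rate minus false-alarm rate
    if best is None or tss > best[1]:
        best = (thr, tss)
print(f"objective threshold ~ {best[0]:.1f} mm/h (TSS = {best[1]:.2f})")
```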
Perceptual thresholds for non-ideal diffuse field reverberation.
Romblom, David; Guastavino, Catherine; Depalle, Philippe
2016-11-01
The objective of this study is to understand listeners' sensitivity to directional variations in non-ideal diffuse field reverberation. An ABX discrimination test was conducted using a semi-spherical 28-loudspeaker array; perceptual thresholds were estimated by systematically varying the level of a segment of loudspeakers for lateral, height, and frontal conditions. The overall energy was held constant using a gain compensation scheme. When compared to an ideal diffuse field, the perceptual threshold for detection is -2.5 dB for the lateral condition, -6.8 dB for the height condition, and -3.2 dB for the frontal condition. Measurements of the experimental stimuli were analyzed using a Head and Torso Simulator as well as with opposing cardioid microphones aligned on the three Cartesian axes. Additionally, opposing cardioid measurements made in an acoustic space demonstrate that level differences corresponding to the perceptual thresholds can be found in practice. These results suggest that non-ideal diffuse field reverberation may be a previously unrecognized component of spatial impression.
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
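The triggering rule is compact enough to sketch directly: compute the DRMS from the position block of the filter covariance and request a measurement when it exceeds a threshold that grows with distance to the reference location. Matrix values and gains below are hypothetical.

```python
# Sketch of the covariance-based event trigger (hypothetical numbers).
import numpy as np

def drms(P: np.ndarray) -> float:
    """Distance RMS error from the 2D position block of covariance P."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def need_measurement(P, dist_to_reference, base_threshold=0.5, gain=0.05):
    threshold = base_threshold + gain * dist_to_reference  # relax far from goal
    return drms(P) > threshold

P = np.diag([0.20, 0.15, 0.01, 0.01])   # state covariance (x, y, vx, vy)
print(need_measurement(P, dist_to_reference=2.0))   # sqrt(0.35)~0.59 < 0.6 -> False
```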
NASA Technical Reports Server (NTRS)
Meier, M. J.; Evans, W. E.
1975-01-01
Snow-covered areas on LANDSAT (ERTS) images of the Santiam River basin, Oregon, and other basins in Washington were measured using several operators and methods. Seven methods were used: (1) Snowline tracing followed by measurement with planimeter, (2) mean snowline altitudes determined from many locations, (3) estimates in 2.5 x 2.5 km boxes of snow-covered area with reference to snow-free images, (4) single radiance-threshold level for entire basin, (5) radiance-threshold setting locally edited by reference to altitude contours and other images, (6) two-band color-sensitive extraction locally edited as in (5), and (7) digital (spectral) pattern recognition techniques. The seven methods are compared in regard to speed of measurement, precision, the ability to recognize snow in deep shadow or in trees, relative cost, and whether useful supplemental data are produced.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimizing extraction, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical-CO2-assisted extraction, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in dotty plots; deal with an unlimited number of independent and response variables; consider multiple combined threshold criteria, which are subjective and depend on the target of the investigation for the response variables; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
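A hedged sketch of the GLUE recipe as summarized above: Latin hypercube samples over assumed feasible ranges, a stand-in yield model in place of a fitted response surface, and retention of the parameter sets meeting a threshold criterion.

```python
# GLUE-style sketch: LHS sampling + threshold retention (stand-in yield model).
import numpy as np

rng = np.random.default_rng(6)

def latin_hypercube(n, bounds):
    dims = len(bounds)
    strata = np.tile(np.arange(n), (dims, 1))   # one stratum per sample per dim
    u = (rng.permuted(strata, axis=1).T + rng.random((n, dims))) / n
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# Assumed feasible ranges: temperature 20-60 C, extraction time 10-50 min.
samples = latin_hypercube(2000, [(20.0, 60.0), (10.0, 50.0)])

def simulated_yield(temp, time):                # made-up response model
    return 80 - 0.02 * (temp - 45) ** 2 - 0.03 * (time - 30) ** 2

y = simulated_yield(samples[:, 0], samples[:, 1])
behavioural = samples[y >= 75]                  # threshold criterion on response
print(f"{len(behavioural)}/2000 parameter sets meet the yield threshold")
print(f"retained temp range: {behavioural[:, 0].min():.1f}-{behavioural[:, 0].max():.1f} C")
```

The dotty plots mentioned in the abstract would be scatter plots of each retained parameter against the response, showing where the threshold carves out behavioural regions.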
On thermal corrections to near-threshold annihilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seyong; Laine, M.
2017-01-01
We consider non-relativistic "dark" particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated annihilating TeV-scale particles. Weakly bound states are shown to 'melt' below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the annihilation rate by a factor of 4 to 80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.
NASA Astrophysics Data System (ADS)
Meneghini, Robert
1998-09-01
A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
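The truncated-fit step can be sketched as follows: histogram only the rain rates surviving the low- and high-end cuts, fit a scaled lognormal density by least squares, and recover the area-average rain rate from the fitted parameters. Truncation limits and data are synthetic.

```python
# Least-squares lognormal fit to a doubly truncated rain-rate distribution.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
rain = rng.lognormal(1.0, 0.8, 5000)                 # "true" rain rates, mm/h
observed = rain[(rain > 0.5) & (rain < 20.0)]        # low-SNR and attenuation cuts

def lognorm_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
           / (x * sigma * np.sqrt(2 * np.pi))

hist, edges = np.histogram(observed, bins=40, range=(0.5, 20.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# density=True normalizes over the truncated range, so fit an extra scale c.
fit_fun = lambda x, mu, sigma, c: c * lognorm_pdf(x, mu, sigma)
(mu, sigma, c), _ = curve_fit(fit_fun, centers, hist, p0=(0.5, 1.0, 1.0))

area_avg = np.exp(mu + sigma ** 2 / 2)               # mean of fitted lognormal
print(f"fitted mu={mu:.2f}, sigma={sigma:.2f}, area-average ~ {area_avg:.2f} mm/h")
```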
NASA Astrophysics Data System (ADS)
Shirazi, M. R.; Mohamed Taib, J.; De La Rue, R. M.; Harun, S. W.; Ahmad, H.
2015-03-01
Dynamic characteristics of a multi-wavelength Brillouin-Raman fiber laser (MBRFL) assisted by four-wave mixing have been investigated through the development of Stokes and anti-Stokes lines under different combinations of Brillouin and Raman pump power levels and different Raman pumping schemes in a ring cavity. For Stokes lines of order higher than three, the threshold power was less than the saturation power of the preceding-order Stokes line. By increasing the Brillouin pump power, the nth-order anti-Stokes and the (n+4)th-order Stokes power levels unexpectedly increased by almost the same amount below the Stokes line threshold power. It was also found that the SBS threshold reduction (SBSTR) depended linearly on the gain factor for the 1st and 2nd Stokes lines, taken as the first set. For the 3rd and 4th Stokes lines, taken as the second set, this relation was almost linear with the same slope below an SBSTR of -6 dB, and then approached the linear relation of the first set as the gain factor was increased to 50 dB. Therefore, the threshold power levels of Stokes lines for a given Raman gain can be readily estimated simply from the threshold power levels measured without Raman amplification.
An Explosion Aftershock Model with Application to On-Site Inspection
NASA Astrophysics Data System (ADS)
Ford, Sean R.; Labak, Peter
2016-01-01
An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using an aftershock rate model. The model is developed with data from the Nevada National Security Site, formerly known as the Nevada Test Site, and the Semipalatinsk Test Site, which we take to represent soft-rock and hard-rock testing environments, respectively. Estimates of expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help inform the Seismic Aftershock Monitoring System (SAMS) deployment in a potential Comprehensive Test Ban Treaty On-Site Inspection (OSI), by giving the OSI team a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. We apply the hard-rock model to a M5 event and combine it with the very sensitive detection threshold for OSI sensors to show that tens of events per day are expected up to a month after an explosion measured several kilometers away.
Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu
2016-08-18
Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m², respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area.
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
Liu, Han; Wang, Lie; Zhao, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
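The two ingredients named above, entrywise soft-thresholding and an explicit eigenvalue constraint, can be sketched in a one-shot form (the paper alternates them within an ADMM loop):

```python
# Sketch: soft-threshold a sample covariance, then floor its eigenvalues so the
# estimate stays positive definite. One-shot, not the paper's full ADMM.
import numpy as np

def soft_threshold_pd(S: np.ndarray, lam: float, eps: float = 1e-3) -> np.ndarray:
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)   # entrywise soft-threshold
    np.fill_diagonal(T, np.diag(S))                     # leave variances intact
    w, V = np.linalg.eigh((T + T.T) / 2)
    return (V * np.maximum(w, eps)) @ V.T               # eigenvalue floor at eps

rng = np.random.default_rng(8)
X = rng.normal(size=(50, 20))                           # n=50 samples, p=20
Sigma_hat = soft_threshold_pd(np.cov(X, rowvar=False), lam=0.1)
print(np.linalg.eigvalsh(Sigma_hat).min() >= 1e-3)      # True: positive definite
```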
Binaural Release from Masking for a Speech Sound in Infants, Preschoolers, and Adults.
ERIC Educational Resources Information Center
Nozza, Robert J.
1988-01-01
Binaural masked thresholds for a speech sound (/ba/) were estimated under two interaural phase conditions in three age groups (infants, preschoolers, adults). Differences as a function of both age and condition and effects of reducing intensity for adults were significant in indicating possible developmental binaural hearing changes, especially…
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. Motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes is recently adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus, lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on the motor thresholds, which were stochastically estimated by motor evoked potentials and chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed reliability of the algorithm. Main results. Computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled to estimate motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
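A hedged sketch of maximum-likelihood threshold hunting in the spirit described above, assuming a fixed-slope logistic response curve: after each stimulus, refit the curve to all responses so far and place the next stimulus at the current maximum-likelihood threshold. The slope, grid, and trial count are illustrative, not the modified algorithm's actual settings.

```python
# ML threshold-hunting sketch with a fixed-slope logistic response model.
import numpy as np

rng = np.random.default_rng(9)
true_threshold, slope = 1.2, 8.0

def p_response(intensity, threshold):
    return 1 / (1 + np.exp(-slope * (intensity - threshold)))

candidates = np.linspace(0.5, 2.0, 151)   # threshold grid for the ML fit
intensity, stims, responses = 1.0, [], []
for _ in range(30):
    r = rng.random() < p_response(intensity, true_threshold)  # simulated MEP
    stims.append(intensity); responses.append(r)
    p = p_response(np.asarray(stims)[:, None],
                   candidates[None, :]).clip(1e-9, 1 - 1e-9)
    ll = np.where(np.asarray(responses)[:, None],
                  np.log(p), np.log(1 - p)).sum(axis=0)
    intensity = candidates[ll.argmax()]   # next stimulus at ML threshold

print(f"estimated threshold ~ {intensity:.2f} (true {true_threshold})")
```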
2015-01-01
Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. We describe implications on habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and suggest scientists reevaluate assumptions of these models used to guide habitat conservation. PMID:25790255
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately with both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56), and from the latter (linear animal and sire) models on the original (observed) scale from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability scale were transformed to the observed (0, 1) scale, they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The genetic correlation estimates of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors. Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes, given the antagonistic genetic correlations of jaw deformity with body and carcass performance. © 2015 John Wiley & Sons Ltd.
Sugar reduction in fruit nectars: Impact on consumers' sensory and hedonic perception.
Oliveira, Denize; Galhardo, Juliana; Ares, Gastón; Cunha, Luís M; Deliza, Rosires
2018-05-01
Sugar-sweetened beverages are one of the main sources of added sugar in the diet. Therefore, sugar reduction in these products could contribute to the prevention of several negative health conditions, such as obesity, diabetes and cardiovascular diseases. In this context, the present work aimed to study consumers' sensory and hedonic perception of sugar reduction in fruit nectars. Five sequential difference thresholds for added sugar in three fruit nectars (passion fruit, orange/passion fruit and orange/pomegranate) were determined based on consumer perception. In each test, difference thresholds were estimated using survival analysis based on the responses of 50 consumers to six paired-comparison tests. Each pair was composed of a control nectar and a sample with added sugar reduced from the control. Consumers were asked to try each of the samples in each pair and to indicate which was sweeter. Then, consumers' sensory and hedonic perception of the nectar samples was evaluated for each nectar using a 9-point hedonic scale and a check-all-that-apply question. Difference thresholds ranged from 4.20% to 8.14% of the added sugar concentration of the nectars. No significant differences in overall liking were detected for fruit nectars with 20% sugar reduction. However, large heterogeneity in consumers' hedonic reaction to sugar reduction was found, which should be taken into account in the design of sugar reduction programs. Consumers' hedonic reaction to sugar reduction was product dependent. Results from the present work reinforce the idea that gradual sugar reduction in sugar-sweetened beverages is a feasible strategy that could contribute to reducing the sugar intake of the population. Copyright © 2018 Elsevier Ltd. All rights reserved.
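To make the survival-analysis idea above concrete: each consumer's difference threshold is interval-censored between the last sugar-reduction level they failed to detect and the first level they detected. Below is a minimal numpy sketch in which averaging interval midpoints stands in for the parametric survival fit of the published method; the levels, responses, and function name are hypothetical.

```python
import numpy as np

def difference_threshold(levels, first_detected):
    """Rough stand-in for the survival-analysis estimate: each consumer's
    threshold lies between the last reduction level they failed to detect
    and the first they detected; the group threshold is taken here as the
    mean of those interval midpoints (the paper fits a survival model)."""
    levels = np.asarray(levels, dtype=float)
    mids = []
    for idx in first_detected:
        lo = levels[idx - 1] if idx > 0 else 0.0   # censored below first level
        mids.append(0.5 * (lo + levels[idx]))
    return float(np.mean(mids))

# six reduction levels (% of added sugar removed) and, per consumer, the
# index of the first level whose pair they correctly judged as less sweet
levels = [2, 4, 6, 8, 10, 12]
first_detected = [2, 1, 3, 2, 4, 1, 2, 3]
print(difference_threshold(levels, first_detected))   # -> 5.5
```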
Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data
NASA Astrophysics Data System (ADS)
Bouchami, J.; Dallaire, F.; Gutiérrez, A.; Idarraga, J.; Král, V.; Leroy, C.; Picard, S.; Pospíšil, S.; Scallon, O.; Solc, J.; Suk, M.; Turecek, D.; Vykydal, Z.; Žemlièka, J.
2011-01-01
The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. The detectors can operate in low or high preset energy threshold mode. The signatures of particles interacting in an ATLAS-MPX detector at low threshold are clusters of adjacent pixels whose size and shape depend on particle type, energy and incidence angle. Particles can be classified into different categories using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices was determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was obtained.
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-10-01
A threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the Meteosat SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures that cannot be detected by the low-resolution channels. We find it advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding, and the suitability of the HRV channel for cloud detection, are investigated. Case studies covering a range of surface and cloud conditions illustrate the behavior of the mask. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels in our test data set are found to contain broken clouds, depending on the region considered. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction. The HRV cloud mask targets small-scale convective sub-pixel clouds that are missed by the EUMETSAT cloud mask. The major limitation of the HRV cloud mask is the minimum cloud optical thickness (COT) that can be detected. This threshold COT was found to be about 0.8 over ocean and 2 over land, and depends strongly on the albedo of the underlying surface.
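A minimal sketch of the composite-relative threshold test described above, assuming one illustrative reflectance offset; the operational mask adapts such thresholds regionally and derives them from the low-resolution EUMETSAT mask, and the function name and numbers here are hypothetical.

```python
import numpy as np

def hrv_cloud_mask(reflectance, clearsky_composite, offset=0.05):
    """Flag a pixel cloudy when its HRV reflectance exceeds the clear-sky
    composite by a tunable offset (a single constant in this toy version)."""
    return reflectance > (clearsky_composite + offset)

# toy 3 x 3 scene over a composite of 0.10 everywhere
scene = np.array([[0.12, 0.30, 0.11],
                  [0.45, 0.10, 0.09],
                  [0.14, 0.16, 0.50]])
composite = np.full_like(scene, 0.10)
print(hrv_cloud_mask(scene, composite))
```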
ERIC Educational Resources Information Center
Zaitoun, Maha; Cumming, Steven; Purcell, Alison; O'Brien, Katie
2017-01-01
Purpose: This study assesses the impact of patient clinical history on audiologists' performance when interpreting auditory brainstem response (ABR) results. Method: Fourteen audiologists' accuracy in estimating hearing threshold for 16 infants through interpretation of ABR traces was compared on 2 occasions at least 5 months apart. On the 1st…
The risk of water scarcity at different levels of global warming
NASA Astrophysics Data System (ADS)
Schewe, Jacob; Sharpe, Simon
2015-04-01
Water scarcity is a threat to human well-being and economic development in many countries today. Future climate change is expected to exacerbate the global water crisis by reducing renewable freshwater resources in different world regions, many of which are already dry. Studies of future water scarcity often focus on most-likely, or highest-confidence, scenarios. However, multi-model projections of water resources reveal large uncertainty ranges, which arise from different types of processes (climate, hydrology, human) and are therefore not easy to reduce. Thus, central estimates or multi-model mean results may be insufficient to inform policy and management. Here we present an alternative, risk-based approach. We use an ensemble of multiple global climate and hydrological models to quantify the likelihood of crossing a given water scarcity threshold under different levels of global warming. This approach allows the risk associated with any particular, pre-defined threshold (or magnitude of change that must be avoided) to be assessed, regardless of whether it lies in the center or in the tails of the uncertainty distribution. We show applications of this method at the country and river-basin scales, illustrate the effects of societal processes on the resulting risk estimates, and discuss the further potential of this approach for research and stakeholder dialogue.
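The core of the risk-based approach is simple counting: the risk attached to a threshold is the fraction of climate-hydrology ensemble members that cross it at a given warming level. A sketch with a hypothetical -20% discharge-change threshold and made-up member values.

```python
import numpy as np

def crossing_likelihood(ensemble_changes, threshold=-0.2):
    """Fraction of ensemble members whose relative discharge change
    falls below the pre-defined scarcity threshold (-20% here)."""
    return float(np.mean(np.asarray(ensemble_changes) < threshold))

# toy ensemble: relative discharge change per member at one warming level
members = [-0.35, -0.10, -0.28, 0.05, -0.22, -0.18, -0.40]
print(f"risk of crossing threshold: {crossing_likelihood(members):.2f}")
```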
NASA Astrophysics Data System (ADS)
Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.
2012-03-01
Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was twofold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects bleaching predictions and (2) to examine the effect of hypothetical rates of coral adaptation to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates, as well as potential adaptation models, in future coral bleaching projections.
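A minimal sketch of the Degree-Heating-Month accumulation under the two threshold choices; the SST values are toy data and the 2σ figure shown is illustrative rather than derived from an actual climatology.

```python
import numpy as np

def degree_heating_months(sst_monthly, clim_max, threshold):
    """Accumulate monthly SST excess above (climatological maximum +
    threshold); `threshold` is either the fixed 1 deg C value or a
    variability-based value such as 2*sigma of annual SST maxima."""
    excess = np.asarray(sst_monthly) - (clim_max + threshold)
    return float(np.sum(np.clip(excess, 0.0, None)))

# toy reef cell: one year of monthly SST, climatological max 29.0 deg C
sst = [27.1, 27.8, 28.5, 29.2, 30.4, 30.9, 30.2, 29.5, 28.8, 28.1, 27.5, 27.0]
print(degree_heating_months(sst, clim_max=29.0, threshold=1.0))  # fixed 1 deg C
print(degree_heating_months(sst, clim_max=29.0, threshold=0.6))  # e.g. 2*sigma
```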
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
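For orientation, here are the hard and soft rules that the paper's hybrid rule generalizes, used inside a plain iterative thresholding loop. This sketch is the standard ISTA/lasso recipe, not the paper's hybrid estimator or its SURE-based hyperparameter selection; swapping a hybrid rule in for `soft_threshold` would yield the hybrid estimator.

```python
import numpy as np

def hard_threshold(x, t):
    """Keep coefficients whose magnitude exceeds t; zero the rest."""
    return x * (np.abs(x) > t)

def soft_threshold(x, t):
    """Shrink magnitudes toward zero by t (the lasso/ISTA rule)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, H, t, n_iter=200):
    """Iterative soft thresholding for y = Hx + noise."""
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = soft_threshold(x + step * H.T @ (y - H @ x), step * t)
    return x

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 4.0]
y = H @ x_true + 0.1 * rng.normal(size=50)
print(np.nonzero(ista(y, H, t=1.0))[0])   # recovered support
```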
NASA Astrophysics Data System (ADS)
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image-processing methods have been developed to define MTV. The proposed PET segmentation strategies have typically been validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy for obtaining anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both under ideal (e.g., in spherical objects with uniform radioactivity concentration) and non-ideal (e.g., in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy for obtaining a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of combining commercially available standard anthropomorphic phantoms with irregular molds generated using 3D-printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy both under ideal and realistic conditions.
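A hedged sketch of the two ingredients named above: a two-cluster k-means on voxel intensities to estimate the background level, followed by a background-corrected relative threshold for the MTV mask. The 42% fraction is a common literature choice, not the parameter calibrated on the NEMA IQ phantom by the cited method.

```python
import numpy as np

def estimate_mtv_mask(pet, k_frac=0.42, n_iter=50):
    """Threshold = background + k_frac * (lesion max - background),
    with the background level taken from a plain k-means (k = 2)."""
    vals = pet.ravel().astype(float)
    c = np.array([vals.min(), vals.max()])           # initial centroids
    for _ in range(n_iter):                          # k-means iterations
        labels = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = vals[labels == j].mean()
    background = c.min()
    threshold = background + k_frac * (vals.max() - background)
    return pet >= threshold

rng = np.random.default_rng(1)
lesion = rng.normal(1.0, 0.2, size=(20, 20))         # warm background
lesion[8:12, 8:12] += 10.0                           # hot lesion
print(estimate_mtv_mask(lesion).sum(), "voxels in MTV")
```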
Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.
Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D
2007-06-21
Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC) algorithm, with that of threshold-based techniques, the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination and activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithm was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.
Shen, Yi
2015-01-01
Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722
Gas composition sensing using carbon nanotube arrays
NASA Technical Reports Server (NTRS)
Li, Jing (Inventor); Meyyappan, Meyya (Inventor)
2008-01-01
A method and system for estimating one, two or more unknown components in a gas. A first array of spaced-apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, …, N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components from an analysis of a curve I(t_n) for current, or a curve e(t_n) for electric charge, transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
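The final identification step described in the abstract amounts to nearest-neighbor matching of estimated breakdown voltages against a table of known candidate values. A sketch with hypothetical gas names and voltages, purely for illustration.

```python
def identify_components(estimated_thresholds, known):
    """Match each estimated pulse-discharge breakdown voltage to the
    candidate gas with the nearest known threshold voltage."""
    matches = []
    for v in estimated_thresholds:
        gas = min(known, key=lambda g: abs(known[g] - v))
        matches.append((v, gas, known[gas]))
    return matches

known = {"NH3": 390.0, "NO2": 420.0, "CH4": 460.0}   # volts, hypothetical
print(identify_components([395.0, 455.0], known))
```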
Ebihara, Satoru; Ebihara, Takae; Kanezaki, Masashi; Gui, Peijun; Yamasaki, Miyako; Arai, Hiroyuki; Kohzuki, Masahiro
2011-06-28
The effect of aging on the cognitive aspects of cough has not yet been studied. The purpose of this study was to investigate the effect of aging on the perception of the urge-to-cough in healthy individuals. Fourteen young, female, healthy never-smokers were recruited via public postings. Twelve elderly, female, healthy never-smokers were recruited from a nursing home residence. The cough reflex threshold and the urge-to-cough were evaluated by inhalation of citric acid. The cough reflex sensitivities were defined as the lowest concentrations of citric acid that elicited two or more coughs (C2) and five or more coughs (C5). The urge-to-cough was evaluated using a modified Borg scale. There was no significant difference in the cough reflex threshold to citric acid between young and elderly subjects. The urge-to-cough scores at the C2 and C5 concentrations were significantly smaller in the elderly than in the young subjects. The urge-to-cough log-log slope in elderly subjects (0.73 ± 0.71 point · L/g) was significantly shallower than that of young subjects (1.35 ± 0.53 point · L/g, p < 0.01). There were no significant differences in the estimated urge-to-cough threshold between young and elderly subjects. Thus, in female never-smokers, the cough reflex threshold did not differ between young and elderly subjects, whereas perception of the urge-to-cough was significantly decreased in elderly subjects. Objective monitoring of cough might be important in elderly people.
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Assessment of Marine Mammal Impact Zones for Use of Military Sonar in the Baltic Sea.
Andersson, Mathias H; Johansson, Torbjörn
2016-01-01
Military sonars are known to have caused cetaceans to strand. Navies operating in shallow seas use different frequencies and sonar pulses, commonly between 25 and 100 kHz, than the more extensively studied NATO sonar systems that have been evaluated for their environmental impact. These frequencies match the frequencies of best hearing of the harbor porpoises and seals resident in the Baltic Sea. This study uses published temporary and permanent threshold shifts, measured behavioral response thresholds, the technical specifications of a sonar system, and environmental parameters affecting sound propagation common to the Baltic Sea to estimate impact zones for harbor porpoises and seals.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting, when the underlying probability distributions of the diseased and non-diseased subjects are unknown. Inference for the threshold estimates is based on approximate analytical standard errors and on bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
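In the normal-distribution setting, the cost-minimising cutoff can be located by a simple grid search over candidate thresholds. A sketch assuming illustrative decision costs and prevalence; the sampling-uncertainty component the paper adds (analytical standard errors, bootstrap) is omitted.

```python
import numpy as np
from scipy.stats import norm

def optimal_threshold(mu0, sd0, mu1, sd1, prev, c_fp=1.0, c_fn=2.0):
    """Cutoff minimising expected misclassification cost for normal
    marker distributions of non-diseased (0) and diseased (1) subjects."""
    grid = np.linspace(mu0 - 4 * sd0, mu1 + 4 * sd1, 2001)
    fp = (1 - prev) * (1 - norm.cdf(grid, mu0, sd0))   # false-positive mass
    fn = prev * norm.cdf(grid, mu1, sd1)               # false-negative mass
    cost = c_fp * fp + c_fn * fn
    return grid[np.argmin(cost)]

print(optimal_threshold(mu0=0.0, sd0=1.0, mu1=2.0, sd1=1.2, prev=0.3))
```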
Comparison of Ventilatory Measures and 20 km Time Trial Performance.
Peveler, Willard W; Shew, Brandy; Johnson, Samantha; Sanders, Gabe; Kollock, Roger
2017-01-01
Performance threshold measures are used to predict cycling performance. Previous research has focused on long time trials (≥40 km), using power at ventilatory threshold and respiratory threshold to estimate time trial performance. As intensity differs greatly during shorter time trials, applying findings from longer time trials may not be appropriate. The use of heart rate measures to determine 20 km time trial performance has yet to be examined. The purpose of this study was to determine the effectiveness of heart rate measures at ventilatory threshold (VE/VO2 plotted and VT determined by software) and respiratory threshold (RER of 0.95, 1.00, and 1.05) in predicting 20 km time trial performance. Eighteen cyclists completed a VO2max protocol and two 20 km time trials. Average heart rates from the 20 km time trials were compared with heart rates from the performance threshold measures (VT plotted, VT software, and RER at 0.95, 1.00, and 1.05) using repeated measures ANOVA. Significance was set a priori at P ≤ 0.05. The only measure not significantly different from time trial performance was HR at an RER of 1.00 (166.61 ± 12.70 bpm vs. 165.89 ± 9.56 bpm, p = .671). VT plotting and VT determined by software were found to underestimate time trial performance by 3% and 8%, respectively. From these findings, it is recommended to use heart rate at an RER of 1.00 to determine 20 km time trial intensity.
Estimation of frequency offset in mobile satellite modems
NASA Technical Reports Server (NTRS)
Cowley, W. G.; Rice, M.; Mclean, A. N.
1993-01-01
In mobile satellite (mobilesat) applications, frequency offset on the received signal must be estimated and removed prior to further modem processing. A straightforward method of estimating the carrier frequency offset is to raise the received MPSK signal to the M-th power and then estimate the location of the peak spectral component. An analysis of the lower signal-to-noise threshold of this method is carried out for BPSK signals. Predicted thresholds are compared to simulation results. It is shown how the method can be extended to π/M MPSK signals. A real-time implementation of frequency offset estimation for the Australian mobile satellite system is described.
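A minimal sketch of the M-th power method for BPSK (M = 2): squaring strips the data modulation and leaves a spectral line at twice the carrier offset, so the FFT peak location divided by M recovers the offset. Sample rate, offset, and lengths are arbitrary toy values.

```python
import numpy as np

def estimate_offset_bpsk(rx, fs):
    """M-th power estimator for BPSK: FFT peak of rx^2 sits at 2*offset."""
    spectrum = np.abs(np.fft.fft(rx ** 2))
    freqs = np.fft.fftfreq(len(rx), d=1.0 / fs)
    return freqs[np.argmax(spectrum)] / 2.0   # divide by M = 2

fs, f_off, n = 8000.0, 123.0, 4096
t = np.arange(n) / fs
bits = np.random.default_rng(2).choice([-1.0, 1.0], size=n)
rx = bits * np.exp(2j * np.pi * f_off * t)    # BPSK with carrier offset
print(f"estimated offset: {estimate_offset_bpsk(rx, fs):.1f} Hz")
```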
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
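For reference, the three-parameter function all three procedures estimate can be written, in one common parameterization, as a logistic core scaled by guess and lapse rates; the paper's exact functional form may differ, and the numbers below are illustrative.

```python
import numpy as np

def psychometric(x, threshold, slope, lapse, gamma=0.5):
    """P(correct) for a 2AFC task: gamma is the guess rate, `lapse`
    lowers the upper asymptote, and a logistic core carries the
    threshold and slope parameters."""
    core = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return gamma + (1.0 - gamma - lapse) * core

gaps_ms = np.array([1, 2, 4, 8, 16, 32], dtype=float)
print(psychometric(np.log2(gaps_ms), threshold=np.log2(6.0),
                   slope=2.5, lapse=0.02))
```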
Influence of sensor ingestion timing on consistency of temperature measures.
Goodman, Daniel A; Kenefick, Robert W; Cadarette, Bruce S; Cheuvront, Samuel N
2009-03-01
The validity and the reliability of using intestinal temperature (Tint) via ingestible temperature sensors (ITS) to measure core body temperature have been demonstrated. However, the effect of elapsed time between ITS ingestion and Tint measurement has not been thoroughly studied. Eight volunteers (six men and two women) swallowed ITS 5 h (ITS-5) and 29 h (ITS-29) before 4 h of varying-intensity activity. Tint was measured simultaneously from both ITS, and Tint differences between the ITS-5 and the ITS-29 over the 4 h of activity were plotted and compared relative to a meaningful threshold of acceptance (±0.25 °C). The percentage of time during which the differences between paired ITS (ITS-5 vs ITS-29) fell outside the threshold of acceptance was calculated. Tint values showed no systematic bias, were normally distributed, and ranged from 36.94 °C to 39.24 °C. The maximum Tint difference between paired ITS was 0.83 °C, with a minimum difference of 0.00 °C. The typical magnitude of the differences (SE of the estimate) was 0.24 °C, and these differences were uniform across the entire range of observed temperatures. Paired Tint measures fell outside the threshold of acceptance 43.8% of the time during the 4 h of activity. The differences between ITS-5 and ITS-29 were larger than the threshold of acceptance during a substantial portion of the observed 4-h activity period. Ingesting an ITS more than 5 h before activity will not completely eliminate confounding factors but may improve the accuracy and consistency of core body temperature measures.
Sukumar, Subash; Waugh, Sarah J
2007-03-01
We estimated spatial summation areas for the detection of luminance-modulated (LM) and contrast-modulated (CM) blobs at the fovea and at 2.5, 5 and 10 deg of eccentricity. Gaussian profiles were added to, or multiplied with, binary white noise to create LM and CM blob stimuli, which were used to psychophysically estimate detection thresholds and spatial summation areas. The results reveal significantly larger summation areas for detecting CM than LM blobs across eccentricity. These differences are comparable to receptive-field size estimates made in V1 and V2. They support the notion that separate spatial processing occurs for the detection of LM and CM stimuli.
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Dennis R. Becker; Debra Larson; Eini C. Lowell; Robert B. Rummer
2008-01-01
The HCR (Harvest Cost-Revenue) Estimator is engineering and financial analysis software used to evaluate stand-level financial thresholds for harvesting small-diameter ponderosa pine (Pinus ponderosa Dougl. ex Laws.) in the Southwest United States. The Windows-based program helps contractors and planners to identify costs associated with tree...
What is the optimal task difficulty for reinforcement learning of brain self-regulation?
Bauer, Robert; Vukelić, Mathias; Gharabaghi, Alireza
2016-09-01
The balance between action and reward during neurofeedback may influence reinforcement learning of brain self-regulation. Eleven healthy volunteers participated in three runs of motor imagery-based brain-machine interface feedback in which a robot passively opened the hand contingent on β-band modulation. For each run, the β-desynchronization threshold required to initiate the hand robot movement increased in difficulty (low, moderate, and demanding). In this context, the incentive to learn was estimated by the change of reward per action, operationalized as the change in reward duration per movement onset. Analysis of variance revealed a significant interaction between threshold difficulty and the relationship between reward duration and number of movement onsets (p<0.001), indicating a negative learning incentive for the low-difficulty run, but a positive learning incentive for the moderate and demanding runs. Exploration of different thresholds in the same data set indicated that the learning incentive peaked at higher thresholds than the threshold which resulted in maximum classification accuracy. Specificity is more important than sensitivity of neurofeedback for reinforcement learning of brain self-regulation. Learning efficiency requires adequate challenge by neurofeedback interventions. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Effects of moisture content on wind erosion thresholds of biochar
NASA Astrophysics Data System (ADS)
Silva, F. C.; Borrego, C.; Keizer, J. J.; Amorim, J. H.; Verheijen, F. G. A.
2015-12-01
Biochar, i.e. pyrolysed biomass, as a soil conditioner is gaining increasing attention in research and industry, with guidelines and certifications being developed for biochar production, storage and handling, as well as for application to soils. Adding water to biochar aims to reduce its susceptibility to becoming airborne during and after application to soils, thereby preventing, among other things, human health issues from inhalation. The Bagnold model has previously been modified to explain the threshold friction velocity of coal particles at different moisture contents by adding an adhesive effect. However, it is unknown whether this model also works for biochar particles. We measured the threshold friction velocities of a range of biochar particles (woody feedstock) over a range of moisture contents using a wind tunnel, and tested the performance of the modified Bagnold model. Results showed that the threshold friction velocity can be increased significantly by keeping the gravimetric moisture content at or above 15% to promote adhesive effects between the small particles. For the specific biochar of this study, the modified Bagnold model accurately estimated threshold friction velocities of biochar particles up to moisture contents of 10%.
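For orientation only, the classical Bagnold fluid threshold with a clearly labeled placeholder moisture correction: the multiplicative sqrt(1 + c·w) adhesion form, its constant, and the particle density are assumptions for illustration, not the fitted model of the study above.

```python
import numpy as np

def bagnold_dry(d, rho_p=500.0, rho_a=1.2, A=0.1, g=9.81):
    """Classical Bagnold threshold friction velocity (m/s) for dry
    particles of diameter d (m); rho_p is an illustrative biochar
    particle density and A ~ 0.1 for the fluid threshold."""
    return A * np.sqrt(g * d * (rho_p - rho_a) / rho_a)

def bagnold_moist(d, w, c=5.0, **kw):
    """Placeholder adhesion term: inflate the dry threshold with
    gravimetric moisture content w (kg/kg); form and c are assumed."""
    return bagnold_dry(d, **kw) * np.sqrt(1.0 + c * w)

for w in (0.0, 0.05, 0.10, 0.15):
    print(w, round(bagnold_moist(200e-6, w), 3))
```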
TU, Frank F.; EPSTEIN, Aliza E.; POZOLO, Kristen E.; SEXTON, Debra L.; MELNYK, Alexandra I.; HELLMAN, Kevin M.
2012-01-01
Objective Catheterization to measure bladder sensitivity is aversive and hinders human participation in visceral sensory research. Therefore, we sought to characterize the reliability of sonographically estimated female bladder sensory thresholds. To demonstrate this technique's usefulness, we examined the effects of self-reported dysmenorrhea on bladder pain thresholds. Methods Bladder sensory threshold volumes were determined during provoked natural diuresis in 49 healthy women (mean age 24 ± 8 years) using three-dimensional ultrasound. Cystometric thresholds (Vfs - first sensation, Vfu - first urge, Vmt - maximum tolerance) were quantified and related to bladder urgency and pain. We estimated reliability (one-week retest and interrater). Self-reported menstrual pain was examined in relation to bladder pain, urgency and volume thresholds. Results Average bladder sensory thresholds (mL) were Vfs (160 ± 100), Vfu (310 ± 130), and Vmt (500 ± 180). Interrater reliability ranged from 0.97 to 0.99. One-week retest reliability was Vmt = 0.76 (95% CI 0.64-0.88), Vfs = 0.62 (95% CI 0.44-0.80), and Vfu = 0.63 (95% CI 0.47-0.80). Bladder filling rate correlated with all thresholds (r = 0.53-0.64, p < 0.0001). Women with moderate to severe dysmenorrhea pain had increased bladder pain and urgency at Vfs and increased pain at Vfu (p's < 0.05). In contrast, dysmenorrhea pain was unrelated to bladder capacity. Discussion Sonographic estimates of bladder sensory thresholds were reproducible and reliable. In these healthy volunteers, dysmenorrhea was associated with increased bladder pain and urgency during filling but was unrelated to capacity. Plausibly, dysmenorrhea sufferers may exhibit enhanced visceral mechanosensitivity, increasing their risk of developing chronic bladder pain syndromes. PMID:23370073
Hour-glass ceilings: Work-hour thresholds, gendered health inequities.
Dinh, Huong; Strazdins, Lyndall; Welsh, Jennifer
2017-03-01
Long workhours erode health, which the setting of maximum weekly hours aims to avert. This 48-h limit, and the evidence base supporting it, evolved from a workforce that was largely male, whose time in the labour force was enabled by women's domestic work and caregiving. The gender composition of the workforce has now changed, and many women (as well as some men) combine caregiving with paid work, a change viewed as fundamental for gender equality. However, it raises questions about the suitability of the work time limit and the extent to which it is protective of health. We estimate workhour-mental health thresholds, testing whether they vary for men and women due to gendered workloads and constraints on and off the job. Using six waves of data from a nationally representative sample of Australian adults (24-65 years) surveyed in the Household, Income and Labour Dynamics in Australia (HILDA) Survey (N = 3828 men; 4062 women), our study uses a longitudinal, simultaneous equation approach to address endogeneity. Averaging over the sample, we find an overall threshold of 39 h per week beyond which mental health declines. Separate curves then estimate thresholds for men and women, by high or low care and domestic time constraints, using stratified and pooled samples. We find gendered workhour-health limits (43.5 h for men, 38 h for women) which widen further once differences in resources on and off the job are considered. Only when time is 'unencumbered' and similar time constraints and contexts are assumed do gender gaps narrow and thresholds approximate the 48-h limit. Our study reveals limits to contemporary workhour regulation which may be systematically disadvantaging women's health. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Impact of Climate Change on Ozone-Related Mortality in Sydney
Physick, William; Cope, Martin; Lee, Sunhee
2014-01-01
Coupled global, regional and chemical transport models are now being used with relative-risk functions to determine the impact of climate change on human health. Studies have been carried out at global and regional scales, and in our paper we examine the impact of climate change on ozone-related mortality at the local scale across an urban metropolis (Sydney, Australia). Using three coupled models, with a grid spacing of 3 km for the chemical transport model (CTM), and a mortality relative-risk function of 1.0006 per 1 ppb increase in daily maximum 1-h ozone concentration, we evaluated the change in ozone concentrations and mortality between the decades 1996–2005 and 2051–2060. The global model was run with the A2 emissions scenario. As there is currently uncertainty regarding a threshold concentration below which ozone does not affect mortality, we calculated mortality estimates for three daily maximum 1-h ozone concentration thresholds: 0, 25 and 40 ppb. The mortality increase for 2051–2060 ranges from 2.3% for a 0 ppb threshold to 27.3% for a 40 ppb threshold, although the numerical increases differ little. Our modeling approach is able to identify the variation in ozone-related mortality changes at a suburban scale, estimating that climate change could lead to an additional 55 to 65 deaths across Sydney in the decade 2051–2060. Interestingly, the largest increases do not correspond spatially to the largest ozone increases or the densest population centres. The distribution pattern of the changes does not seem to vary with threshold value, while the magnitude varies only slightly. PMID:24419047
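The attributable-mortality arithmetic underneath such estimates compounds the relative risk per ppb of ozone above the chosen threshold and converts it to an attributable fraction of baseline deaths. A sketch with toy numbers, not the paper's coupled-model chain.

```python
import numpy as np

def excess_deaths(baseline_deaths, ozone_ppb, threshold_ppb, rr_per_ppb=1.0006):
    """Daily excess deaths = baseline * (1 - 1/RR), where RR compounds
    per ppb of daily maximum 1-h ozone above the threshold."""
    excess_ppb = np.maximum(np.asarray(ozone_ppb) - threshold_ppb, 0.0)
    rr = rr_per_ppb ** excess_ppb
    return float(np.sum(baseline_deaths * (1.0 - 1.0 / rr)))

daily_o3 = [35.0, 52.0, 61.0, 28.0, 44.0]   # ppb, toy series
for thr in (0.0, 25.0, 40.0):               # the three thresholds studied
    print(thr, round(excess_deaths(10.0, daily_o3, thr), 3))
```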
ERIC Educational Resources Information Center
Eaton, Karen M.; Messer, Stephen C.; Garvey Wilson, Abigail L.; Hoge, Charles W.
2006-01-01
The objectives of this study were to generate precise estimates of suicide rates in the military while controlling for factors contributing to rate variability such as demographic differences and classification bias, and to develop a simple methodology for the determination of statistically derived thresholds for detecting significant rate…
Joly, Charles-Alexandre; Péan, Vincent; Hermann, Ruben; Seldran, Fabien; Thai-Van, Hung; Truy, Eric
2017-10-01
The accuracy with which the electrically evoked compound action potential (ECAP) predicts cochlear implant (CI) fitting levels should be enhanced by adding demographic data to the models. No accurate automated fitting of CIs based on ECAP has yet been proposed. We recorded ECAPs in 45 adults who had been using MED-EL CIs for more than 11 months and collected the most comfortable loudness level (MCL) used for CI fitting (prog-MCL), perception thresholds (meas-THR), and MCLs (meas-MCL) measured with the stimulation used for ECAP recording. Linear mixed models taking cochlear site factors into account were computed to explain prog-MCL, meas-MCL, and meas-THR. Cochlear region and ECAP threshold were predictors of all three levels. In addition, significant predictors were the ECAP amplitude for the prog-MCL and the duration of deafness for the prog-MCL and the meas-THR. Estimates were most accurate for the meas-THR, followed by the meas-MCL, and least accurate for the prog-MCL. These results show that 1) ECAP thresholds are more closely related to the perception threshold than to the comfort level, 2) predictions are more accurate when inter-subject and cochlear-region variations are considered, and 3) differences between the stimulations used for ECAP recording and for CI fitting make it difficult to accurately predict the prog-MCL from the ECAP recording. The predicted prog-MCL could be used as a basis for fitting but should be used with care to avoid any uncomfortable or painful stimulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Tim Frederik, E-mail: tim.weber@med.uni-heidelberg.d; Tetzlaff, Ralf; Rengier, Fabian
The purpose of this study was to assess the magnitude and direction of respiratory displacement of the ascending and descending thoracic aorta during breathing maneuvers. In 11 healthy nonsmokers, dynamic magnetic resonance imaging was performed in transverse orientation at the tracheal bifurcation during maximum expiration and inspiration as well as tidal breathing. The magnitude and direction of aortic displacement were determined relative to the resting respiratory position for the ascending (AA) and descending (DA) aorta. To estimate a respiratory threshold for the occurrence of distinct respiratory aortic motion, the latter was related to the underlying change in anterior-posterior thorax diameter. Compound displacement between maximum expiration and inspiration was 24.3 ± 6.0 mm for the AA in the left anterior direction and 18.2 ± 5.5 mm for the DA in the right anterior direction. The mean respiratory thorax excursion during tidal breathing was 8.9 ± 2.8 mm. The respiratory threshold, i.e., the increase in thorax diameter necessary to result in respiratory aortic displacement, was estimated to be 15.7 mm. The data suggest that once a threshold of respiratory thorax excursion is exceeded, respiration is accompanied by significant displacement of the thoracic aorta. Although this threshold may not be reached during tidal breathing in the majority of individuals, segmental differences during forced respiration affect aortic geometry, may result in additional extrinsic forces on the aortic wall, and may be of significance for aortic prostheses designed for thoracic endovascular aortic repair.
Trotta-Moreu, Nuria; Lobo, Jorge M
2010-02-01
Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking 19 climatic variables into account as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 ± 29 versus 18 ± 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
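Stacking works by thresholding each species' continuous suitability map into presence/absence and summing the binary layers. A sketch on random toy maps, showing how the predicted richness shrinks as the threshold tightens.

```python
import numpy as np

def stacked_richness(suitability_stack, threshold):
    """Sum of per-species presence/absence maps obtained by cutting
    continuous suitability (0-100) at `threshold`."""
    binary = np.asarray(suitability_stack) >= threshold
    return binary.sum(axis=0)

rng = np.random.default_rng(3)
stack = rng.uniform(0, 100, size=(30, 10, 10))   # 30 species, 10 x 10 grid
for thr in (30, 50, 70):
    print(thr, stacked_richness(stack, thr).mean())
```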
Macular pigment optical density measured by heterochromatic modulation photometry.
Huchzermeyer, Cord; Schlomberg, Juliane; Welge-Lüssen, Ulrich; Berendschot, Tos T J M; Pokorny, Joel; Kremers, Jan
2014-01-01
To psychophysically determine macular pigment optical density (MPOD) employing the heterochromatic modulation photometry (HMP) paradigm by estimating 460 nm absorption at central and peripheral retinal locations. For the HMP measurements, two lights (B: 460 nm and R: 660 nm) were presented in a test field and were modulated in counterphase at medium or high frequencies. The contrasts of the two lights were varied in tandem to determine flicker detection thresholds. Detection thresholds were measured for different R:B modulation ratios. The modulation ratio with minimal sensitivity (maximal threshold) is the point of equiluminance. Measurements were performed in 25 normal subjects (11 male, 14 female; age: 30 ± 11 years, mean ± sd) using an eight-channel LED stimulator with Maxwellian view optics. The results were compared with those from two published techniques, one based on heterochromatic flicker photometry (HFP; Macular Densitometer) and the other on fundus reflectometry (MPR). We were able to estimate MPOD with HMP using a modified theoretical model that was fitted to the HMP data. The resultant HMP-based MPOD values correlated significantly with the MPR-based values and with the HFP-based values obtained at 0.25° and 0.5° retinal eccentricity. HMP is a flicker-based method with measurements taken at a constant mean chromaticity and luminance. The data can be well fit by a model that allows all data points to contribute to the photometric equality estimate. Therefore, we think that HMP may be a useful method for MPOD measurements in basic and clinical vision experiments.
Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K
2013-09-01
Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments at frequencies beyond 8000 Hz. EHF thresholds have been suggested for use in monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing for this purpose is not well established. The primary objective of this study was to determine whether EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 dB HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr) or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple and convenient method of calculating rainfall thresholds using the Rational Equation is proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of rainfall loss were considered: depression storage, vegetation interception, and soil infiltration. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for watershed soil moisture. To demonstrate the method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method approximated the rainfall thresholds obtained from the CFFSE and were consistent with the observed rainfall during flash flood events. The calculated results are therefore reasonable and the method is effective. This study provides a quick and convenient way to calculate rainfall thresholds for flash flood warning for grass-roots staff and offers technical support for rainfall threshold estimation.
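A minimal sketch of the inversion described above, using the metric form of the Rational Equation (Q = 0.278·C·i·A, with Q in m³/s, i in mm/h and A in km²); the watershed numbers are illustrative and the soil-moisture adjustment is folded into the loss term.

```python
def rainfall_threshold(q_crit, runoff_coeff, area_km2, duration_h, losses_mm):
    """Invert Q = 0.278 * C * i * A for the critical intensity, convert
    to a net rainfall depth over the event duration, and add back the
    losses (interception, depression storage, infiltration)."""
    i_crit = q_crit / (0.278 * runoff_coeff * area_km2)   # mm/h
    net_rainfall = i_crit * duration_h                    # mm
    return net_rainfall + losses_mm

# toy watershed: critical flow 45 m^3/s, C = 0.6, 38 km^2, 1-h storm
print(f"{rainfall_threshold(45.0, 0.6, 38.0, 1.0, losses_mm=12.0):.1f} mm")
```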
Ozcelik, O; Kelestimur, H
2004-01-01
The anaerobic threshold, which describes the onset of the systematic increase in blood lactate concentration, is a widely used concept in clinical and sports medicine. A deflection point in the heart rate-work rate relationship has been introduced to determine the anaerobic threshold non-invasively. However, some researchers have consistently reported a heart rate deflection at higher work rates, while others have not. The present study was designed to investigate whether the heart rate deflection point accurately predicts the anaerobic threshold under acute hypoxia. Eight untrained males performed two incremental exercise tests using an electromagnetically braked cycle ergometer: one breathing room air and one breathing 12% O2. The anaerobic threshold was estimated using the V-slope method and determined from the increase in blood lactate and the decrease in standard bicarbonate concentration. This threshold was also estimated from the heart rate-work rate relationship. Not all subjects exhibited a heart rate deflection: only two subjects in the control condition and four subjects in the hypoxia condition did so. Additionally, the heart rate deflection point overestimated the anaerobic threshold. In conclusion, the heart rate deflection point was not an accurate predictor of the anaerobic threshold, and acute hypoxia did not systematically affect the heart rate-work rate relationship.
Evaluation of bone formation in calcium phosphate scaffolds with μCT-method validation using SEM.
Lewin, S; Barba, A; Persson, C; Franch, J; Ginebra, M-P; Öhman-Mägi, C
2017-10-05
There is a plethora of calcium phosphate (CaP) scaffolds used as synthetic substitutes for bone grafts. Scaffold performance is often evaluated from the quantity of bone formed within or in direct contact with the scaffold. Micro-computed tomography (μCT) allows three-dimensional evaluation of bone formation inside scaffolds. However, the almost identical X-ray attenuation of CaP and bone hampers the separation of these phases in μCT images. Commonly, segmentation of bone in μCT images is based on grayscale intensity, with manually determined global thresholds. However, image analysis methods, and methods for manual thresholding in particular, lack standardization and may consequently suffer from subjectivity. The aim of the present study was to provide a methodological framework for addressing these issues. Bone formation in two types of CaP scaffold architectures (foamed and robocast), obtained from a larger animal study (a 12-week canine model), was evaluated by μCT. In addition, cross-sectional scanning electron microscopy (SEM) images were acquired as references to determine thresholds and to validate the result. μCT datasets were registered to the corresponding SEM reference. Global thresholds were then determined by quantitatively correlating the area fractions in the μCT image with the area fractions in the corresponding SEM image. For comparison, area fractions were also quantified using global thresholds determined manually by two different approaches. In the validation, the manually determined thresholds resulted in large average errors in area fraction (up to 17%), whereas for the evaluation using SEM references, the errors were estimated to be less than 3%. Furthermore, basing the thresholds on a single SEM reference gave lower errors than determining them manually. This study provides an objective, robust and less error-prone method for determining global thresholds for the evaluation of bone formation in CaP scaffolds.
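The reference-based calibration reduces to choosing the gray-value threshold whose resulting bone area fraction in the registered μCT slice best matches the fraction measured in the SEM image. A sketch on toy data; the registration step is assumed already done.

```python
import numpy as np

def threshold_from_reference(muct_slice, sem_bone_fraction):
    """Global threshold whose area fraction best matches the SEM value."""
    candidates = np.linspace(muct_slice.min(), muct_slice.max(), 256)
    fractions = np.array([(muct_slice >= t).mean() for t in candidates])
    return candidates[np.argmin(np.abs(fractions - sem_bone_fraction))]

rng = np.random.default_rng(4)
slice_ = rng.normal(100.0, 15.0, size=(128, 128))   # toy gray values
print(threshold_from_reference(slice_, sem_bone_fraction=0.25))
```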
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
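Threshold choice here trades sensitivity against positive predictive value; below is a toy sketch of how the two metrics respond to a candidate risk threshold (simulated scores, not NASS/CDS data).

```python
import numpy as np

def classifier_metrics(risk, severe, threshold):
    """Sensitivity and PPV of flagging a crash as severe when its
    predicted injury risk meets or exceeds `threshold`."""
    flagged = np.asarray(risk) >= threshold
    severe = np.asarray(severe, dtype=bool)
    tp = np.sum(flagged & severe)
    sensitivity = tp / severe.sum()
    ppv = tp / flagged.sum() if flagged.any() else float("nan")
    return sensitivity, ppv

rng = np.random.default_rng(5)
risk = rng.beta(1, 8, size=1000)        # toy predicted risk scores
severe = rng.random(1000) < risk        # higher risk -> more severe crashes
print(classifier_metrics(risk, severe, threshold=0.10))
```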
Tolerance for High Flavanol Cocoa Powder in Semisweet Chocolate
Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.
2013-01-01
Endogenous polyphenolic compounds in cacao impart both bitter and astringent characteristics to chocolate confections. While an increase in these compounds may be desirable from a health perspective, they are generally incongruent with consumer expectations. Traditionally, chocolate products undergo several processing steps (e.g., fermentation and roasting) that decrease polyphenol content, and thus bitterness. The objective of this study was to estimate group rejection thresholds for increased content of cocoa powder produced from under-fermented cocoa beans in a semisweet chocolate-type confection. The group rejection threshold was equivalent to 80.7% of the non-fat cocoa solids coming from the under-fermented cocoa powder. Contrary to expectations, there were no differences in rejection thresholds when participants were grouped based on their self-reported preference for milk or dark chocolate, indicating that these groups react similarly to an increase in high cocoa flavanol containing cocoa powder. PMID:23792967
Evaluation of a new model of aeolian transport in the presence of vegetation
Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Miller, Mark E.; Vest, Kimberly; Draut, Amy E.
2013-01-01
Aeolian transport is an important characteristic of many arid and semiarid regions worldwide that affects dust emission and ecosystem processes. The purpose of this paper is to evaluate a recent model of aeolian transport in the presence of vegetation. This approach differs from previous models by accounting for how vegetation affects the distribution of shear velocity on the surface rather than merely calculating the average effect of vegetation on surface shear velocity or simply using empirical relationships. Vegetation, soil, and meteorological data at 65 field sites with measurements of horizontal aeolian flux were collected from the Western United States. Measured fluxes were tested against modeled values to evaluate model performance, to obtain a set of optimum model parameters, and to estimate the uncertainty in these parameters. The same field data were used to model horizontal aeolian flux using three other schemes. Our results show that the model can predict horizontal aeolian flux with an approximate relative error of 2.1 and that further empirical corrections can reduce the approximate relative error to 1.0. The level of error is within what would be expected given uncertainties in threshold shear velocity and wind speed at our sites. The model outperforms the alternative schemes both in terms of approximate relative error and the number of sites at which threshold shear velocity was exceeded. These results lend support to an understanding of the physics of aeolian transport in which (1) vegetation's impact on transport is dependent upon the distribution of vegetation rather than merely its average lateral cover and (2) vegetation impacts surface shear stress locally by depressing it in the immediate lee of plants rather than by changing the bulk surface's threshold shear velocity. Our results also suggest that threshold shear velocity is exceeded more than might be estimated by single measurements of threshold shear stress and roughness length commonly associated with vegetated surfaces, highlighting the variation of threshold shear velocity with space and time in real landscapes.
Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C
2012-02-01
It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in power output (PO) equivalent to PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using 2 methods: LT(Δ1), a rise in blood lactate of 1 mmol·L(-1) or greater in response to an increase in workload, and LT(4.0), a blood lactate concentration of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used but is not equivalent for all methods of LT determination, including LT(Δ1).
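As an illustration of the two lactate-threshold criteria named above, here is a minimal sketch on hypothetical incremental-test data; the stage powers, lactate values, and the linear-interpolation choice are assumptions, not the authors' protocol:

```python
import numpy as np

# Hypothetical incremental-test data: power output (W) and blood lactate (mmol/L).
power   = np.array([150, 180, 210, 240, 270, 300, 330])
lactate = np.array([1.2, 1.3, 1.6, 2.1, 3.2, 4.8, 7.0])

# LT(4.0): power at a fixed 4.0 mmol/L, by linear interpolation on the lactate curve.
lt_4_0 = np.interp(4.0, lactate, power)

# LT(delta1): first stage whose lactate rises >= 1.0 mmol/L over the previous stage.
rise = np.diff(lactate)
idx = np.argmax(rise >= 1.0) + 1          # first stage meeting the criterion
lt_delta1 = power[idx]

print(f"PO at LT(4.0) ~ {lt_4_0:.0f} W")   # between the 270 W and 300 W stages
print(f"PO at LT(D1)  ~ {lt_delta1:.0f} W")
```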
Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers
Situ, Ping; Simpson, Trefford; Begley, Carolyn
2016-01-01
Purpose: To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods: 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits, and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus was estimated using a 1-100 scale (1 = very mild, 100 = very strong). Results: In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001; post hoc asymptomatic vs. symptomatic group, all p's < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p = 0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p > 0.05) at detection threshold. Conclusions: Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090
Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó
2017-08-01
Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration, above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to its dissolution behavior from immediate release tablets. The influence of API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model-dependent and a model-independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granule and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v, above which an abrupt decrease in the dissolution rate occurred, was determined for both API particle sizes. However, the increasing drug load had a more pronounced effect on the dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps such as roll compaction and tableting. Both methods applied for the estimation of the percolation threshold provided comparable values.
Barbieri, Marco; Drummond, Michael; Willke, Richard; Chancellor, Jeremy; Jolain, Bruno; Towse, Adrian
2005-01-01
It has long been suggested that, whereas the results of clinical studies of pharmaceuticals are generalizable from one jurisdiction to another, the results of economic evaluations are location dependent. There has been, however, little study of the causes of variation, whether differences in study results among countries are systematic, or whether they are important for decision making. A literature search was conducted to identify economic evaluations of pharmaceuticals conducted in two or more European countries. The studies identified were then classified by methodological type and analyzed to assess their level of variability and to identify the main causes of variation. Assessments were also made of the extent to which differences in study results among countries were systematic and whether they would lead to a different decision, assuming a range of values of the threshold willingness-to-pay for a life-year or quality-adjusted life-year (QALY). In total 46 intercountry drug comparisons were identified, 29 in multicountry studies and 17 in comparable single country studies that were considered to be sufficiently similar in terms of methodology. The type of study (i.e., trial-based or modeling study) had some impact on variability, but the most important factor was the extent of variation across countries in effectiveness, resource use or unit costs, allowed by the researcher's chosen methodology. There were few systematic differences in study results among countries, so a decision maker in country B, on seeing a recent economic evaluation of a new drug in country A, would have little basis on which to predict whether the drug, if evaluated, would be more or less cost-effective in his or her country. Given the extent of variation in cost-effectiveness estimates among countries, the importance of this for decision making depends on decision makers' thresholds in willingness-to-pay for a QALY or life-year. If a cost-effectiveness threshold (i.e., willingness-to-pay) for a life-year or QALY of $50,000 were assumed, the same conclusion regarding cost-effectiveness would be reached in most cases. This review shows that cost-effectiveness results for pharmaceuticals vary from country to country in Western Europe and that these variations are not systematic. In addition, constraints imposed by analysts may reduce apparent variability in the estimates. The lessons for inferring generalizability are not straightforward, although the implications of variation for decision making depend critically on the cost-effectiveness thresholds applying in Western Europe.
van der Maas, Nico Arie
2017-03-16
The Multiple Sclerosis Questionnaire for Physical Therapists (MSQPT) is a patient-rated outcome questionnaire for evaluating the rehabilitation of persons with multiple sclerosis (MS). Responsiveness was evaluated, and minimal important difference (MID) estimates were calculated to provide thresholds for clinical change for four items, three sections and the total score of the MSQPT. This multicentre study used a combined distribution- and anchor-based approach with multiple anchors and multiple rating-of-change questions. Responsiveness was evaluated using effect size, standardized response mean (SRM), modified SRM and relative efficiency. For distribution-based MID estimates, 0.2 and 0.33 standard deviations (SD), the standard error of measurement (SEM) and the minimal detectable change were used. Triangulation of anchor- and distribution-based MID estimates provided a range of MID values for each of the four items, the three sections and the total score of the MSQPT. The MID values were tested for their sensitivity and specificity for amelioration and deterioration for each of the four items, the three sections and the total score of the MSQPT. The MID values of each item and section and of the total score with the best sensitivity and specificity were selected as thresholds for clinical change. The outcome measures were the MSQPT, Hamburg Quality of Life Questionnaire for Multiple Sclerosis (HAQUAMS), rating-of-change questionnaires, Expanded Disability Status Scale, 6-metre timed walking test, Berg Balance Scale and 6-minute walking test. The effect size ranged from 0.46 to 1.49. The SRM data showed comparable results. The modified SRM ranged from 0.00 to 0.60. Anchor-based MID estimates were very low and were comparable with SD- and SEM-based estimates. The MSQPT was more responsive than the HAQUAMS in detecting improvement but less responsive in finding deterioration. The best MID estimates of the items, sections and total score, expressed as a percentage of their maximum score, were between 5.4% (activity) and 22% (item 10) change for improvement and between 5.7% (total score) and 22% (item 10) change for deterioration. The MSQPT is a responsive questionnaire with an adequate MID that may be used as a threshold for change during rehabilitation of MS patients. This trial was retrospectively registered (01/24/2015) in ClinicalTrials.gov as NCT02346279.
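The distribution-based MID estimates mentioned above (0.2 SD, 0.33 SD, SEM, minimal detectable change) follow standard formulas; a minimal sketch, in which the reliability coefficient and the synthetic baseline scores are assumed inputs:

```python
import numpy as np

def distribution_based_mids(baseline_scores, reliability):
    """Common distribution-based MID estimates for a patient-rated scale.
    `reliability` (e.g., a test-retest ICC) is an assumed input."""
    sd = np.std(baseline_scores, ddof=1)
    sem = sd * np.sqrt(1.0 - reliability)      # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2.0) * sem          # minimal detectable change (95%)
    return {"0.2 SD": 0.2 * sd, "0.33 SD": 0.33 * sd, "SEM": sem, "MDC95": mdc95}

rng = np.random.default_rng(0)
scores = rng.normal(50, 12, size=120)          # hypothetical MSQPT total scores
print(distribution_based_mids(scores, reliability=0.90))
```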
AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images
Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.
2017-01-01
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM
Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping
2015-01-01
Lithium-ion batteries are widely used in many electronic systems. Estimating a lithium-ion battery's remaining useful life (RUL) is therefore important, yet very difficult. One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090
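A minimal sketch of threshold-based wavelet denoising for a capacity series, assuming PyWavelets and the common Donoho-Johnstone universal threshold rather than the paper's exact multi-threshold scheme:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(capacity, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising of a battery-capacity series."""
    coeffs = pywt.wavedec(capacity, wavelet, level=level)
    # Noise scale from the finest detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(capacity)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(capacity)]

t = np.arange(168)                                       # cycle index
capacity = 1.85 * np.exp(-0.002 * t) + 0.01 * np.random.default_rng(0).standard_normal(168)
smooth = wavelet_denoise(capacity)                       # input to the RVM stage
```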
Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.
2013-01-01
The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former the thresholds varied as a function of criteria: When investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, an IQ of around 85 IQ points was found to form the threshold for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained evidence that once the intelligence threshold is met, personality factors become more predictive for creativity. On the contrary, no threshold was found for creative achievement, i.e. creative achievement benefits from higher intelligence even at fairly high levels of intellectual ability. PMID:23825884
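Segmented regression with an empirically detected breakpoint can be approximated by a simple grid search; a sketch on synthetic data, standing in for the iterative algorithms the study used:

```python
import numpy as np

def fit_breakpoint(iq, score, grid):
    """Grid-search a single breakpoint for a segmented (piecewise-linear)
    regression: intercept, slope below bp, and extra slope above bp."""
    best_bp, best_sse = None, np.inf
    for bp in grid:
        X = np.column_stack([np.ones_like(iq), iq, np.maximum(iq - bp, 0.0)])
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        sse = np.sum((score - X @ beta) ** 2)
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp

rng = np.random.default_rng(1)
iq = rng.normal(100, 15, 297)
# Synthetic creative-potential scores that stop rising above IQ 120.
score = 0.05 * np.minimum(iq, 120.0) + rng.normal(0, 0.3, iq.size)
print("detected threshold ~", fit_breakpoint(iq, score, np.arange(85, 136)))
```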
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments: The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under the Sardinian Regional Law 7/2007 (funding call 2013).
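One common graphical check of the kind described above is the stability of the fitted GP shape parameter across candidate thresholds; a minimal sketch with scipy, using synthetic wet-day totals in place of the NOAA-NCDC records:

```python
import numpy as np
from scipy.stats import genpareto

def shape_vs_threshold(daily_rain, thresholds):
    """Fit a GP distribution to excesses above each candidate threshold and
    track the shape parameter; a roughly flat region suggests a usable u."""
    shapes = []
    for u in thresholds:
        excess = daily_rain[daily_rain > u] - u
        c, loc, scale = genpareto.fit(excess, floc=0.0)  # shape, loc, scale
        shapes.append(c)
    return np.array(shapes)

# Synthetic wet-day totals (mm/d) with a low shape parameter, as in the abstract.
rain = genpareto.rvs(0.15, scale=8.0, size=40000, random_state=3)
u_grid = np.arange(2.0, 13.0, 1.0)
print(np.round(shape_vs_threshold(rain, u_grid), 3))     # should hover near 0.15
```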
Minimized-Laplacian residual interpolation for color image demosaicking
NASA Astrophysics Data System (ADS)
Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi
2014-03-01
A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm outperforms the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.
Observations Regarding Scatter Fraction and NEC Measurements for Small Animal PET
NASA Astrophysics Data System (ADS)
Yang, Yongfeng; Cherry, S. R.
2006-02-01
The goal of this study was to evaluate the magnitude and origin of scattered radiation in a small-animal PET scanner and to assess the impact of these findings on noise equivalent count rate (NECR) measurements, a metric often used to optimize scanner acquisition parameters and to compare one scanner with another. The scatter fraction (SF) was measured for line sources in air and line sources placed within a mouse-sized phantom (25 mm ⌀ × 70 mm) and a rat-sized phantom (60 mm ⌀ × 150 mm) on the microPET II small-animal PET scanner. Measurements were performed for lower energy thresholds ranging from 150-450 keV and a fixed upper energy threshold of 750 keV. Four different methods were compared for estimating the SF. Significant scatter fractions were measured with just the line source in the field of view, with the spatial distribution of these events consistent with scatter from the gantry and room environment. For mouse imaging, this component dominates over object scatter, and the measured SF is strongly method dependent. The environmental SF rapidly increases as the lower energy threshold decreases and can be more than 30% for an open energy window of 150-750 keV. The object SF originating from the mouse phantom is about 3-4% and does not change significantly as the lower energy threshold increases. The object SF for the rat phantom ranges from 10 to 35% for different energy windows and increases as the lower energy threshold decreases. Because the measured SF is highly dependent on the method, and there is as yet no agreed upon standard for animal PET, care must be exercised when comparing NECR for small objects between different scanners. Differences may be methodological rather than reflecting any relevant difference in the performance of the scanner. Furthermore, these results have implications for scatter correction methods when the majority of the detected scatter does not arise from the object itself.
The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal
NASA Astrophysics Data System (ADS)
Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis
2016-08-01
The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme used in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions that fall below a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good capability in denoising and detail preservation.
NASA Astrophysics Data System (ADS)
Ning, Jianguo; Wang, Jun; Jiang, Jinquan; Hu, Shanchao; Jiang, Lishuai; Liu, Xuesheng
2018-01-01
A new energy-dissipation method to identify crack initiation and propagation thresholds is introduced. Conventional and cyclic loading-unloading triaxial compression tests and acoustic emission experiments were performed for coal specimens from a 980-m deep mine with different confining pressures of 10, 15, 20, 25, 30, and 35 MPa. Stress-strain relations, acoustic emission patterns, and energy evolution characteristics obtained during the triaxial compression tests were analyzed. The majority of the input energy stored in the coal specimens took the form of elastic strain energy. After the elastic-deformation stage, part of the input energy was consumed by stable crack propagation. However, with an increase in stress levels, unstable crack propagation commenced, and the energy dissipation and coal damage were accelerated. The variation in the pre-peak energy-dissipation ratio was consistent with the coal damage. This new method demonstrates that the crack initiation threshold was proportional to the peak stress (σp), ranging from 0.4351σp to 0.4753σp, and the crack damage threshold ranged from 0.8087σp to 0.8677σp.
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation, considering the sensitivity, or the ability to identify the true positive cases, and the specificity, or the ability to identify the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both the female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested, according to the Youden index, that the decision criterion be set at the lower value of 12.928 years for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
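Selecting a decision criterion by the Youden index from an ROC curve, as described above, can be sketched as follows; the dental-age scores here are simulated stand-ins, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical data: dental-age estimates and true status (1 = age >= 14 years).
rng = np.random.default_rng(4)
over14 = rng.integers(0, 2, 200)
dental_age = np.where(over14, rng.normal(14.6, 0.9, 200), rng.normal(13.1, 0.9, 200))

fpr, tpr, thr = roc_curve(over14, dental_age)
print("AUC =", round(auc(fpr, tpr), 3))

# Youden index J = sensitivity + specificity - 1; pick the cut-off maximizing J.
j = tpr - fpr
cut = thr[np.argmax(j)]
print("decision criterion =", round(cut, 3), "years")
```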
Threshold-based system for noise detection in multilead ECG recordings.
Jekova, Irena; Krasteva, Vessela; Christov, Ivaylo; Abächerli, Roger
2012-09-01
This paper presents a system for detection of the most common noise types seen on the electrocardiogram (ECG), in order to evaluate whether an episode from a 12-lead ECG is reliable for diagnosis. It implements criteria for estimation of the noise corruption level in specific frequency bands, aiming to identify the main sources of ECG quality disruption, such as missing signal or limited dynamics of the QRS components above 4 Hz; presence of high-amplitude and steep artifacts seen above 1 Hz; baseline drift estimated at frequencies below 1 Hz; power-line interference in a band ±2 Hz around its central frequency; and high-frequency and electromyographic noise above 20 Hz. All noise tests are designed to process the ECG series in the time domain and include 13 adjustable thresholds for amplitude and slope criteria, which are evaluated over adjustable time intervals and numbers of leads. The system allows flexible extension toward application-specific requirements for the noise levels in acceptable-quality ECGs. Training of different threshold settings to determine different positive noise detection rates is performed with the annotated set of 1000 ECGs from the PhysioNet database created for the Computing in Cardiology Challenge 2011. Two implementations are highlighted on the receiver operating characteristic (area 0.968) to fit different applications. The implementation with high sensitivity (Se = 98.7%, Sp = 80.9%) serves as a reliable alarm when there are any incidental problems with the ECG acquisition, while the implementation with high specificity (Sp = 97.8%, Se = 81.8%) is less susceptible to transient problems and instead validates noisy ECGs with acceptable quality during a small portion of the recording.
Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran
2009-10-01
Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.
On the degrees of freedom of reduced-rank estimators in multivariate regression
Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.
2015-01-01
We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
Li, S; Oreffo, ROC; Sengers, BG; Tare, RS
2014-01-01
Significant oxygen gradients occur within tissue engineered cartilaginous constructs. Although oxygen tension is an important limiting parameter in the development of new cartilage matrix, its precise role in matrix formation by chondrocytes remains controversial, primarily due to discrepancies in the experimental setup applied in different studies. In this study, the specific effects of oxygen tension on the synthesis of cartilaginous matrix by human articular chondrocytes were studied using a combined experimental-computational approach in a “scaffold-free” 3D pellet culture model. Key parameters including cellular oxygen uptake rate were determined experimentally and used in conjunction with a mathematical model to estimate oxygen tension profiles in 21-day cartilaginous pellets. A threshold oxygen tension (pO2 ≈ 8% atmospheric pressure) for human articular chondrocytes was estimated from these inferred oxygen profiles and histological analysis of pellet sections. Human articular chondrocytes that experienced oxygen tension below this threshold demonstrated enhanced proteoglycan deposition. Conversely, oxygen tension higher than the threshold favored collagen synthesis. This study has demonstrated a close relationship between oxygen tension and matrix synthesis by human articular chondrocytes in a “scaffold-free” 3D pellet culture model, providing valuable insight into the understanding and optimization of cartilage bioengineering approaches. Biotechnol. Bioeng. 2014;111: 1876–1885. PMID:24668194
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for derivation of the threshold and on a real gas turbine engine system for model identification.
McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael
2015-01-01
To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated the numbers indicated for new statin treatment, if cholesterol were measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. The implications depend on the value of statins in those at low to medium risk, and on whether there are harms. Copyright © 2014. Published by Elsevier Inc.
Temporal clustering of floods in Germany: Do flood-rich and flood-poor periods exist?
NASA Astrophysics Data System (ADS)
Merz, Bruno; Nguyen, Viet Dung; Vorogushyn, Sergiy
2016-10-01
The repeated occurrence of exceptional floods within a few years, such as the Rhine floods in 1993 and 1995 and the Elbe and Danube floods in 2002 and 2013, suggests that floods in Central Europe may be organized in flood-rich and flood-poor periods. This hypothesis is studied by testing the significance of temporal clustering in flood occurrence (peak-over-threshold) time series for 68 catchments across Germany for the period 1932-2005. To assess the robustness of the results, different methods are used: Firstly, the index of dispersion, which quantifies the departure from a homogeneous Poisson process, is investigated. Further, the time-variation of the flood occurrence rate is derived by non-parametric kernel implementation and the significance of clustering is evaluated via parametric and non-parametric tests. Although the methods give consistent overall results, the specific results differ considerably. Hence, we recommend applying different methods when investigating flood clustering. For flood estimation and risk management, it is of relevance to understand whether clustering changes with flood severity and time scale. To this end, clustering is assessed for different thresholds and time scales. It is found that the majority of catchments show temporal clustering at the 5% significance level for low thresholds and time scales of one to a few years. However, clustering decreases substantially with increasing threshold and time scale. We hypothesize that flood clustering in Germany is mainly caused by catchment memory effects along with intra- to inter-annual climate variability, and that decadal climate variability plays a minor role.
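The index of dispersion mentioned above is the variance-to-mean ratio of event counts per time window; a minimal sketch for annual peak-over-threshold counts, with the occurrence years simulated rather than taken from the German records:

```python
import numpy as np

def index_of_dispersion(event_years, start, end):
    """Variance-to-mean ratio of annual peak-over-threshold counts.
    ~1 for a homogeneous Poisson process; >1 suggests temporal clustering."""
    counts = np.array([(event_years == y).sum() for y in range(start, end + 1)])
    return counts.var(ddof=1) / counts.mean()

# Hypothetical POT occurrence years for one catchment, 1932-2005; a clustered
# series would concentrate events in a few "flood-rich" years.
rng = np.random.default_rng(5)
years = rng.choice(np.arange(1932, 2006), size=90)
print(round(index_of_dispersion(years, 1932, 2005), 2))
```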
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders-of-magnitude improvement in sensor dynamic range.
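Under the Poisson photon-count model implied above, the one-bit output is Bernoulli with success probability P(N ≥ q), whose derivative with respect to the intensity is the Poisson pmf at q − 1; a sketch of the resulting Fisher information and the oracle threshold choice (the intensity value is an assumption):

```python
import numpy as np
from scipy.stats import poisson

def fisher_info_binary(theta, q):
    """Fisher information of a one-bit, threshold-q observation of a
    Poisson(theta) photon count: I = p'(theta)^2 / (p (1 - p)),
    using d/dtheta P(N >= q) = pmf(q - 1; theta)."""
    p = poisson.sf(q - 1, theta)          # P(N >= q)
    dp = poisson.pmf(q - 1, theta)
    return dp**2 / (p * (1.0 - p))

theta = 20.0                              # assumed incident intensity (expected photons)
best_q = max(range(1, 60), key=lambda q: fisher_info_binary(theta, q))
print("oracle threshold for theta = 20:", best_q)   # lands near theta itself
```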
Spatial layout affects speed discrimination
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1997-01-01
We address a surprising result in a previous study of speed discrimination with multiple moving gratings: discrimination thresholds decreased when the number of stimuli was increased, but remained unchanged when the area of a single stimulus was increased [Verghese & Stone (1995). Vision Research, 35, 2811-2823]. In this study, we manipulated the spatial- and phase relationship between multiple grating patches to determine their effect on speed discrimination thresholds. In a fusion experiment, we merged multiple stimulus patches, in stages, into a single patch. Thresholds increased as the patches were brought closer and their phase relationship was adjusted to be consistent with a single patch. Thresholds increased further still as these patches were fused into a single patch. In a fission experiment, we divided a single large patch into multiple patches by superimposing a cross with luminance equal to that of the background. Thresholds decreased as the large patch was divided into quadrants and decreased further as the quadrants were maximally separated. However, when the cross luminance was darker than the background, it was perceived as an occluder and thresholds, on average, were unchanged from that for the single large patch. A control experiment shows that the observed trend in discrimination thresholds is not due to the differences in perceived speed of the stimuli. These results suggest that the parsing of the visual image into entities affects the combination of speed information across space, and that each discrete entity effectively provides a single independent estimate of speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehan, Daniel M.
2006-01-15
We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from ≈1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate.
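A minimal sketch of fitting an MM equation with a nonhormonal background and an endogenous-dose term, on made-up dose-response points; the functional form here is one plausible reading of the abstract, not the author's exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_response(dose, rmax, k, d0, background):
    """Michaelis-Menten response with a nonhormonal background and an
    endogenous (estradiol-equivalent) dose d0 added to the applied dose."""
    return background + rmax * (dose + d0) / (k + dose + d0)

# Hypothetical dose-response points (applied dose, percent response).
dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([4.0, 8.0, 15.0, 33.0, 58.0, 80.0, 92.0])

popt, _ = curve_fit(mm_response, dose, resp, p0=[100.0, 3.0, 0.05, 2.0])
rmax, k, d0, bg = popt
print(f"estimated endogenous dose d0 ~ {d0:.3f} (same units as applied dose)")
```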
Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology
NASA Astrophysics Data System (ADS)
Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.
2015-01-01
Collagen-elastin biopolymer thin films treated by a CPA Ti:Sapphire laser (Femtopower Compact Pro) at 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. A process of surface modification and microporous scaffold creation after ultrashort laser irradiation has been observed. The single-shot (N=1) and multi-shot (N>1) ablation threshold values were estimated by studying the linear relationship between the square of the crater diameter D² and the logarithm of the laser fluence F, determining the threshold fluences for N = 1, 2, 5, 10, 15 and 30 laser pulses. An incubation analysis was also performed, calculating the incubation coefficient ξ for the multi-shot fluence threshold via the power-law relationship Fth(N) = Fth(1)·N^(ξ-1). We also present an alternative multi-shot ablation threshold calculation based on the logarithmic dependence of the ablation rate d on the laser fluence. The morphological surface changes of the modified regions were characterized by scanning electron microscopy to assess the variations generated by the laser treatment.
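For a Gaussian beam, the linear relation D² = 2w₀² ln(F/Fth) underlies the threshold fit described above; a sketch with hypothetical crater data, plus the power-law incubation fit:

```python
import numpy as np

def ablation_threshold(fluence, diameter):
    """Single- or multi-shot threshold from D^2 = 2 w0^2 ln(F / Fth):
    regress D^2 on ln F and solve for Fth."""
    slope, intercept = np.polyfit(np.log(fluence), diameter**2, 1)
    w0 = np.sqrt(slope / 2.0)                 # Gaussian beam radius
    f_th = np.exp(-intercept / slope)
    return f_th, w0

def incubation_coefficient(n_pulses, f_th_n):
    """Fit Fth(N) = Fth(1) * N^(xi - 1) on log-log axes; returns (xi, Fth(1))."""
    slope, intercept = np.polyfit(np.log(n_pulses), np.log(f_th_n), 1)
    return slope + 1.0, np.exp(intercept)

# Hypothetical crater data for N = 1 (fluence in J/cm^2, diameter in um).
F = np.array([0.8, 1.2, 1.8, 2.7, 4.0])
D = np.array([8.0, 14.0, 18.5, 22.0, 25.0])
print(ablation_threshold(F, D))               # (Fth ~ 0.67 J/cm^2, w0 ~ 13 um)
```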
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the diverse illumination conditions encountered throughout the day, which leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, Fuzzy-based Vehicle Analysis (FBA), to reduce false estimates in vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% with a low false detection rate of about 3.92%.
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
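A minimal sketch of single-pass noise-power estimation by an order statistic and by threshold-and-count, assuming exponentially distributed power samples (one common noise model for spectral power; the HRMS statistics may differ):

```python
import numpy as np

def noise_power_threshold_count(samples, threshold):
    """Single-pass threshold-and-count estimate, assuming exponentially
    distributed power samples (chi-squared, 2 DOF):
    P(x > T) = exp(-T / mu)  =>  mu = T / (-ln(fraction above T))."""
    frac = np.count_nonzero(samples > threshold) / samples.size
    return threshold / -np.log(frac)

rng = np.random.default_rng(6)
x = rng.exponential(scale=2.5, size=100_000)       # true noise power mu = 2.5
print(round(noise_power_threshold_count(x, threshold=3.0), 3))   # ~2.5
# Order-statistic alternative: for an exponential, median = mu * ln 2.
print(round(np.quantile(x, 0.5) / np.log(2.0), 3))               # ~2.5
```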
Oil-in-Water Emulsion Exhibits Bitterness-Suppressing Effects in a Sensory Threshold Study.
Torrico, Damir Dennis; Sae-Eaw, Amporn; Sriwattana, Sujinda; Boeneke, Charles; Prinyawiwatkul, Witoon
2015-06-01
Little is known about how emulsion characteristics affect saltiness/bitterness perception. Sensory detection and recognition thresholds of NaCl, caffeine, and KCl in aqueous solution compared with oil-in-water emulsion systems were evaluated. For emulsions, NaCl, KCl, or caffeine were dissolved in water + emulsifier and mixed with canola oil (20% by weight). Two emulsions were prepared: emulsion 1 (viscosity = 257 cP) and emulsion 2 (viscosity = 59 cP). The forced-choice ascending concentration series method of limits (ASTM E-679-04) was used to determine detection and/or recognition thresholds at 25 °C. Group best estimate threshold (GBET) geometric means were expressed as g/100 mL. Comparing NaCl with KCl, there were no significant differences in detection GBET values for all systems (0.0197 - 0.0354). For saltiness recognition thresholds, KCl GBET values were higher compared with NaCl GBET (0.0822 - 0.1070 compared with 0.0471 - 0.0501). For NaCl and KCl, emulsion 1 and/or emulsion 2 did not significantly affect the saltiness recognition threshold compared with that of the aqueous solution. However, the bitterness recognition thresholds of caffeine and KCl in solution were significantly lower than in the emulsions (0.0242 - 0.0586 compared with 0.0754 - 0.1025). Gender generally had a marginal effect on threshold values. This study showed that, compared with the aqueous solutions, emulsions did not significantly affect the saltiness recognition threshold of NaCl and KCl, but exhibited bitterness-suppressing effects on KCl and/or caffeine. © 2015 Institute of Food Technologists®
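A sketch of the ASTM E-679 style group threshold computation: the group best estimate threshold (GBET) is the geometric mean of the individual best-estimate thresholds (the BET values below are invented):

```python
import numpy as np

def group_best_estimate_threshold(individual_bets):
    """ASTM E-679 style group threshold: geometric mean of the individual
    best-estimate thresholds (each typically the geometric mean of the last
    missed and first correctly recognized concentration)."""
    bets = np.asarray(individual_bets, dtype=float)
    return np.exp(np.mean(np.log(bets)))

# Hypothetical individual BETs for NaCl recognition, g/100 mL.
bets = [0.035, 0.050, 0.071, 0.035, 0.100, 0.050, 0.071]
print(f"GBET = {group_best_estimate_threshold(bets):.4f} g/100 mL")
```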
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan, at 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We have observed a non-linear concentration-response relationship, with thresholds, between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
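A linear-threshold (hockey-stick) Poisson model can be profiled over candidate thresholds by deviance; a sketch with statsmodels on simulated counts, with a single confounder standing in for the study's full adjustment set:

```python
import numpy as np
import statsmodels.api as sm

def fit_linear_threshold(o3, deaths, covariates, u_grid):
    """Profile a hockey-stick term max(0, O3 - u) in a Poisson regression
    and keep the threshold u with the smallest deviance."""
    best_u, best_dev = None, np.inf
    for u in u_grid:
        X = sm.add_constant(np.column_stack([covariates, np.maximum(o3 - u, 0.0)]))
        fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
        if fit.deviance < best_dev:
            best_u, best_dev = u, fit.deviance
    return best_u

rng = np.random.default_rng(7)
o3 = rng.gamma(4.0, 7.0, 3000)                         # daily mean O3 (ppb)
temp = rng.normal(18, 7, 3000)                          # example confounder
mu = np.exp(4.0 + 0.01 * np.maximum(o3 - 30.0, 0.0))    # true threshold at 30 ppb
deaths = rng.poisson(mu)
print(fit_linear_threshold(o3, deaths, temp[:, None], np.arange(10, 45, 2)))
```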
Gender differences in developmental dyscalculia depend on diagnostic criteria.
Devine, Amy; Soltész, Fruzsina; Nobes, Alison; Goswami, Usha; Szűcs, Dénes
2013-10-01
Developmental dyscalculia (DD) is a learning difficulty specific to mathematics learning. The prevalence of DD may be equivalent to that of dyslexia, posing an important challenge for effective educational provision. Nevertheless, there is no agreed definition of DD and there are controversies surrounding cutoff decisions, specificity and gender differences. In the current study, 1004 British primary school children completed mathematics and reading assessments. The prevalence of DD and gender ratio were estimated in this sample using different criteria. When using absolute thresholds, the prevalence of DD was the same for both genders regardless of the cutoff criteria applied, however gender differences emerged when using a mathematics-reading discrepancy definition. Correlations between mathematics performance and the control measures selected to identify a specific learning difficulty affect both prevalence estimates and whether a gender difference is in fact identified. Educational implications are discussed.
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
William H. Cooke; Dennis M. Jacobs
2002-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
USDA-ARS?s Scientific Manuscript database
The calculation of a thermal based Crop Water Stress Index (CWSI) requires an estimate of canopy temperature under non-water stressed conditions. The objective of this study was to assess the influence of different wine grape cultivars on the performance of models that predict canopy temperature non...
De Loof, Esther; Van Opstal, Filip; Verguts, Tom
2016-04-01
Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
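The threshold-versus-efficiency distinction above can be made concrete with a simple drift-diffusion simulation: raising the decision boundary slows responses at a fixed drift rate; the parameter values here are illustrative only:

```python
import numpy as np

def simulate_ddm(drift, threshold, n_trials=2000, dt=0.001, noise=1.0, seed=8):
    """Simulate a basic drift-diffusion process: evidence starts at 0 and a
    response occurs when it crosses +threshold (correct) or -threshold."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)
    return np.mean(rts), np.mean(correct)

# Same drift (processing efficiency), different boundaries (threshold setting):
print(simulate_ddm(drift=1.5, threshold=0.8))   # lower threshold: faster
print(simulate_ddm(drift=1.5, threshold=1.2))   # higher threshold: slower
```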
NASA Astrophysics Data System (ADS)
Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.
2018-05-01
The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne remote sensing program in polar Earth science observation; it collects airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses the optimal threshold, obtained from experiments in the Beaufort Sea, to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selections with the lowest Airborne Topographic Mapper (ATM) L1B elevation thresholds of 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% in sections A, B and C; the means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m, respectively. Our study shows that the lowest 0.1% of the L1B data is the optimal threshold. The optimal threshold and manual selections were also used to calculate the instantaneous sea surface height over images with leads; we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height is estimated using the linear relationships between distance and the sea surface heights calculated over images with leads.
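A sketch of the lowest-elevation threshold estimate described above, taking the mean of the lowest 0.1% of ATM L1B elevations in a section; the synthetic elevations and lead fraction are assumptions:

```python
import numpy as np

def local_sea_surface_height(atm_elevations, lowest_fraction=0.001):
    """Estimate the instantaneous sea surface as the mean of the lowest
    0.1% of ATM L1B elevations in a section (the optimal threshold above)."""
    cutoff = np.quantile(atm_elevations, lowest_fraction)
    lowest = atm_elevations[atm_elevations <= cutoff]
    return lowest.mean()

rng = np.random.default_rng(9)
ice = rng.normal(0.45, 0.12, 50000)          # ice freeboard + roughness (m), assumed
leads = rng.normal(0.0, 0.03, 60)            # open-water lead returns (m), assumed
elev = np.concatenate([ice, leads])
print(round(local_sea_surface_height(elev), 3))   # near the lead level, ~0 m
```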
Baird, Katherine E
2016-01-01
Background: This article compares the burden that medical cost-sharing requirements place on households in the United States and Canada. It estimates the probability that individuals with similar demographic features in the two countries have large medical expenses relative to income. Method: The study uses 2010 nationally representative household survey data harmonized for cross-national comparisons to identify individuals with high medical expenses relative to income. Using logistic regression, it estimates the probability of high expenses occurring among 10 different demographic groups in the two countries. Results: The results show the risk of large medical expenses in the United States is 1.5–4 times higher than it is in Canada, depending on the demographic group and spending threshold used. The United States compares least favorably when evaluating poorer citizens and when using a higher spending threshold. Conclusion: Recent health care reforms can be expected to reduce Americans’ catastrophic health expenses, but it will take very large reductions in out-of-pocket expenditures—larger than can be expected—if poorer and middle-class families are to have the financial protection from high health care costs that their counterparts in Canada have. PMID:26985389
Online Sensor Fault Detection Based on an Improved Strong Tracking Filter
Wang, Lijuan; Wu, Lifeng; Guan, Yong; Wang, Guohui
2015-01-01
We propose a method for online sensor fault detection based on an improved strong tracking filter, the strong tracking cubature Kalman filter (STCKF). The cubature rule is used to estimate states, improving estimation accuracy in the nonlinear case. A residual, the difference between an estimated value and the true value, is regarded as a signal that carries fault information. The threshold is set at a reasonable level and compared with the residuals to determine whether or not the sensor is faulty. The proposed method requires only a nominal plant model and uses the STCKF to estimate the original state vector. The effectiveness of the algorithm is verified by simulation on a drum-boiler model. PMID:25690553
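The residual test at the core of the method can be sketched as follows; the state estimator itself (cubature-rule propagation with a strong tracking fading factor) is abstracted away, so the filter output below is a placeholder rather than an STCKF implementation.

```python
# Residual-vs-threshold fault test (sketch); the estimator is assumed given.
import numpy as np

def detect_fault(measured, estimated, threshold):
    """Flag samples whose residual magnitude exceeds the threshold."""
    residual = np.asarray(measured) - np.asarray(estimated)
    return np.abs(residual) > threshold

# Usage: a sensor with an injected bias fault against a (hypothetical) estimate.
t = np.linspace(0, 10, 200)
truth = np.sin(t)                                # stand-in for the filter estimate
measured = truth + 0.02 * np.random.randn(t.size)
measured[150:] += 0.5                            # bias fault from sample 150 on
flags = detect_fault(measured, truth, threshold=0.2)
print("first flagged sample:", np.argmax(flags))
```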
Jit, Mark; Bilcke, Joke; Mangen, Marie-Josée J; Salo, Heini; Melliez, Hugues; Edmunds, W John; Yazdan, Yazdanpanah; Beutels, Philippe
2009-10-19
Cost-effectiveness analyses are usually not directly comparable between countries because of differences in analytical and modelling assumptions. We investigated the cost-effectiveness of rotavirus vaccination in five European Union countries (Belgium, England and Wales, Finland, France and the Netherlands) using a single model, burden of disease estimates supplied by national public health agencies and a subset of common assumptions. Under base case assumptions (vaccination with Rotarix, 3% discount rate, health care provider perspective, no herd immunity and quality of life of one caregiver affected by a rotavirus episode) and a cost-effectiveness threshold of €30,000, vaccination is likely to be cost effective in Finland only. However, single changes to assumptions may make it cost effective in Belgium and the Netherlands. The estimated threshold price per dose for Rotarix (excluding administration costs) to be cost effective was €41 in Belgium, €28 in England and Wales, €51 in Finland, €36 in France and €46 in the Netherlands.
Constraints on the FRB rate at 700-900 MHz
NASA Astrophysics Data System (ADS)
Connor, Liam; Lin, Hsiu-Hsien; Masui, Kiyoshi; Oppermann, Niels; Pen, Ue-Li; Peterson, Jeffrey B.; Roman, Alexander; Sievers, Jonathan
2016-07-01
Estimating the all-sky rate of fast radio bursts (FRBs) has been difficult due to small-number statistics and the fact that they are seen by disparate surveys in different regions of the sky. In this paper we provide limits for the FRB rate at 800 MHz based on the only burst detected at frequencies below 1.4 GHz, FRB 110523. We discuss the difficulties in rate estimation, particularly in providing an all-sky rate above a single fluence threshold. We find an implied rate between 700 and 900 MHz that is consistent with the rate at 1.4 GHz, scaling to 6.4^{+29.5}_{-5.0} × 10^3 sky^-1 d^-1 for an HTRU-like survey. This is promising for upcoming experiments below 1 GHz like CHIME and UTMOST, for which we forecast detection rates. Given 110523's discovery at 32σ with nothing weaker detected, down to the threshold of 8σ, we find consistency with a Euclidean flux distribution but disfavour steep distributions, ruling out γ > 2.2.
Constraining Night Time Ecosystem Respiration by Inverse Approaches
NASA Astrophysics Data System (ADS)
Juang, J.; Stoy, P. C.; Siqueira, M. B.; Katul, G. G.
2004-12-01
Estimating nighttime ecosystem respiration remains a key challenge in quantifying ecosystem carbon budgets. Currently, nighttime eddy-covariance (EC) flux measurements are plagued by uncertainties often attributed to poor mixing within the canopy volume, non-turbulent transport of CO2 into and out of the canopy, and non-stationarity and intermittency. Here, we explore the use of second-order closure models to estimate nighttime ecosystem respiration by mathematically linking sources of CO2 to mean concentration profiles via the continuity equation and the CO2 flux budget equation modified to include thermal stratification. By forcing this model to match, in a root-mean-squared sense, the nighttime measured mean CO2 concentration profiles within the canopy, the above-ground CO2 production and forest-floor respiration can be estimated via multi-dimensional optimization techniques. We show that in a maturing pine and a mature hardwood forest, these optimized CO2 sources are (1) consistently larger than the eddy-covariance flux measurements above the canopy, and (2) in good agreement with chamber-based measurements. We also show that by linking the optimized nighttime ecosystem respiration to temperature measurements, the estimated annual ecosystem respiration from this approach agrees well with biometric estimates, at least when compared to eddy-covariance methods conditioned on a friction velocity threshold. The difference between the annual ecosystem respiration obtained by this optimization method and the friction-velocity-thresholded nighttime EC fluxes can be as large as 700 g C m^-2 (in 2003) for the maturing pine forest, which is about 40% of the ecosystem respiration. For 2001 and 2002, the annual ecosystem respiration differences between the EC-based and the proposed approach were on the order of 300 to 400 g C m^-2.
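The inverse step can be sketched as a least-squares fit of source strengths to measured concentration profiles; the toy linear forward operator below stands in for the second-order closure model, and all names and values are illustrative assumptions, not the study's model.

```python
# Sketch: choose CO2 source strengths so a forward model reproduces measured
# mean concentration profiles in a root-mean-square sense.
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.5, 10.0, 20)                   # measurement heights (m)
# Toy kernel: concentration response at height z per unit source at 0 m
# (forest floor) and 5 m (canopy); a placeholder for the closure model.
D = np.exp(-np.abs(z[:, None] - np.array([0.0, 5.0])[None, :]))
c_meas = D @ np.array([3.0, 1.0]) + 0.05 * np.random.randn(z.size)

def rms_misfit(sources):
    return np.sqrt(np.mean((D @ sources - c_meas) ** 2))

res = minimize(rms_misfit, x0=[1.0, 1.0], bounds=[(0, None), (0, None)])
print("optimized floor / canopy sources:", res.x)
```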
Spatial Interpolation of Rain-field Dynamic Time-Space Evolution in Hong Kong
NASA Astrophysics Data System (ADS)
Liu, P.; Tung, Y. K.
2017-12-01
Accurate and reliable measurement and prediction of the spatial and temporal distribution of rain-fields over a wide range of scales are important topics in hydrologic investigations. In this study, a geostatistical treatment of the precipitation field is adopted. To estimate the rainfall intensity over a study domain from the sample values and the spatial structure of the radar data, the cumulative distribution functions (CDFs) at all unsampled locations were estimated. Indicator Kriging (IK) was used to estimate the exceedance probabilities for different pre-selected cutoff levels, and a procedure was implemented for interpolating CDF values between the thresholds derived from the IK. Different interpolation schemes for the CDF were proposed and their influence on performance was also investigated. The performance measures and visual comparison between the observed rain-field and the IK-based estimation suggest that the proposed method provides accurate estimates of the indicator variables and is capable of producing realistic images.
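The CDF-interpolation step can be sketched as follows, assuming indicator kriging has already produced exceedance probabilities at the pre-selected cutoffs for one unsampled location; the cutoff levels and probabilities below are invented for illustration, and linear interpolation is just one of the schemes the study compares.

```python
# Assemble and interpolate a local CDF from IK exceedance probabilities (sketch).
import numpy as np

cutoffs = np.array([1.0, 5.0, 10.0, 20.0, 40.0])     # rain-rate cutoffs (mm/h)
p_exceed = np.array([0.85, 0.55, 0.30, 0.10, 0.02])  # IK estimates of P(Z > z_k)
cdf = 1.0 - p_exceed                                  # F(z_k) = 1 - P(Z > z_k)

def cdf_at(z):
    """Piecewise-linear interpolation of the CDF between the IK cutoffs."""
    return np.interp(z, cutoffs, cdf, left=0.0, right=1.0)

# E-type estimate of the local rain rate: mean of the interpolated distribution.
grid = np.linspace(cutoffs[0], cutoffs[-1], 400)
dz = grid[1] - grid[0]
pdf = np.gradient(cdf_at(grid), grid)                 # density from the CDF
print("E-type rain-rate estimate: %.2f mm/h" % (np.sum(grid * pdf) * dz))
```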
Eaton, James; Mealing, Stuart; Thompson, Juliette; Moat, Neil; Kappetein, Pieter; Piazza, Nicolo; Busca, Rachele; Osnabrugge, Ruben
2014-05-01
Health Technology Assessment (HTA) agencies often undertake a review of economic evaluations of an intervention during an appraisal in order to identify published estimates of cost-effectiveness, to elicit comparisons with the results of their own model, and to support local reimbursement decision-making. The aim of this research is to determine whether Transcatheter Aortic Valve Implantation (TAVI) compared to medical management (MM) is cost-effective in patients ineligible for surgical aortic valve replacement (SAVR), across different jurisdictions and country-specific evaluations. A systematic review of the literature from 2007-2012 was performed in the MEDLINE, MEDLINE in-process, EMBASE, and UK NHS EED databases according to standard methods, supplemented by a search of published HTA models. All identified publications were reviewed independently by two health economists. The British Medical Journal (BMJ) 35-point checklist for economic evaluations was used to assess study reporting. To compare results, incremental cost effectiveness ratios (ICERs) were converted to 2012 dollars using purchasing power parity (PPP) techniques. Six studies were identified representing five reimbursement jurisdictions (England/Wales, Scotland, the US, Canada, and Belgium) and different modeling techniques. The identified economic evaluations represent different willingness-to-pay thresholds, discount rates, medical costs, and healthcare systems. In addition, the model structures, time horizons, and cycle lengths varied. When adjusting for differences in currencies, the ICERs ranged from $27K-$65K per QALY gained. Despite notable differences in modeling approach, under the thresholds defined by using either the local threshold value or that recommended by the World Health Organization (WHO) threshold value, each study showed that TAVI was likely to be a cost-effective intervention for patients ineligible for SAVR.
Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J
2016-07-01
To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m^2): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73 m^2 (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm^2) than with a small (2.7 cm^2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)
2001-01-01
This research considers the application of robust Z_1 estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z_1 estimators based on multiplier theory and then develops a fixed-threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z_1 estimation with a fixed threshold are demonstrated. FD that combines robust Z_1 estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
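The empirical optimization can be sketched as a scan over candidate thresholds, maximizing the correlation between each snapshot's area-average rain rate and its fractional coverage above the threshold; the snapshots below are synthetic lognormal fields with loosely GATE-like parameters (an assumption), not the GATE radar data.

```python
# Optimal-threshold scan for the threshold method (sketch).
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.lognormal(mean=0.5, sigma=1.2, size=(300, 50, 50))  # mm/h fields

moments = snapshots.mean(axis=(1, 2))            # area-average rain rate (1st moment)
taus = np.linspace(0.5, 30.0, 60)                # candidate thresholds (mm/h)
corr = [np.corrcoef(moments, (snapshots > t).mean(axis=(1, 2)))[0, 1]
        for t in taus]                           # fractional coverage above tau
print("optimal threshold ~ %.1f mm/h" % taus[int(np.argmax(corr))])
```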
Rasch Analyses of Very Low Food Security among Households and Children in the Three City Study*
Moffitt, Robert A.; Ribar, David C.
2017-01-01
The longitudinal Three City Study of low-income families with children measures food hardships using fewer questions and some different questions from the standard U.S. instrument for measuring food security, the Household Food Security Survey Module (HFSSM) in the Current Population Survey (CPS). We utilize a Rasch measurement model to identify thresholds of very low food security among households and very low food security among children in the Three City Study that are comparable to thresholds from the HFSSM. We also use the Three City Study to empirically investigate the determinants of food insecurity and of these specific food insecurity outcomes, estimating a multivariate behavioral Rasch model that is adapted to address longitudinal data. The estimation results indicate that participation in the Supplemental Nutrition Assistance Program and the Temporary Assistance for Needy Families program reduce food insecurity, while poverty and disability among caregivers increase it. Besides its longitudinal structure, the Three City Study measures many more characteristics about households than the CPS. Our estimates reveal that financial assistance through social networks and a household's own financial assets reduce food insecurity, while its outstanding loans increase insecurity. PMID:29187764
Kastelein, Ronald A; Hoek, Lean; de Jong, Christ A F
2011-08-01
Helicopter long range active sonar (HELRAS), a "dipping" sonar system used by lowering transducer and receiver arrays into water from helicopters, produces signals within the functional hearing range of many marine animals, including the harbor porpoise. The distance at which the signals can be heard is unknown, and depends, among other factors, on the hearing sensitivity of the species to these particular signals. Therefore, the hearing thresholds of a harbor porpoise for HELRAS signals were quantified by means of a psychophysical technique. Detection thresholds were obtained for five 1.25 s simulated HELRAS signals, varying in their harmonic content and amplitude envelopes. The 50% hearing thresholds for the different signals were similar: 76 dB re 1 μPa (broadband sound pressure level, averaged over the signal duration). The detection thresholds were similar to those found in the same porpoise for tonal signals in the 1-2 kHz range measured in a previous study. Harmonic distortion, which occurred in three of the five signals, had little influence on their audibility. The results of this study, combined with information on the source level of the signal, the propagation conditions and ambient noise levels, allow the calculation of accurate estimates of the distances at which porpoises can detect HELRAS signals.
Threshold groundwater ages and young water fractions estimated from 3H, 3He, and 14C
NASA Astrophysics Data System (ADS)
Kirchner, James; Jasechko, Scott
2016-04-01
It is widely recognized that a water sample taken from a running stream is not described by a single age, but rather by a distribution of ages. It is less widely recognized that the same principle holds true for groundwaters, as indicated by the commonly observed discordances between model ages obtained from different tracers (e.g., 3H vs 14C) in the same sample. Water age distributions are often characterized by their mean residence times (MRTs). However, MRT estimates are highly uncertain because they depend on the shape of the assumed residence time distribution (in particular on the thickness of the long-time tail), which is difficult or impossible to constrain with data. Furthermore, because MRTs are typically nonlinear functions of age tracer concentrations, they are subject to aggregation bias. That is, MRT estimates derived from a mixture of waters with different ages (and thus different tracer concentrations) will systematically underestimate the mixture's true mean age. Here, building on recent work with stable isotope tracers in surface waters [1-3], we present a new framework for using 3H, 3He and 14C to characterize groundwater age distributions. Rather than describing groundwater age distributions by their MRT, we characterize them by the fraction of the distribution that is younger or older than a threshold age. The threshold age that separates "young" from "old" water depends on the characteristics of the specific tracer, including its history of atmospheric inputs. Our approach depends only on whether a given slice of the age distribution is younger or older than the threshold age, but not on how much younger or older it is. Thus our approach is insensitive to the tails of the age distribution, and is therefore relatively unaffected by uncertainty in the distribution's shape. Here we show that concentrations of 3H, 3He, and 14C are almost linearly related to the fractions of water that are younger or older than specified threshold ages. These "young" and "old" water fractions are therefore immune to the aggregation bias that afflicts MRT estimates. They are also relatively insensitive to the shape of the assumed residence time distribution. We apply this approach to 3H and 14C measurements from ~5000 wells in ~200 aquifers around the world. Our results show that even very old groundwaters, with 14C ages of thousands of years, often contain significant amounts of much younger water, with a substantial fraction of their age distributions younger than ~100 years old. Thus despite being very old on average, these groundwaters may also be vulnerable to relatively recent contamination. [1] Kirchner J.W., Aggregation in environmental systems: Catchment mean transit times and young water fractions under hydrologic nonstationarity, Hydrology and Earth System Sciences, in press. [2] Kirchner J.W., Aggregation in environmental systems: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments, Hydrology and Earth System Sciences, in press. [3] Jasechko S., Kirchner J.W., Welker J.M., and McDonnell J.J., Substantial young streamflow in global rivers, Nature Geoscience, in press.
NASA Astrophysics Data System (ADS)
Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo
2017-06-01
Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.
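The threshold notion used above, a point where the fitted response flattens, can be sketched with a smoothing spline standing in for a GAM partial effect: fit chlorophyll against pycnocline depth and take the threshold where the fitted slope collapses toward zero. The data and the 10% slope criterion below are illustrative assumptions.

```python
# Threshold detection on a fitted smooth response (sketch).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
pycno = np.sort(rng.uniform(5, 120, 200))                 # pycnocline depth (m)
chl = np.where(pycno < 45, 2.5 - 0.04 * pycno, 0.7)       # flat response past 45 m
chl += 0.1 * rng.standard_normal(200)

spl = UnivariateSpline(pycno, chl, s=2.0)                 # smooth fitted response
slope = spl.derivative()(pycno)
flat = np.abs(slope) < 0.1 * np.abs(slope).max()          # "small response" zone
print("estimated threshold ~ %.0f m" % pycno[np.argmax(flat)])
```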
The concentration-response relation between air pollution and daily deaths.
Schwartz, J; Ballester, F; Saez, M; Pérez-Hoyos, S; Bellido, J; Cambra, K; Arribas, F; Cañada, A; Pérez-Boillos, M J; Sunyer, J
2001-01-01
Studies on three continents have reported associations between various measures of airborne particles and daily deaths. Sulfur dioxide has also been associated with daily deaths, particularly in Europe. Questions remain about the shape of those associations, particularly whether there are thresholds at low levels. We examined the association of daily concentrations of black smoke and SO2 with daily deaths in eight Spanish cities (Barcelona, Bilbao, Castellón, Gijón, Oviedo, Valencia, Vitoria, and Zaragoza) with different climates and different environmental and social characteristics. We used nonparametric smoothing to estimate the shape of the concentration-response curve in each city and combined those results using a metasmoothing technique developed by Schwartz and Zanobetti. We extended their method to incorporate random variance components. Black smoke had a nearly linear association with daily deaths, with no evidence of a threshold. A 10 μg/m^3 increase in black smoke was associated with a 0.88% increase in daily deaths (95% confidence interval, 0.56%-1.20%). SO2 had a less plausible association: Daily deaths increased at very low concentrations, but leveled off and then decreased at higher concentrations. These findings held in both one- and two-pollutant models and held whether we optimized our weather and seasonal model in each city or used the same smoothing parameters in each city. We conclude that the association with particle levels is more convincing than for SO2, and without a threshold. Linear models provide an adequate estimation of the effect of particulate air pollution on mortality at low to moderate concentrations. PMID:11675264
Is "No-Threshold" a "Non-Concept"?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
Projectile penetration into ballistic gelatin.
Swain, M V; Kieser, D C; Shah, S; Kieser, J A
2014-01-01
Ballistic gelatin is frequently used as a model for soft biological tissues that experience projectile impact. In this paper we investigate the response of a number of gelatin materials to the penetration of spherical steel projectiles (7 to 11 mm diameter) over a range of lower impact velocities (<120 m/s). The results of sphere penetration depth versus projectile velocity are found to be linear for all systems above a certain threshold velocity required for initiating penetration. The data for a specific material impacted with different diameter spheres were able to be condensed to a single curve when the penetration depth was normalised by the projectile diameter. When the results are compared with a number of predictive relationships available in the literature, it is found that over the range of projectiles and compositions used, the results fit a simple relationship that takes into account the projectile diameter, the threshold velocity for penetration into the gelatin and a value of the shear modulus of the gelatin estimated from the threshold velocity for penetration. The normalised depth is found to fit the elastic Froude number when this is modified to allow for a threshold impact velocity. The normalised penetration data are found to best fit this modified elastic Froude number with a slope of 1/2 instead of 1/3 as suggested by Akers and Belmonte (2006). Possible explanations for this difference are discussed. © 2013 Published by Elsevier Ltd.
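The linear depth-velocity relation above a threshold velocity can be sketched as a regression of diameter-normalised penetration depth on impact velocity, whose x-intercept estimates the threshold; the data below are synthetic, not the paper's measurements.

```python
# Threshold-velocity estimate from normalised penetration depth (sketch).
import numpy as np

v = np.linspace(30, 120, 15)                  # impact velocities (m/s)
v_th_true, slope_true, D = 20.0, 0.012, 0.009 # assumed truth; D = sphere diameter (m)
depth = slope_true * (v - v_th_true) * D      # linear above threshold
depth += 0.0005 * np.random.randn(v.size)     # measurement noise

a, b = np.polyfit(v, depth / D, 1)            # normalised depth vs velocity
print("estimated threshold velocity: %.1f m/s" % (-b / a))  # x-intercept
```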
Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J
2005-12-01
We assessed reproducibility of measurements of rectal compliance and sensation in health in studies conducted at two centres. We estimated samples size necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using ascending method of limits and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) in random order. Results obtained at the two centres differed minimally. Reproducibility of sensory end points varies with type of sensation, pressure level and method of distension. Pressure threshold for pain and sensory ratings for non-painful sensations at 36 and 48 mmHg distension were most reproducible in the two centres. Sample size calculations suggested that crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but in a two-centre study, differences in observed results of sensation are minimal and pressure threshold for pain and sensory ratings at 36-48 mmHg of distension are reproducible.
Visuomotor sensitivity to visual information about surface orientation.
Knill, David C; Kersten, Daniel
2004-03-01
We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
Vandeleur, C L; Rothen, S; Lustenberger, Y; Glaus, J; Castelao, E; Preisig, M
2015-01-15
The use of the family history method is recommended in family studies as a type of proxy interview of non-participating relatives. However, using different sources of information can result in bias as direct interviews may provide a higher likelihood of assigning diagnoses than family history reports. The aims of the present study were to: (1) compare diagnoses for threshold and subthreshold mood syndromes from interviews to those relying on information from relatives; (2) test the appropriateness of lowering the diagnostic threshold and combining multiple reports from the family history method to obtain comparable prevalence estimates to the interviews; (3) identify factors that influence the likelihood of agreement and reporting of disorders by informants. Within a family study, 1621 informant-index subject pairs were identified. DSM-5 diagnoses from direct interviews of index subjects were compared to those derived from family history information provided by their first-degree relatives. (1) Inter-informant agreement was acceptable for Mania, but low for all other mood syndromes. (2) Except for Mania and subthreshold depression, the family history method provided significantly lower prevalence estimates. The gap improved for all other syndromes after lowering the threshold of the family history method. (3) Individuals who had a history of depression themselves were more likely to report depression in their relatives. Limitations include the low proportion of individuals affected by manic syndromes and the lack of independence of the data. The higher likelihood of reporting disorders by affected informants entails the risk of overestimation of the size of familial aggregation of depression. Copyright © 2014 Elsevier B.V. All rights reserved.
On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Li, Junran; Flagg, Cody; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne
2015-12-01
Current approaches to estimate the threshold friction velocity (TFV) of soil particle movement, including both experimental and empirical methods, suffer from various disadvantages, and they are particularly ineffective for estimating TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust, etc.); however, no studies have attempted to relate soil TFV directly to soil spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near infrared (VIS-NIR, 350-2500 nm) spectral region, and to identify the best range of wavelengths or combinations of wavelengths to predict TFV. The threshold friction velocities of 31 soils, along with their reflectance spectra and texture, were measured in the Mojave Desert, California and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400-700 nm) and the short-wavelength infrared (SWIR) area (1100-2500 nm). A partial least squares regression (PLSR) analysis confirmed the significant bands that were identified by the correlation analysis. The PLSR further identified the strong relationship between the first-difference transformation and TFV at several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectral range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or for mapping TFV using airborne/satellite sensors.
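The PLSR step can be sketched with scikit-learn's PLSRegression on first-difference spectra; the spectra, TFV values, and component count below are placeholders, not the Mojave/Moab measurements.

```python
# PLSR on first-difference reflectance spectra to flag key wavelengths (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
wavelengths = np.arange(350, 2501)             # VIS-NIR range (nm)
spectra = rng.random((31, wavelengths.size))   # 31 soils x reflectance (synthetic)
tfv = rng.uniform(0.2, 0.6, 31)                # threshold friction velocities (m/s)

X = np.diff(spectra, axis=1)                   # first-difference transformation
pls = PLSRegression(n_components=5).fit(X, tfv)
weights = np.abs(pls.coef_).ravel()            # large coefficients flag key bands
top = wavelengths[1:][np.argsort(weights)[-17:]]
print("17 most informative wavelengths (nm):", np.sort(top))
```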
Macular Pigment Optical Density Measured by Heterochromatic Modulation Photometry
Huchzermeyer, Cord; Schlomberg, Juliane; Welge-Lüssen, Ulrich; Berendschot, Tos T. J. M.; Pokorny, Joel; Kremers, Jan
2014-01-01
Purpose: To psychophysically determine macular pigment optical density (MPOD) employing the heterochromatic modulation photometry (HMP) paradigm by estimating 460 nm absorption at central and peripheral retinal locations. Methods: For the HMP measurements, two lights (B: 460 nm and R: 660 nm) were presented in a test field and were modulated in counterphase at medium or high frequencies. The contrasts of the two lights were varied in tandem to determine flicker detection thresholds. Detection thresholds were measured for different R:B modulation ratios. The modulation ratio with minimal sensitivity (maximal threshold) is the point of equiluminance. Measurements were performed in 25 normal subjects (11 male, 14 female; age: 30±11 years, mean ± sd) using an eight-channel LED stimulator with Maxwellian view optics. The results were compared with those from two published techniques – one based on heterochromatic flicker photometry (Macular Densitometer) and the other on fundus reflectometry (MPR). Results: We were able to estimate MPOD with HMP using a modified theoretical model that was fitted to the HMP data. The resultant MPODHMP values correlated significantly with the MPODMPR values and with the MPODHFP values obtained at 0.25° and 0.5° retinal eccentricity. Conclusions: HMP is a flicker-based method with measurements taken at a constant mean chromaticity and luminance. The data can be well fit by a model that allows all data points to contribute to the photometric equality estimate. Therefore, we think that HMP may be a useful method for MPOD measurements, in basic and clinical vision experiments. PMID:25354049
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.
Ellner, Stephen P; Holmes, Elizabeth E
2008-08-01
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern with previous theory predicting wide confidence intervals. We extend previous theory concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction-interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
A geographic analysis of population density thresholds in the influenza pandemic of 1918-19.
Chandra, Siddharth; Kassens-Noor, Eva; Kuljanin, Goran; Vertalka, Joshua
2013-02-20
Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918-19 in India, where over 15 million people died in the short span of less than one year. Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918-19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold.
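The sample-splitting estimator can be sketched as a grid search over candidate density thresholds, keeping the split that minimizes the pooled squared error; the paper's full growth model is reduced here to a two-mean comparison, and the data are synthetic, built around the reported values.

```python
# Sample-splitting search for a population-density threshold (sketch).
import numpy as np

rng = np.random.default_rng(4)
density = rng.uniform(20, 800, 199)                     # people per sq. mile
loss = np.where(density < 175, 3.72, 4.69)              # % population loss
loss = loss + 0.5 * rng.standard_normal(199)            # noise

def sse_for(threshold):
    lo, hi = loss[density < threshold], loss[density >= threshold]
    if lo.size < 5 or hi.size < 5:                      # need data on both sides
        return np.inf
    return ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()

grid = np.linspace(50, 600, 551)
best = grid[np.argmin([sse_for(t) for t in grid])]
print("estimated density threshold: %.0f per sq. mile" % best)
```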
An adaptive threshold detector and channel parameter estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Arabshahi, P.; Mukai, R.; Yan, T. -Y.
2001-01-01
This paper presents a method for optimal adaptive setting of pulse-position-modulation detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
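The proposed operating point is easy to compute from an empirical ROC curve: pick the threshold where the curve crosses the descending diagonal, i.e. where TPR = 1 - FPR (sensitivity equals specificity). A minimal sketch with synthetic scores:

```python
# Minimax threshold: intersection of the ROC curve with the descending diagonal.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
labels = np.r_[np.zeros(500), np.ones(500)].astype(int)
scores = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)]

fpr, tpr, thresholds = roc_curve(labels, scores)
i = np.argmin(np.abs(tpr - (1.0 - fpr)))        # closest point to the diagonal
print("minimax threshold: %.3f, accuracy ~ %.3f" % (thresholds[i], 1 - fpr[i]))
```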
A Progressive Black Top Hat Transformation Algorithm for Estimating Valley Volumes from DEM Data
NASA Astrophysics Data System (ADS)
Luo, W.; Pingel, T.; Heo, J.; Howard, A. D.
2013-12-01
The amount of valley incision and valley volume are important parameters in geomorphology and hydrology research, because they are related to the amount of erosion (and thus the volume of sediments) and the amount of water needed to create the valley. This is the case not only for terrestrial research but also for planetary research, such as figuring out how much water was once on Mars. With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. However, previous studies typically use a single structuring element size for extracting the valley feature and a single threshold value for removing noise, resulting in finer features such as tributaries not being extracted and in underestimation of valley volume. Inspired by similar algorithms used in LiDAR data analysis to separate above-ground features from bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the structuring element size is progressively increased to extract valleys of different orders. In addition, a slope-based threshold was introduced to automatically adjust the threshold values for structuring elements of different sizes. Connectivity and shape parameters of the masked regions were used to keep the long linear valleys while removing other smaller non-connected regions. Preliminary application of the PBTH to the Grand Canyon and two sites on Mars has produced promising results. More testing and fine-tuning are in progress. The ultimate goal of the project is to apply the algorithm to estimate the volume of valley networks on Mars and the volume of water needed to form the valleys we observe today, and thus to infer the nature of the hydrologic cycle on early Mars. The project is funded by NASA's Mars Data Analysis program.
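The progressive idea can be sketched with SciPy's grey-scale morphology: apply the black top hat with successively larger structuring elements, threshold each result, and accumulate the union as a valley-depth map. The slope-based threshold of the paper is simplified here to a size-scaled depth cutoff, and the DEM is synthetic; this is an illustration, not the authors' implementation.

```python
# Progressive black top hat for valley depth and volume (sketch).
import numpy as np
from scipy import ndimage

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
dem = 100 * (1 + 0.1 * x) - 20 * np.exp(-(y / 0.05) ** 2)  # one incised valley

valley_depth = np.zeros_like(dem)
for size in (5, 11, 21, 41):                     # progressively larger elements
    bth = ndimage.black_tophat(dem, size=size)   # highlights dark (low) features
    keep = bth > 0.05 * size                     # size-scaled noise cutoff
    valley_depth = np.maximum(valley_depth, np.where(keep, bth, 0.0))

cell_area = 1.0                                  # assumed m^2 per DEM cell
print("estimated valley volume: %.0f m^3" % (valley_depth.sum() * cell_area))
```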
Norwich, K H
2001-10-01
One can relate the saltiness of a solution of a given substance to the concentration of the solution by means of one of the well-known psychophysical laws. One can also compare the saltiness of solutions of different solutes which have the same concentration, since different substances are intrinsically more salty or less salty. We develop here an equation that relates saltiness both to the concentration of the substance (psychophysical) and to a distinguishing physical property of the salt (intrinsic). For a fixed standard molar entropy of the salt being tasted, the equation simplifies to Fechner's law. When one allows for the intrinsic 'noise' in the chemoreceptor, the equation generalizes to include Stevens's law, with corresponding decrease in the threshold for taste. This threshold reduction exemplifies the principle of stochastic resonance. The theory is validated with reference to experimental data.
Imaging shear strength along subduction faults
Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.
2017-01-01
Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.
Sazykina, Tatiana G; Kryshev, Alexander I
2016-12-01
Lower threshold dose rates and confidence limits are quantified for lifetime radiation effects in mammalian animals from internally deposited alpha-emitting radionuclides. Extensive datasets on effects from internal alpha-emitters are compiled from the International Radiobiological Archives. In total, the compiled database includes 257 records, which are analyzed by means of non-parametric order statistics. The generic lower threshold for alpha-emitters in mammalian animals (combined datasets) is 6.6·10^-5 Gy day^-1. Thresholds for individual alpha-emitting elements differ considerably: plutonium and americium - 2.0·10^-5 Gy day^-1; radium - 2.1·10^-4 Gy day^-1. The threshold for chronic low-LET radiation was previously estimated at 1·10^-3 Gy day^-1. For low exposures, the following values of the alpha radiation weighting factor w_R for internally deposited alpha-emitters in mammals are quantified: w_R(α) = 15 as a generic value for the whole group of alpha-emitters; w_R(Pu) = 50 for plutonium; w_R(Am) = 50 for americium; w_R(Ra) = 5 for radium. These values are proposed to serve as radiation weighting factors in calculations of equivalent doses to non-human biota. The lower threshold dose rate for long-lived mammals (dogs) is significantly lower than the threshold for short-lived mammals (mice): 2.7·10^-5 Gy day^-1 and 2.0·10^-4 Gy day^-1, respectively. The difference in thresholds exactly reflects the relationship between the natural longevity of these two species. A graded scale of severity of lifetime radiation effects in mammals is developed, based on the compiled datasets. When placed on the severity scale, the effects of internal alpha-emitters fall in zones of considerably lower dose rates than effects of the same severity caused by low-LET radiation. RBE values, calculated for effects of equal severity, are found to depend on the intensity of chronic exposure: different RBE values are characteristic of low, moderate, and high lifetime exposures (30, 70, and 13, respectively). The results of the study provide a basis for selecting correct values of radiation weighting factors in dose assessments for non-human biota. Copyright © 2016 Elsevier Ltd. All rights reserved.
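A quick worked check of the quoted weighting factors: each w_R is, to rounding, the ratio of the low-LET threshold (1·10^-3 Gy day^-1) to the corresponding alpha-emitter threshold.

```python
# Weighting factors as threshold ratios, reproducing the values quoted above.
low_let = 1e-3                                   # Gy/day, chronic low-LET threshold
alpha_thresholds = {"all alpha": 6.6e-5, "Pu": 2.0e-5, "Am": 2.0e-5, "Ra": 2.1e-4}
for name, thr in alpha_thresholds.items():
    print(f"w_R({name}) ~ {low_let / thr:.0f}")  # -> ~15, 50, 50, 5
```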
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·ml^-1, but the true value was 240 ng·ml^-1.
William H. Cooke; Dennis M. Jacobs
2005-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
Temporal development of extreme precipitation in Germany projected by EURO-CORDEX simulations
NASA Astrophysics Data System (ADS)
Brendel, Christoph; Deutschländer, Thomas
2017-04-01
A sustainable operation of transport infrastructure requires an enhanced resilience to the increasing impacts of climate change and related extreme meteorological events. To meet this challenge, the German Federal Ministry of Transport and Digital Infrastructure (BMVI) commenced a comprehensive national research program on safe and sustainable transport in Germany. A network of departmental research institutes addresses the "Adaptation of the German transport infrastructure towards climate change and extreme events". Various studies have already identified an increase in the average global precipitation for the 20th century. There is some indication that these increases are most visible in a rising frequency of precipitation extremes. However, the changes are highly variable between regions and seasons. With a further increase of atmospheric greenhouse gas concentrations in the 21st century, the likelihood of occurrence of such extreme events will continue to rise. A kernel estimator has been used in order to obtain a robust estimate of the temporal development of extreme precipitation events projected by an ensemble of EURO-CORDEX simulations. The kernel estimator measures the intensity of the Poisson point process, indicating temporal changes in the frequency of extreme events. Extreme precipitation events were selected using the peaks over threshold (POT) method with the 90th, 95th and 99th quantiles of daily precipitation sums as thresholds. Application of this non-parametric approach with relative thresholds renders the use of a bias correction non-mandatory. In addition, in comparison to fitting an extreme value theory (EVT) distribution, the method is completely unsusceptible to outliers. First results show an overall increase of extreme precipitation events for Germany until the end of the 21st century. However, major differences between seasons, quantiles and the three different Representative Concentration Pathways (RCP 2.6, 4.5, and 8.5) have been identified. For instance, the frequency of extreme precipitation events more than triples in the most extreme scenario. Regional differences are rather small, with the largest increase in northern Germany, particularly in coastal regions, and the weakest increase in the most southern parts of Germany.
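The POT-plus-kernel approach can be sketched as follows: select days exceeding a quantile threshold and smooth the exceedance times with a Gaussian kernel to estimate how their yearly frequency evolves. The daily series below is synthetic with a built-in trend; the bandwidth is an assumption.

```python
# Peaks-over-threshold selection plus kernel intensity estimate (sketch).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
years = np.linspace(2006, 2100, 95 * 365)                # daily time axis (years)
precip = rng.gamma(0.4, 4.0, years.size) * (1 + 0.01 * (years - 2006))

threshold = np.quantile(precip, 0.95)                    # relative (quantile) threshold
event_times = years[precip > threshold]                  # peaks over threshold

kde = gaussian_kde(event_times, bw_method=0.1)           # smooth event occurrence
intensity = kde(np.array([2020.0, 2060.0, 2095.0])) * event_times.size
print("events per year around 2020/2060/2095:", np.round(intensity, 1))
```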
Petošić, Antonio; Horvat, Marko; Režek Jambrak, Anet
2017-11-01
The paper reports and compares the results of the electromechanical, acoustical and thermodynamical characterization of a low-frequency sonotrode-type ultrasonic device inside a small sonoreactor, immersed in three different loading media (water, juice and milk) and excited at different levels, both below and above the cavitation threshold. The electroacoustic efficiency factor determined at system resonance through electromechanical characterization in degassed water as the reference medium is 88.7% for the device in question. This efficiency can be reduced by up to a factor of three owing to the existence of a complex sound field in the reactor in linear driving conditions below the cavitation threshold. The behaviour of the system is more stable at higher excitation levels than in linear operating conditions. During acoustical characterization, acoustic pressure is spatially averaged, both below and above the cavitation threshold. The standing wave patterns inside the sonoreactor have a stronger influence on the variation of the spatially distributed RMS pressure in linear operating conditions: a variation of ±1.7 dB was obtained there, compared to ±1.4 dB in the highly nonlinear regime. The acoustic power in the sonoreactor was estimated from the magnitude of the averaged RMS pressure, and from the reverberation time of the sonoreactor as the representation of the losses. The electroacoustic efficiency factors obtained through acoustical and electromechanical characterization are in very good agreement at low excitation levels. The irradiated acoustic power estimated in nonlinear conditions differs from the dissipated acoustic power determined with the calorimetric method by several orders of magnitude. The number of negative pressure peaks that represent transient cavitation decreases over time during longer treatments of a medium with high-power ultrasound, and decreases faster when the medium and the vessel are allowed to heat up.
Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques
NASA Astrophysics Data System (ADS)
Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.
2016-12-01
Estimates of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and forecasting crop yield. Growth stage has traditionally been estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day-length information to estimate accumulated heat and photoperiod, with thresholds used to determine when each stage is most likely. These approaches, however, are crop- and hybrid-dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time-series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. NDVI time series provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values, or smoothing these values and using thresholds to detect changes in slope that indicate rapidly increasing or decreasing 'greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time-series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine meteorological approaches with remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
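A minimal sketch of the smoothing-plus-slope-threshold idea for NDVI time series described above; the synthetic series, the Savitzky-Golay window and the slope threshold of 0.004 NDVI/day are illustrative assumptions, and in practice such settings must be tuned per crop and compositing interval.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic 16-day composite NDVI for one season (invented values).
doy = np.arange(1, 366, 16)
ndvi = 0.2 + 0.5 / (1 + np.exp(-(doy - 140) / 15)) * np.exp(-((doy - 200) / 90) ** 2)
ndvi += np.random.default_rng(2).normal(0, 0.02, doy.size)  # cloud-related noise

smooth = savgol_filter(ndvi, window_length=7, polyorder=2)  # de-noise the series
slope = np.gradient(smooth, doy)                            # d(NDVI)/d(day)

green_up = doy[np.argmax(slope > 0.004)]   # first composite where slope exceeds threshold
print("estimated green-up day of year:", green_up)
```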
Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.
2014-01-01
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle to forming the parameter covariance matrix, which is the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to estimate the location of the interaction threshold more precisely, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to define the interaction threshold more precisely while maintaining the characteristics deemed important in practice. PMID:22640366
Poverty dynamics, poverty thresholds and mortality: An age-stage Markovian model
Rehkopf, David; Tuljapurkar, Shripad; Horvitz, Carol C.
2018-01-01
Recent studies have examined the risk of poverty throughout the life course, but few have considered how transitioning in and out of poverty shapes the dynamic heterogeneity and mortality disparities of a cohort at each age. Here we use state-by-age modeling to capture individual heterogeneity in crossing one of three different poverty thresholds (defined as 1×, 2× or 3× the "official" poverty threshold) at each age. We examine age-specific state structure, the remaining life expectancy and its variance, and cohort simulations for those above and below each threshold. Survival and transition probabilities are statistically estimated by regression analyses of data from the Health and Retirement Survey RAND dataset and the National Longitudinal Survey of Youth. Using the results of these regression analyses, we parameterize discrete-state, discrete-age matrix models. We found that individuals above all three thresholds have higher annual survival than those in poverty, especially from mid-ages to about age 80. The advantage is greatest when we classify individuals based on 1× the "official" poverty threshold. The greatest discrepancy in average remaining life expectancy and its variance between those above and in poverty occurs at mid-ages for all three thresholds, and fewer individuals are in poverty between ages 40 and 60 for all three thresholds. Our findings are consistent with results based on other data sets, but also suggest that dynamic heterogeneity in poverty and the transience of the poverty state are associated with income-related mortality disparities (less transience, especially among those above poverty, means greater disparities). This paper applies the approach of age-by-stage matrix models to human demography and individual poverty dynamics, extending the literature on individual poverty dynamics across the life course. PMID:29768416
A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors
Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael
2015-01-01
The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor's temperature threshold must be estimated, which in turn depends on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model for estimating temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model proposed here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830
Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.
Nesti, Alessandro; de Winkel, Ksander; Bülthoff, Heinrich H
2017-01-01
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimuli influences performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (the differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. The observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
Gender differences in developmental dyscalculia depend on diagnostic criteria
Devine, Amy; Soltész, Fruzsina; Nobes, Alison; Goswami, Usha; Szűcs, Dénes
2013-01-01
Developmental dyscalculia (DD) is a learning difficulty specific to mathematics learning. The prevalence of DD may be equivalent to that of dyslexia, posing an important challenge for effective educational provision. Nevertheless, there is no agreed definition of DD, and there are controversies surrounding cutoff decisions, specificity and gender differences. In the current study, 1004 British primary school children completed mathematics and reading assessments. The prevalence of DD and the gender ratio were estimated in this sample using different criteria. When using absolute thresholds, the prevalence of DD was the same for both genders regardless of the cutoff criteria applied; however, gender differences emerged when using a mathematics-reading discrepancy definition. Correlations between mathematics performance and the control measures selected to identify a specific learning difficulty affect both prevalence estimates and whether a gender difference is in fact identified. Educational implications are discussed. PMID:27667904
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy for dealing with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance to their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performance and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives, and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance to the best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with this expectation, when we used this threshold for the identification of 188 independently collected tephritids, fewer than 5% of queries with a distance to the best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off defining whether we can proceed with identifying the query at a known estimated error probability (e.g. 5%) or should instead discard the query and consider alternative or complementary identification methods. PMID:22359600
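A compact sketch of the ad hoc threshold calibration described above, with simulated best-match distances standing in for leave-one-out queries against a reference library; the distance distribution and the error model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated best-match distances (%) and correctness flags for library queries;
# in practice these come from querying each barcode against the rest of the library.
dist = rng.gamma(2.0, 0.8, 1000)
p_correct = 1 / (1 + np.exp((dist - 4.0) / 0.8))      # farther matches err more often
correct = rng.random(1000) < p_correct

thresholds = np.linspace(0.5, 6.0, 23)
rel_err = np.array([
    1 - correct[dist <= th].mean() if (dist <= th).any() else 0.0
    for th in thresholds
])

# Linear regression of relative identification error on threshold (as in the study),
# then solve err(threshold) = 0.05 for the ad hoc threshold.
b1, b0 = np.polyfit(thresholds, rel_err, 1)
th_hoc = (0.05 - b0) / b1
print("ad hoc distance threshold for ~5%% error: %.2f" % th_hoc)
```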
Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.
Gilmour, Heather
2016-11-16
Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear whether GAD is uniquely associated with the risk of suicide, or whether psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older, and the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. For people with threshold GAD, past-12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2%, respectively). In multivariate models that controlled for sociodemographic factors, the odds of past-12-month suicide ideation among people with either past-12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled for, associations between threshold and subthreshold GAD and suicidal ideation were attenuated, but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.
Zemek, Allison; Garg, Rohit; Wong, Brian J. F.
2014-01-01
Objectives/Hypothesis: Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young's modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design: Descriptive, focus group survey. Methods: Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.36 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) was asked to arrange the phantoms in order of increasing stiffness. They were then asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results: For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accurate to within 10%). The specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions: There was little inter- and intra-rater variation in the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
On the soft-gluon resummation in top quark pair production at hadron colliders
NASA Astrophysics Data System (ADS)
Czakon, M.; Mitov, A.
2009-09-01
We uncover a contribution to the NLO/NLL threshold resummed total cross section for top quark pair production at hadron colliders, which has not been taken into account in earlier literature. We derive this contribution - the difference between the singlet and octet hard (matching) coefficients - in exact analytic form. The numerical impact of our findings on the Sudakov resummed cross section turns out to be large, and comparable in size to the current estimates for the theoretical uncertainty of the total cross section. A rough estimate points toward a few percent decrease of the latter at the LHC.
NASA Technical Reports Server (NTRS)
Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.
2009-01-01
This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic to leverage operational experience data by aggregating ORU, associated function (vehicle sub-system), and vehicle 'effective' k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements to the heuristic methods and of potential benefits to ISS supportability engineering analysis.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
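A small sketch of the two-step cluster-mass thresholding described above, applied to a toy STA; the gain and cluster-mass cutoffs here are fixed constants for illustration, whereas in the study both are derived from chance distributions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
sta = rng.normal(0, 1, (48, 64))              # toy z-scored STA (frequency x time)
sta[20:26, 30:40] += 2.5                      # embedded excitatory subfield

mask = np.abs(sta) > 2.0                      # step 1: pixelwise gain threshold

labels, n = ndimage.label(mask)               # contiguous suprathreshold clusters
mass = ndimage.sum(np.abs(sta), labels, index=np.arange(1, n + 1))

keep_ids = np.flatnonzero(mass > 30.0) + 1    # step 2: cluster-mass threshold
strf = np.where(np.isin(labels, keep_ids), sta, 0.0)

print("clusters found: %d, clusters kept: %d" % (n, keep_ids.size))
```

Isolated noise pixels survive step 1 with some regularity but rarely form high-mass clusters, which is why the second step improves predictive validity.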
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in Scullard and Ziff (J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621.... These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, whose root in [0, 1] gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
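The "root in [0, 1] of an integer-coefficient polynomial" recipe can be illustrated with the one classic case where the polynomial is known exactly: bond percolation on the triangular lattice, where p_c satisfies p^3 - 3p + 1 = 0. The sketch below is illustrative; the (4, 6, 12) and (3^4, 6) polynomials of the paper are of higher order but are solved the same way.

```python
import math
from scipy.optimize import brentq

# Triangular-lattice bond percolation: p_c is the root of p^3 - 3p + 1 in [0, 1].
poly = lambda p: p**3 - 3*p + 1

pc = brentq(poly, 0.0, 1.0)                    # sign change on [0, 1] -> unique root
print("p_c(triangular) = %.6f" % pc)           # 0.347296...
print("exact 2*sin(pi/18) = %.6f" % (2 * math.sin(math.pi / 18)))
```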
Dependence of cavitation, chemical effect, and mechanical effect thresholds on ultrasonic frequency.
Thanh Nguyen, Tam; Asakura, Yoshiyuki; Koda, Shinobu; Yasuda, Keiji
2017-11-01
Cavitation, chemical effect, and mechanical effect thresholds were investigated over a wide frequency range from 22 to 4880 kHz. Each threshold was measured in terms of sound pressure at the fundamental frequency. Broadband noise emitted from acoustic cavitation bubbles was detected by a hydrophone to determine the cavitation threshold. Potassium iodide oxidation caused by acoustic cavitation was used to quantify the chemical effect threshold. Ultrasonic erosion of aluminum foil was used to estimate the mechanical effect threshold. The cavitation, chemical effect, and mechanical effect thresholds increased with increasing frequency. The chemical effect threshold was close to the cavitation threshold at all frequencies. At low frequencies, below 98 kHz, the mechanical effect threshold was nearly equal to the cavitation threshold; at high frequencies, however, the mechanical effect threshold was considerably higher than the cavitation threshold. In addition, the thresholds of the second harmonic and the first ultraharmonic signals were measured to detect bubble occurrence. The threshold of the second harmonic approximated the cavitation threshold below 1000 kHz. On the other hand, the threshold of the first ultraharmonic was higher than the cavitation threshold below 98 kHz and close to the cavitation threshold at high frequencies.
NASA Astrophysics Data System (ADS)
Basiladze, S. G.
2017-05-01
The paper describes a general physical theory of signals, the carriers of information, which supplements Shannon's abstract classical theory and is applicable in much broader fields, including nuclear physics. It is shown that, in the absence of classical noise, its place should be taken by the physical threshold of signal perception, for objects of both the macrocosm and the microcosm. The signal perception threshold allows the presence of subthreshold (virtual) signal states. For these states, the Boolean algebra of logic (A = 0/1) is transformed into an "algebraic logic" of probabilities (0 ≤ a ≤ 1). The similarity and difference of virtual states of macro- and microsignals are elucidated. "Real" and "quantum" information for computers is considered briefly. The maximum information transmission rate is estimated based on physical constants.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method is also proposed for using pattern-sensitivity measurements to estimate the initial linear transformation, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane
2016-08-01
This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pre-tax/pre-transfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time. PMID:27352076
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method and the Nonparametric Bootstrap, both frequentist methods; the Generalized Linear Model (GLM); and Markov Chain Monte Carlo Posterior Estimation, a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
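A minimal sketch of one of the techniques compared above, the nonparametric bootstrap, applied to a simulated group test: subjects are resampled with replacement, a logistic psychometric function is refit, and the spread of the refit 50%-points gives the uncertainty of the group threshold. The function form, signal levels and subject count are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
levels = np.array([20., 25., 30., 35., 40., 45.])        # predetermined levels, dB
n_subj, true_thr = 40, 33.0
# Simulated detection responses (True = detected) per subject and level.
p_true = 1 / (1 + np.exp(-(levels - true_thr) / 3.0))
resp = rng.random((n_subj, levels.size)) < p_true

def psychometric(x, thr, slope):
    return 1 / (1 + np.exp(-(x - thr) / slope))

def fit_threshold(detect_rate):
    popt, _ = curve_fit(psychometric, levels, detect_rate, p0=[30.0, 3.0])
    return popt[0]                                        # the 50%-point threshold

boot = []
for _ in range(500):                                      # resample subjects
    idx = rng.integers(0, n_subj, n_subj)
    boot.append(fit_threshold(resp[idx].mean(axis=0)))

lo, hi = np.percentile(boot, [2.5, 97.5])
print("group threshold %.1f dB, 95%% CI [%.1f, %.1f]" % (np.mean(boot), lo, hi))
```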
Deep convolutional neural network for mammographic density segmentation
NASA Astrophysics Data System (ADS)
Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.
2018-02-01
Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area, based on the probability of each pixel belonging to the dense or fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation (r = 0.96) between the DCNN estimation and interactive segmentation by radiologists, whereas the feature-based statistical learning approach correlated with the radiologists' segmentation at r = 0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists (p < 0.0001, two-tailed paired t-test). This study demonstrated that the DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
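The PD computation itself reduces to a few lines once a probability map and a breast mask are available; a toy sketch with random values standing in for the DCNN output:

```python
import numpy as np

rng = np.random.default_rng(6)
pmd = rng.random((256, 256))                 # toy probability-of-density map
breast_mask = np.zeros((256, 256), bool)
breast_mask[32:224, 32:224] = True           # toy breast segmentation

dense = (pmd > 0.5) & breast_mask            # decision threshold of 0.5
pd_percent = 100.0 * dense.sum() / breast_mask.sum()
print("percent density: %.1f%%" % pd_percent)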
Physician and patient willingness to pay for electronic cardiovascular disease management.
Deal, Ken; Keshavjee, Karim; Troyan, Sue; Kyba, Robert; Holbrook, Anne Marie
2014-07-01
Cardiovascular disease (CVD) is an important target for electronic decision support. We examined the potential sustainability of an electronic CVD management program using a discrete choice experiment (DCE). Our objective was to estimate physician and patient willingness-to-pay (WTP) for the current and enhanced programs. Focus groups, expert input and literature searches determined the attributes to be evaluated for the physician and patient DCEs, which were carried out using a Web-based program. Hierarchical Bayes analysis estimated preference coefficients for each respondent, and latent class analysis segmented each sample. Simulations were used to estimate WTP for each of the attributes individually and for an enhanced vascular management system. A total of 144 participants (70 physicians, 74 patients) completed the DCE. Overall, access speed to updated records and monthly payments for a nurse coordinator were the main determinants of physician choices. Two distinctly different segments of physicians were identified: one very sensitive to the monthly subscription fee and the speed of updating the tracker with new patient data, the other very sensitive to the monthly cost of the nurse coordinator and government billing incentives. Patient choices were most significantly influenced by the yearly subscription cost. The estimated physician WTP was slightly above the estimated threshold for sustainability, while the patient WTP was below it. Current willingness to pay for electronic cardiovascular disease management should encourage innovation to provide economies of scale in program development, delivery and maintenance to meet sustainability thresholds.
Melchert, O; Katzgraber, Helmut G; Novotny, M A
2016-04-01
We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
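A stripped-down sketch of the union-find machinery used in such simulations, applied to plain square-lattice bond percolation rather than the chimera topology of the paper; the square lattice is convenient because its bond threshold is exactly 1/2, so the spanning fraction should flip around p = 0.5. Lattice size and replicate counts are arbitrary.

```python
import numpy as np

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]        # path halving
        x = parent[x]
    return x

def spans(L, p, rng):
    """True if open bonds connect the top row to the bottom row of an L x L grid."""
    parent = list(range(L * L))
    for r in range(L):
        for c in range(L):
            i = r * L + c
            if c + 1 < L and rng.random() < p:      # bond to the right
                parent[find(parent, i)] = find(parent, i + 1)
            if r + 1 < L and rng.random() < p:      # bond downward
                parent[find(parent, i)] = find(parent, i + L)
    top = {find(parent, c) for c in range(L)}
    return any(find(parent, (L - 1) * L + c) in top for c in range(L))

rng = np.random.default_rng(7)
for p in (0.45, 0.50, 0.55):                 # exact square-lattice threshold is 0.5
    frac = np.mean([spans(64, p, rng) for _ in range(50)])
    print("p = %.2f  spanning fraction = %.2f" % (p, frac))
```

A finite-size scaling analysis, as in the paper, then locates the threshold from how such spanning curves sharpen with increasing L.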
Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E
2015-03-01
Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
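A greatly simplified, conjugate version of this workflow (a Gamma-Poisson model instead of the paper's zero-inflated MCMC models, with invented numbers): posterior-predictive draws of annual deaths give the probability of exceeding a PBR-type threshold directly.

```python
import numpy as np

rng = np.random.default_rng(8)
# Illustrative counts: deaths seen in the observed fraction of fishing effort.
observed_deaths = 2
observed_sets = 4000          # observed fishing sets over the record
sets_per_year = 1000          # whole-fishery annual effort

# Gamma(a0, b0) prior on deaths per set; Gamma posterior by Poisson conjugacy.
a0, b0 = 0.5, 1.0
rate = rng.gamma(a0 + observed_deaths, 1.0 / (b0 + observed_sets), size=100_000)

annual = rng.poisson(rate * sets_per_year)    # posterior-predictive annual deaths

pbr = 0.113                                   # regulatory threshold (PBR)
print("expected annual deaths: %.2f" % annual.mean())
print("P(annual deaths > PBR) = %.2f" % (annual > pbr).mean())
```

The exceedance probability, rather than a point estimate, is what drives the low/medium/high bycatch-impact classification.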
Satellite rainfall retrieval by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.
1986-01-01
The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in the form of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome of the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the threshold, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rainfield models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and the lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration, whose solutions are lognormally distributed in some asymptotic limits, has also been developed.
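A small sketch of the threshold-varying logistic scheme described above, on synthetic data; the covariates, threshold grid and the exceedance-curve integration used to recover the mean (E[R] = ∫ P(R > t) dt for a nonnegative rainrate) are illustrative assumptions, not the GATE calibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
frac_area = rng.random(n)                           # fractional rain area covariate
tb = 280 - 40 * frac_area + rng.normal(0, 5, n)     # brightness-temperature proxy
rain = rng.gamma(2.0, 2.0 * frac_area + 0.1)        # "true" pixel rainrate, mm/h

X = np.column_stack([frac_area, (tb - tb.mean()) / tb.std()])
thresholds = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

# One logistic model per threshold: P(rainrate > t | covariates).
exceed = np.array([
    LogisticRegression().fit(X, (rain > t).astype(int)).predict_proba(X)[:, 1].mean()
    for t in thresholds
])

# Mean rainrate from the exceedance curve (tail beyond 8 mm/h is neglected here).
mean_rain = exceed[0] * thresholds[0] + np.trapz(exceed, thresholds)
print("estimated mean rainrate %.2f mm/h (sample mean %.2f)" % (mean_rain, rain.mean()))
```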
Hearing threshold shifts and recovery after noise exposure in beluga whales, Delphinapterus leucas.
Popov, Vladimir V; Supin, Alexander Ya; Rozhnov, Viatcheslav V; Nechaev, Dmitry I; Sysuyeva, Evgenia V; Klishin, Vladimir O; Pletenko, Mikhail G; Tarakanov, Mikhail B
2013-05-01
Temporary threshold shift (TTS) after loud noise exposure was investigated in a male and a female beluga whale (Delphinapterus leucas). The thresholds were evaluated using the evoked-potential technique, which allowed for threshold tracing with a resolution of ~1 min. The fatiguing noise had a 0.5 octave bandwidth, with center frequencies ranging from 11.2 to 90 kHz, a level of 165 dB re. 1 μPa and exposure durations from 1 to 30 min. The effects of the noise were tested at probe frequencies ranging from -0.5 to +1.5 octaves relative to the noise center frequency. The effect was estimated in terms of both immediate (1.5 min) post-exposure TTS and recovery duration. The highest TTS with the longest recovery duration was produced by noises of lower frequencies (11.2 and 22.5 kHz) and appeared at a test frequency of +0.5 octave. At higher noise frequencies (45 and 90 kHz), the TTS decreased. The TTS effect gradually increased with prolonged exposures ranging from 1 to 30 min. There was a considerable TTS difference between the two subjects.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured, by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
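A compact, self-contained version of least-squares density-difference (LSDD) estimation with Gaussian kernels, which admits a closed-form solution; fixed hyperparameters replace the cross-validation, reservoir and threshold machinery of the actual test, so this is only the scoring core under assumed settings. A deployment would calibrate a detection threshold on no-change scores, e.g. by bootstrapping, to hit the designer's false positive rate.

```python
import numpy as np

def lsdd(x1, x2, sigma=1.0, lam=1e-3):
    """Least-squares density-difference score between two sample sets.

    Fits g(x) = sum_l theta_l * exp(-||x - c_l||^2 / (2 sigma^2)) to
    p1(x) - p2(x) in closed form, and returns the estimated squared L2
    distance 2 theta'h - theta'H theta (larger = more change).
    """
    centers = np.vstack([x1, x2])                       # kernel centers
    d = centers.shape[1]

    def gram(a, b, width):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * width ** 2))

    # Closed-form integrals of products of Gaussian kernels.
    H = (np.pi * sigma ** 2) ** (d / 2) * gram(centers, centers, np.sqrt(2) * sigma)
    h = gram(centers, x1, sigma).mean(1) - gram(centers, x2, sigma).mean(1)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return 2 * theta @ h - theta @ H @ theta

rng = np.random.default_rng(10)
ref = rng.normal(0, 1, (200, 2))                        # reference window
same = rng.normal(0, 1, (200, 2))                       # stationary stream
drift = rng.normal(0.8, 1, (200, 2))                    # stream after a change
print("score (no change): %.4f" % lsdd(ref, same))
print("score (change):    %.4f" % lsdd(ref, drift))
```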
Impact of rainfall spatial variability on Flash Flood Forecasting
NASA Astrophysics Data System (ADS)
Douinot, Audrey; Roux, Hélène; Garambois, Pierre-André; Larnier, Kevin
2014-05-01
According to the United States National Hazard Statistics database, flooding and flash flooding have caused the largest number of deaths of any weather-related phenomenon over the last 30 years (Flash Flood Guidance Improvement Team, 2003). Like the storms that cause them, flash floods are highly variable and non-linear phenomena in time and space, with the result that understanding and anticipating flash flood genesis is far from straightforward. In the U.S., the Flash Flood Guidance (FFG) estimates the average number of inches of rainfall for given durations required to produce flash flooding in the indicated county. In Europe, flash floods often occur on small catchments (approximately 100 km2), and it has been shown that the spatial variability of rainfall has a great impact on the catchment response (Le Lay and Saulnier, 2007). Therefore, in this study, based on the Flash Flood Guidance method, rainfall spatial variability information is introduced into the threshold estimation. As for FFG, the threshold is the number of millimeters of rainfall required to produce a discharge higher than the discharge corresponding to the first-level (yellow) warning of the French flood warning service (SCHAPI: Service Central d'Hydrométéorologie et d'Appui à la Prévision des Inondations). The indexes δ1 and δ2 of Zoccatelli et al. (2010), based on the spatial moments of catchment rainfall, are used to characterize the rainfall spatial distribution. The impacts of rainfall spatial variability on the warning threshold and on hydrological processes are then studied. The spatially distributed hydrological model MARINE (Roux et al., 2011), dedicated to flash flood prediction, is forced with synthetic rainfall patterns of different spatial distributions. This allows the determination of a warning threshold diagram: knowing the spatial distribution of the rainfall forecast, and therefore the two indexes δ1 and δ2, the threshold value is read off the diagram. A warning threshold diagram is built for each studied catchment. The proposed methodology is applied to three Mediterranean catchments often subject to flash floods. The new forecasting method, as well as the Flash Flood Guidance method (uniform rainfall threshold), is tested on 25 flash flood events that occurred on those catchments. Results show a significant impact of rainfall spatial variability: the uniform rainfall threshold (FFG threshold) always overestimates the observed rainfall threshold, with differences between the FFG threshold and the proposed threshold ranging from 8% to 30%. The proposed methodology yields a threshold more representative of the observed one. However, results depend strongly on the event duration and on the catchment properties; for instance, the impact of rainfall spatial variability appears to be correlated with catchment size. According to these results, it seems worthwhile to introduce information on catchment properties into the threshold calculation.
References: Flash Flood Guidance Improvement Team, 2003. River Forecast Center (RFC) Development Management Team, Final Report. Office of Hydrologic Development (OHD), Silver Spring, Maryland. Le Lay, M. and Saulnier, G.-M., 2007. Exploring the signature of climate and landscape spatial variabilities in flash flood events: Case of the 8-9 September 2002 Cévennes-Vivarais catastrophic event. Geophysical Research Letters, 34(L13401), doi:10.1029/2007GL029746. Roux, H., Labat, D., Garambois, P.-A., Maubourguet, M.-M., Chorda, J. and Dartus, D., 2011. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments. Nat. Hazards Earth Syst. Sci., 11(9), 2567-2582. Zoccatelli, D., Borga, M., Zanon, F., Antonescu, B. and Stancalie, G., 2010. Which rainfall spatial information for flash flood response modelling? A numerical investigation based on data from the Carpathian range, Romania. Journal of Hydrology, 394(1-2), 148-161.
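For reference, the first scaled moment δ1 of Zoccatelli et al. (2010) compares the rainfall-weighted mean flow distance with the catchment's mean flow distance; values below 1 indicate rainfall concentrated toward the outlet. A minimal sketch on a synthetic catchment (all geometry invented):

```python
import numpy as np

def delta1(rain, flow_dist):
    """First scaled spatial moment of catchment rainfall (Zoccatelli et al., 2010).

    rain: rainfall depth per cell; flow_dist: flow distance of each cell to the
    outlet. delta1 < 1: rain centered downstream; delta1 > 1: upstream.
    """
    weighted = (rain * flow_dist).sum() / rain.sum()   # rainfall-weighted distance
    return weighted / flow_dist.mean()                 # scaled by the catchment mean

rng = np.random.default_rng(11)
flow_dist = rng.uniform(0, 20_000, 2500)               # cell flow distances, m
near_outlet = np.exp(-flow_dist / 5_000)               # storm near the outlet
upstream = np.exp(-(flow_dist.max() - flow_dist) / 5_000)
print("delta1, storm near outlet: %.2f" % delta1(near_outlet, flow_dist))
print("delta1, storm upstream:    %.2f" % delta1(upstream, flow_dist))
```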
Theoretical studies of the potential surface for the F + H2 → HF + H reaction
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Walch, Stephen P.; Langhoff, Stephen R.; Taylor, Peter R.; Jaffe, Richard L.
1987-01-01
The F + H2 → HF + H potential energy hypersurface was studied in the saddle point and entrance channel regions. Using a large (5s 5p 3d 2f 1g/4s 3p 2d) atomic natural orbital basis set, a classical barrier height of 1.86 kcal/mole was obtained at the CASSCF/multireference CI (MRCI) level after correcting for basis set superposition error and including a Davidson correction (+Q) for higher excitations. Based upon an analysis of the computed results, the true classical barrier is estimated to be about 1.4 kcal/mole. The location of the bottleneck on the lowest vibrationally adiabatic potential curve was also computed, and the translational energy threshold was determined from a one-dimensional tunneling calculation. Using the difference between the calculated and experimental thresholds to adjust the classical barrier height on the computed surface yields a classical barrier in the range of 1.0 to 1.5 kcal/mole. Combining the results of the direct estimates of the classical barrier height with the empirical values obtained from the approximate calculations of the dynamical threshold, it is predicted that the true classical barrier height is 1.4 ± 0.4 kcal/mole. Arguments are presented in favor of including the relatively large +Q correction obtained when nine electrons are correlated at the CASSCF/MRCI level.
Automated Detection of Clouds in Satellite Imagery
NASA Technical Reports Server (NTRS)
Jedlovec, Gary
2010-01-01
Many different approaches have been used to automatically detect clouds in satellite imagery. Most approaches are deterministic and provide a binary cloud/no-cloud product used in a variety of applications. Some of these applications require the identification of cloudy pixels for cloud parameter retrieval, while others require only an ability to mask out clouds for the retrieval of surface or atmospheric parameters in the absence of clouds. A few approaches estimate a probability of the presence of a cloud at each point in an image. These probabilities allow a user to select cloud information based on the tolerance of the application to uncertainty in the estimate. Many automated cloud detection techniques develop sophisticated tests using a combination of visible and infrared channels to determine the presence of clouds in both day and night imagery. Visible channels are quite effective in detecting clouds during the day, as long as test thresholds properly account for variations in surface features and atmospheric scattering. Cloud detection at night is more challenging, since only coarser-resolution infrared measurements are available. A few schemes use just two infrared channels for day and night cloud detection. The most influential factor in the success of a particular technique is the determination of the thresholds for each cloud test: the techniques that perform best usually vary their thresholds based on geographic region, time of year, time of day and solar angle.
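A toy sketch of such a two-channel infrared test with context-dependent thresholds; the channel pairing (11 μm vs 3.9 μm brightness temperatures) is a common choice in night-time schemes, but the specific threshold values here are invented for illustration.

```python
import numpy as np

def cloud_mask(bt11, bt39, region="midlat", night=True):
    """Toy two-channel IR cloud test; threshold values are illustrative only.

    bt11: 11-micron brightness temperature (K); bt39: 3.9-micron (K).
    Real schemes tune thresholds by region, season, time of day and solar angle.
    """
    cold_thr = {"midlat": 265.0, "tropics": 280.0}[region]   # scene-dependent
    diff_thr = 2.0 if night else 4.0                         # time-dependent
    cold_cloud = bt11 < cold_thr                  # cold, opaque cloud tops
    low_cloud = (bt39 - bt11) > diff_thr          # emissivity difference test
    return cold_cloud | low_cloud

bt11 = np.array([250.0, 275.0, 283.0])
bt39 = np.array([251.0, 279.0, 283.5])
print(cloud_mask(bt11, bt39))                     # expect [True, True, False]
```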
Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex
2016-01-01
Objective: Assays for classifying HIV infections as 'recent' or 'non-recent' for incidence surveillance fail to simultaneously achieve large mean durations of 'recent' infection (MDRIs) and low 'false-recent' rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning the thresholds used to dichotomize quantitative measures, is explored. Design: The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods: MDRIs were estimated using regression, and FRRs as observed 'recent' proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results: All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario; BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in the scenarios considered. Conclusions: The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561
Complex Variation in Measures of General Intelligence and Cognitive Change
Rowe, Suzanne J.; Rowlatt, Amy; Davies, Gail; Harris, Sarah E.; Porteous, David J.; Liewald, David C.; McNeill, Geraldine; Starr, John M.
2013-01-01
Combining information from multiple SNPs may capture a greater amount of genetic variation than the sum of individual SNP effects and help identify missing heritability. Regions may capture variation from multiple common variants of small effect, multiple rare variants, or a combination of both. We describe regional heritability mapping of human cognition. Measures of crystallised (gc) and fluid intelligence (gf) in late adulthood (64-79 years) were available for 1806 individuals genotyped for 549,692 autosomal single nucleotide polymorphisms (SNPs). The same individuals were tested at age 11, giving us the rare opportunity to measure cognitive change across most of their lifespan. 547,750 SNPs ranked by position are divided into 10,908 overlapping regions of 101 SNPs each, to estimate the genetic variance each region explains, an approach that resembles classical linkage methods. We also estimate the genetic variation explained by individual autosomes and by SNPs within genes. Empirical significance thresholds are estimated separately for each trait from whole-genome scans of 500 permuted data sets. The 5% significance threshold for the likelihood ratio test of a single region ranged from 17 to 17.5 for the three traits. This is equivalent to nominal significance, under the expectation of a chi-squared distribution (between 1 df and 0), of P < 1.44×10^-5. These thresholds indicate that the distribution of the likelihood ratio test from this type of variance component analysis should be estimated empirically. Furthermore, we show that estimates of variation explained by these regions can be grossly overestimated. After applying permutation thresholds, a region for gf on chromosome 5 spanning the PRRC1 gene is significant at a genome-wide 10% empirical threshold. Analysis of gene methylation in the temporal cortex provides support for the association of PRRC1 and fluid intelligence (P = 0.004), and provides a prime candidate gene for high-throughput sequencing of these uniquely informative cohorts. PMID:24349040
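A schematic version of the empirical-threshold procedure described above: permute the phenotype to break any genotype-phenotype association, rescan all regions, record the maximum statistic per permutation, and take the 95th percentile of those maxima as the 5% genome-wide threshold. The per-region scan below is a stand-in correlation score, not the variance-component fit used in the study, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
n, n_regions, n_perm = 500, 500, 200

genotypes = rng.integers(0, 3, (n, n_regions)).astype(float)  # 0/1/2 allele counts
phenotype = rng.normal(0, 1, n)

def scan(y, g):
    """Stand-in regional scan: an approximately chi-square(1) score per region."""
    gc = g - g.mean(axis=0)
    yc = y - y.mean()
    r = gc.T @ yc / (np.linalg.norm(gc, axis=0) * np.linalg.norm(yc))
    return len(y) * r ** 2

max_stats = [scan(rng.permutation(phenotype), genotypes).max()
             for _ in range(n_perm)]

print("5%% genome-wide empirical threshold: %.1f" % np.quantile(max_stats, 0.95))
```

Taking the maximum over all regions within each permutation is what builds the multiple-testing correction into the threshold.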
NASA Technical Reports Server (NTRS)
Temkin, A.
1984-01-01
Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross sections for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kyung-Min; Min Kim, Chul; Moon Jeong, Tae, E-mail: jeongtm@gist.ac.kr
A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10^10 W/cm^2 to 2 × 10^15 W/cm^2 based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
Holubar, Marisa; Stavroulakis, Maria Christina; Maldonado, Yvonne; Ioannidis, John P A; Contopoulos-Ioannidis, Despina
2017-01-01
Inclusion of vaccine herd-protection effects in cost-effectiveness analyses (CEAs) can affect the CEAs' conclusions. However, empirical epidemiologic data on the size of herd-protection effects from original studies are limited. We performed a quantitative comparative analysis of the impact of herd-protection effects in CEAs for four childhood vaccinations (pneumococcal, meningococcal, rotavirus and influenza). We considered CEAs reporting incremental cost-effectiveness ratios (ICERs) (per quality-adjusted life year [QALY] gained, per life year [LY] gained, or per disability-adjusted life year [DALY] avoided), both with and without herd protection, while keeping all other model parameters stable. We calculated the size of the ICER differences without vs. with herd protection and estimated how often inclusion of herd protection led to crossing of the cost-effectiveness threshold (of an assumed societal willingness to pay) of $50,000 for more-developed countries or 3×GDP/capita (WHO threshold) for less-developed countries. We identified 35 CEA studies (20 pneumococcal, 4 meningococcal, 8 rotavirus and 3 influenza vaccines) with 99 ICER analyses (55 per QALY, 27 per LY and 17 per DALY). The median absolute ICER differences per QALY, LY and DALY (without minus with herd protection) were $15,620 (IQR: $877 to $48,376), $54,871 (IQR: $787 to $115,026) and $49 (IQR: $15 to $1,636), respectively. When the target vaccination strategy was not cost-saving without herd protection, inclusion of herd protection always resulted in more favorable results. In CEAs that had ICERs above the cost-effectiveness threshold without herd protection, inclusion of herd protection led to crossing of that threshold in 45% of the cases. This affected only CEAs for more-developed countries, as all but one CEA for less-developed countries had ICERs below the WHO cost-effectiveness threshold even without herd protection. In several analyses, the recommendation to adopt the target vaccination strategy depended on inclusion of the herd-protection effect. Inclusion of herd-protection effects in CEAs had a substantial impact on the estimated ICERs and made target vaccination strategies more attractive options in almost half of the cases where ICERs were above the societal willingness-to-pay threshold without herd protection. More empirical epidemiologic data are needed to determine the size of herd-protection effects across diverse settings, as well as the size of negative vaccine effects, e.g., from serotype substitution.
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Koyama, Shuji; Yabe, Takuya; Komori, Masataka; Tada, Junki; Ito, Shiori; Toshito, Toshiyuki; Hirata, Yuho; Watanabe, Kenichi
2018-03-01
Luminescence of water during irradiation with proton beams, or with X-ray photons of energy below the Cerenkov-light threshold, is promising for range estimation and for measuring beam distributions. However, it is not yet clear whether the intensities and distributions are stable across water conditions such as temperature or the addition of soluble materials. It also remains unclear whether the luminescence of water increases linearly with the irradiated proton or X-ray energy. Consequently, we measured the luminescence of water during irradiation with proton beams or with X-ray photons below the Cerenkov-light threshold under different water conditions and energies, to evaluate the stability and linearity of the luminescence. A water phantom was set up with a proton therapy or X-ray system, and luminescence images of water under different conditions and energies were measured with a high-sensitivity cooled charge-coupled device (CCD) camera during proton or X-ray irradiation of the phantom. In the stability measurements, imaging was performed at different water temperatures and with inorganic and organic materials added to the water. In the linearity measurements for protons, we irradiated at four different energies below the Cerenkov-light threshold; for X-rays, we irradiated at different supplied voltages. We evaluated the depth profiles of the luminescence images and assessed the light intensities and distributions. The results showed that the luminescence of water was quite stable across the water conditions: there were no significant changes in intensities or distributions at the different temperatures. The linearity experiments showed that the luminescence of water increased linearly with energy. We confirmed that the luminescence of water is stable with respect to water conditions and increases linearly with energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
This paper investigates the quantitative relationship of ionizing radiation to the occurrence of posterior lenticular opacities among the survivors of the atomic bombings of Hiroshima and Nagasaki suggested by the DS86 dosimetry system. DS86 doses are available for 1983 (93.4%) of the 2124 atomic bomb survivors analyzed in 1982. The DS86 kerma neutron component for Hiroshima survivors is much smaller than its comparable T65DR component, but still 4.2-fold higher (0.38 Gy at 6 Gy) than that in Nagasaki (0.09 Gy at 6 Gy). Thus, if the eye is especially sensitive to neutrons, there may yet be some useful information on their effects, particularly in Hiroshima. The dose-response relationship has been evaluated as a function of the separately estimated gamma-ray and neutron doses. Among several different dose-response models without and with two thresholds, we have selected as the best model the one with the smallest χ2 or the largest log likelihood value associated with the goodness of fit. The best fit is a linear gamma-linear neutron relationship which assumes different thresholds for the two types of radiation. Both gamma and neutron regression coefficients for the best fitting model are positive and highly significant for the estimated DS86 eye organ dose.
Digital camera auto white balance based on color temperature estimation clustering
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong
2010-11-01
Auto white balance (AWB) is an important technique for digital cameras. The human visual system has the ability to recognize the original color of an object in a scene illuminated by a light source that has a different color temperature from D65, the standard sunlight. However, recorded images or video clips can only record the light actually incident on the sensor, so the recorded scene will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source is that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption method is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the given list, that block is discarded. Fourth, the remaining blocks are put to a majority vote, and the color temperature having the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources. The color casts are removed and the final images look natural.
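A condensed sketch of the described pipeline follows. This is a hedged illustration: the illuminant table, tolerances, and block count below are invented placeholders, and a real implementation would calibrate them per sensor.

```python
import numpy as np

# Hypothetical illumination list: color temperature plus the R/G and B/G
# ratios a gray-world estimate would produce under that light, with a
# matching tolerance. Real values would be calibrated per sensor.
ILLUMINANTS = [
    {"cct": 2800, "rg": 1.35, "bg": 0.55, "tol": 0.12},  # incandescent
    {"cct": 4000, "rg": 1.15, "bg": 0.75, "tol": 0.12},  # fluorescent
    {"cct": 6500, "rg": 1.00, "bg": 1.00, "tol": 0.12},  # D65 daylight
]

def estimate_illuminant(img, n_blocks=8):
    """Majority vote over per-block gray-world color-cast estimates."""
    h, w, _ = img.shape
    votes = []
    for bi in np.array_split(np.arange(h), n_blocks):
        for bj in np.array_split(np.arange(w), n_blocks):
            block = img[np.ix_(bi, bj)]
            r, g, b = block.reshape(-1, 3).mean(axis=0)
            if g == 0:
                continue
            rg, bg = r / g, b / g
            # Keep the block only if its cast matches a listed illuminant.
            for ill in ILLUMINANTS:
                if abs(rg - ill["rg"]) < ill["tol"] and abs(bg - ill["bg"]) < ill["tol"]:
                    votes.append(ill["cct"])
                    break
    if not votes:
        return None  # no confident estimate; fall back to global gray world
    ccts, counts = np.unique(votes, return_counts=True)
    return ccts[np.argmax(counts)]
```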
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-07-06
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
Zhao, Fei-Li; Yue, Ming; Yang, Hua; Wang, Tian; Wu, Jiu-Hong; Li, Shu-Chuen
2011-03-01
To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) ratio from stated preference data and to compare the results obtained between chronic prostatitis (CP) patients and the general population (GP). WTP per QALY was calculated from the subjects' own health-related utility and the WTP value. Two widely used preference-based health-related quality of life instruments, EuroQol (EQ-5D) and Short Form 6D (SF-6D), were used to elicit utility for participants' own health. The monthly WTP values for moving from participants' current health to perfect health were elicited using a closed-ended iterative bidding contingent valuation method. A total of 268 CP patients and 364 participants from the GP completed the questionnaire. We obtained 4 WTP/QALY ratios ranging from $4700 to $7400, close to the lower bound of local gross domestic product (GDP) per capita, a threshold proposed by the World Health Organization. Nevertheless, these values were lower than other proposed thresholds and than published empirical research on diseases with mortality risk. Furthermore, the WTP/QALY ratios from the GP were significantly lower than those from the CP patients, and different determinants were associated with the within-group variation identified by multiple linear regression. Preference elicitation methods are acceptable and feasible in the socio-cultural context of an Asian environment, and the calculation of the WTP/QALY ratio produced meaningful answers. The necessity of considering the QALY type or disease-specific QALY in estimating the WTP/QALY ratio was highlighted, and 1 to 3 times GDP per capita, as recommended by the World Health Organization, could potentially serve as a benchmark threshold in this Asian context.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data, comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, re-examination of non-cloudy pixels, and a cross-band filter method are applied in sequence to determine the cloud statistics. In the post-processing analysis, a box-counting fractal method is applied. In other words, the cloud statistics are first determined via the pre-processing analysis, and their correctness across the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conducted a series of experiments with clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and the GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
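Since Otsu's method is singled out as the best-performing thresholding step, a self-contained NumPy version is sketched below. This is the standard textbook formulation, not the authors' code.

```python
import numpy as np

def otsu_threshold(gray, n_bins=256):
    """Otsu's method: choose the threshold that maximizes the between-class
    variance of the two resulting pixel classes."""
    hist, edges = np.histogram(gray, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)            # class-0 weight at each candidate cut
    mu = np.cumsum(p * centers)  # cumulative mean up to each cut
    mu_total = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```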
Mechanisms of breathing instability in patients with obstructive sleep apnea.
Younes, Magdy; Ostrowski, Michele; Atkar, Raj; Laprairie, John; Siemens, Andrea; Hanly, Patrick
2007-12-01
The response to chemical stimuli (chemical responsiveness) and the increases in respiratory drive required for arousal (arousal threshold) and for opening the airway without arousal (effective recruitment threshold) are important determinants of ventilatory instability and, hence, severity of obstructive apnea. We measured these variables in 21 obstructive apnea patients (apnea-hypopnea index 91 ± 24 h−1) while on continuous positive airway pressure. During sleep, pressure was intermittently reduced (dial down) to induce severe hypopneas. Dial downs were done on room air and following approximately 30 s of breathing hypercapnic and/or hypoxic mixtures, which induced a range of ventilatory stimulation before dial down. Ventilation just before dial down and flow during dial down were measured. Chemical responsiveness, estimated as the percent increase in ventilation during the 5th breath following administration of 6% CO2 combined with approximately 4% desaturation, was large (187 ± 117%). Arousal threshold, estimated as the percent increase in ventilation associated with a 50% probability of arousal, ranged from 40% to >268% and was <120% in 12/21 patients, indicating that in many patients arousal occurs with modest changes in chemical drive. Effective recruitment threshold, estimated as the percent increase in pre-dial-down ventilation associated with a significant increase in dial-down flow, ranged from zero to >174% and was <110% in 12/21 patients, indicating that in many patients reflex dilatation occurs with modest increases in drive. The two thresholds were not correlated. In most OSA patients, airway patency may be maintained with only modest increases in chemical drive, but instability results because of a low arousal threshold and a brisk increase in drive following brief reductions in alveolar ventilation.
Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross
2012-01-01
Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
Henry, Kenneth S; Amburgey, Kassidy N; Abrams, Kristina S; Idrobo, Fabio; Carney, Laurel H
2017-10-01
Vowels are complex sounds with four to five spectral peaks known as formants. The frequencies of the two lowest formants, F1 and F2, are sufficient for vowel discrimination. Behavioral studies show that many birds and mammals can discriminate vowels. However, few studies have quantified thresholds for formant-frequency discrimination. The present study examined formant-frequency discrimination in budgerigars (Melopsittacus undulatus) and humans using stimuli with one or two formants and a constant fundamental frequency of 200 Hz. Stimuli had spectral envelopes similar to natural speech and were presented with random level variation. Thresholds were estimated for frequency discrimination of F1, of F2, and of simultaneous F1 and F2 changes. The same two-down, one-up tracking procedure and single-interval, two-alternative task were used for both species. Formant-frequency discrimination thresholds were as sensitive in budgerigars as in humans and followed the same patterns across all conditions. Thresholds expressed as percent frequency difference were higher for F1 than for F2, and were unchanged between stimuli with one or two formants. Thresholds for simultaneous F1 and F2 changes indicated that discrimination was based on combined information from both formant regions. Results were consistent with previous human studies and show that budgerigars provide an exceptionally sensitive animal model of vowel feature discrimination.
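The two-down, one-up tracking rule used here converges on the ~70.7%-correct point of the psychometric function. A minimal simulation sketch follows; the toy observer, step size, and reversal counts are assumptions, not the study's parameters.

```python
import random

def two_down_one_up(respond, start, step, n_reversals=12):
    """Two-down one-up adaptive track: the level drops after two consecutive
    correct responses and rises after any error, converging on the ~70.7%-
    correct point. respond(level) returns True for a correct trial."""
    level, correct_in_a_row, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_a_row += 1
            if correct_in_a_row == 2:
                correct_in_a_row = 0
                if direction == +1:          # track was rising: a reversal
                    reversals.append(level)
                level, direction = level - step, -1
        else:
            correct_in_a_row = 0
            if direction == -1:              # track was falling: a reversal
                reversals.append(level)
            level, direction = level + step, +1
    last = reversals[-8:]                    # threshold: mean of late reversals
    return sum(last) / len(last)

# Toy observer whose percent correct grows with stimulus level.
thr = two_down_one_up(lambda x: random.random() < min(0.99, 0.5 + 0.05 * x),
                      start=10.0, step=1.0)
print(thr)
```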
Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection
NASA Astrophysics Data System (ADS)
Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis
2016-04-01
In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that many observations (those below the chosen threshold) are completely disregarded: the range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.
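The threshold-selection difficulty that motivates the paper is easy to demonstrate: a generalized Pareto distribution fitted to excesses over different candidate thresholds can return noticeably different shape parameters, and hence different extrapolated quantiles. A small sketch with synthetic data (not the Lyon series) follows.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.4, scale=6.0, size=20000)  # synthetic hourly rainfall

# Fit a GPD to excesses above several candidate thresholds; the estimated
# shape xi (and so any extrapolated return level) shifts with the choice.
for q in (0.90, 0.95, 0.99):
    u = np.quantile(rain, q)
    excesses = rain[rain > u] - u
    shape, loc, scale = genpareto.fit(excesses, floc=0.0)
    print(f"threshold={u:6.2f}  xi={shape:+.3f}  sigma={scale:.3f}")
```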
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum: data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
Schüller, L K; Burfeind, O; Heuwieser, W
2014-05-01
The objectives of this retrospective study were to investigate the relationship between temperature-humidity index (THI) and conception rate (CR) of lactating dairy cows, to estimate a threshold for this relationship, and to identify periods of exposure to heat stress relative to breeding in an area of moderate climate. In addition, we compared three different heat load indices related to CR: mean THI, maximum THI, and number of hours above the mean THI threshold. The THI threshold for the influence of heat stress on CR was 73. It was chosen statistically, based on the observed relationship between the mean THI at the day of breeding and the resulting CR. Negative effects of heat stress, however, were already apparent at lower levels of THI, and 1 hour of mean THI of 73 or more decreased the CR significantly. The CR of lactating dairy cows was negatively affected by heat stress both before and after the day of breeding. The greatest negative impact of heat stress on CR was observed 21 to 1 days before breeding. When the mean THI was 73 or more in this period, CR decreased from 31% to 12%. Compared with the average maximum THI and the total number of hours above the threshold, the mean THI was the most sensitive heat load index relating to CR. These results indicate that the CR of dairy cows raised in moderate climates is strongly affected by heat stress. Copyright © 2014 Elsevier Inc. All rights reserved.
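For illustration, the three heat load indices can be computed from hourly weather records as below. Note that the THI formula shown is one common formulation and an assumption on our part; the abstract does not state which definition was used.

```python
import numpy as np

def thi(temp_c, rh):
    """One widely used THI formulation (assumed here):
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26),
    with T in degrees Celsius and RH in percent."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh) * (1.8 * temp_c - 26)

# Three heat-load indices for one day of hourly records (toy data).
hourly_t = np.array([18, 17, 17, 16, 16, 17, 19, 21, 24, 26, 28, 29,
                     30, 30, 29, 28, 27, 25, 23, 22, 21, 20, 19, 18], float)
hourly_rh = np.full(24, 60.0)
hourly_thi = thi(hourly_t, hourly_rh)

mean_thi = hourly_thi.mean()
max_thi = hourly_thi.max()
hours_above = int((hourly_thi >= 73).sum())  # hours at or above the threshold
print(mean_thi, max_thi, hours_above)
```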
On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy
Li, Junran; Flagg, Cody B.; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne
2015-01-01
Current approaches to estimating the threshold friction velocity (TFV) of soil particle movement, including both experimental and empirical methods, suffer from various disadvantages, and they are particularly ineffective for estimating TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust); however, no studies have attempted to relate soil TFV directly to spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near infrared (VIS–NIR, 350–2500 nm) spectral region, and to identify the best range of wavelengths or combinations of wavelengths to predict TFV. Threshold friction velocities of 31 soils, along with their reflectance spectra and texture, were measured in the Mojave Desert, California and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400–700 nm) and the short-wavelength infrared (SWIR) area (1100–2500 nm). A partial least squares regression (PLSR) analysis confirmed the significant bands identified by the correlation analysis. The PLSR further identified a strong relationship between the first-difference transformation and TFV at several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectral range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or for mapping TFV using airborne/satellite sensors.
Using Low Levels of Stochastic Vestibular Stimulation to Improve Balance Function
Goel, Rahul; Kofman, Igor; Jeevarajan, Jerome; De Dios, Yiri; Cohen, Helen S.; Bloomberg, Jacob J.; Mulavara, Ajitkumar P.
2015-01-01
Low-level stochastic vestibular stimulation (SVS) has been associated with improved postural responses in the medio-lateral (ML) direction, but its effect on balance function in both the ML and anterior-posterior (AP) directions has not been studied. In this series of studies, the efficacy of applying low-amplitude SVS in the 0–30 Hz range between the mastoids in the ML direction for improving cross-planar balance function was investigated. Forty-five (45) subjects stood on a compliant surface with their eyes closed and were instructed to maintain a stable upright stance. Measures of stability of the head, trunk, and whole body were quantified in the ML, AP and combined APML directions. Results show that binaural bipolar SVS given in the ML direction significantly improved balance performance, with the optimal stimulus amplitude predominantly in the range of 100–500 μA for all three directions, exhibiting the stochastic resonance (SR) phenomenon. Objective perceptual and body motion thresholds, as estimates of internal noise, were also measured while subjects sat on a chair with their eyes closed and were given 1 Hz bipolar binaural sinusoidal electrical stimuli. In general, there was no significant difference between estimates of perceptual and body motion thresholds. The average optimal SVS amplitude that improved balance performance (peak SVS amplitude normalized to perceptual threshold) was estimated to be 46% in the ML, 53% in the AP, and 50% in the APML directions. A miniature patch-type SVS device may be useful for improving balance function in people with disabilities due to aging or Parkinson's disease, or in astronauts returning from long-duration space flight. PMID:26295807
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew normal, Pearson, and raised cosine distributions were used, respectively. For the evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality, or to consider alternative methods that are less sensitive to non-normality.
Higher sensitivity to sweet and salty taste in obese compared to lean individuals.
Hardikar, Samyogita; Höchenberger, Richard; Villringer, Arno; Ohla, Kathrin
2017-04-01
Although taste has putatively been associated with obesity as one of the factors governing food intake, previous studies have failed to find a consistent link between taste perception and Body Mass Index (BMI). A comprehensive comparison of both thresholds and hedonics for the four basic taste modalities (sweet, salty, sour, and bitter) has previously been carried out only in a very small sample of adults. In the present exploratory study, we compared 23 obese (OB; BMI > 30) and 31 lean (LN; BMI < 25) individuals on three dimensions of taste perception - recognition thresholds, intensity, and pleasantness - using different concentrations of sucrose (sweet), sodium chloride (NaCl; salty), citric acid (sour), and quinine hydrochloride (bitter) dissolved in water. Recognition thresholds were estimated with an adaptive Bayesian staircase procedure (QUEST). Intensity and pleasantness ratings were acquired using visual analogue scales (VAS). OB had lower thresholds than LN for sucrose and NaCl, indicating a higher sensitivity to sweet and salty tastes. This effect was also reflected in ratings of intensity, which were significantly higher in the OB group for the lower concentrations of sweet, salty, and sour. Calculation of Bayes factors further corroborated the differences observed with null-hypothesis significance testing (NHST). Overall, the results suggest that OB are more sensitive to sweet and salty, and perceive sweet, salty, and sour more intensely, than LN. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved and pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
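The object at the center of the problem, the thresholded recurrence plot, is straightforward to compute. The sketch below uses the standard definition (not the authors' Markov-model criterion) and shows how the recurrence rate depends on the chosen distance threshold eps:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 if ||x_i - x_j|| < eps."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

def recurrence_rate(x, eps):
    """Fraction of recurrent pairs; varies strongly with eps."""
    return recurrence_matrix(x, eps).mean()

# A periodic toy trajectory: the estimated structure depends on eps.
t = np.linspace(0, 8 * np.pi, 400)
traj = np.column_stack([np.sin(t), np.cos(t)])
for eps in (0.05, 0.2, 0.8):
    print(eps, recurrence_rate(traj, eps))
```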
Vibration sensory thresholds depend on pressure of applied stimulus.
Lowenthal, L M; Hockaday, T D
1987-01-01
Vibration sensory thresholds (VSTs) were estimated in 40 healthy subjects and 8 with diabetic peripheral neuropathy. A vibrameter and a biothesiometer were used at four sites and at differing pressures. In normal subjects, with the vibrameter at 200 g, mean VST ± SE for all sites was 1.87 micron ± 0.22 and at 400 g dropped to 1.08 micron ± 0.15 (P < .0001). In 20 of these subjects with a biothesiometer at 200 and 400 g, mean VST fell from 12.8 ± 1.5 to 11.1 ± 1.1 (arbitrary units) (P = .01) when the greater pressure was applied. In the 8 subjects with peripheral neuropathy, with the vibrameter at 200 and 400 g, respectively, mean VST fell from 70.7 ± 26 to 7.2 ± 1.8. VST in these subjects was estimated again after 1 month and showed strong correlations with the previous values. Biothesiometer results correlated with vibrameter results at all sites. Thus, VST decreases as the pressure of the applied stimulus is increased, and this effect appears to be more marked in peripheral neuropathy. This has important consequences for monitoring this condition.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for the CLD to improve system performance. Numerical simulations of the average PER and SE are consistent with the theoretical analysis, and the developed CLD with an average PER constraint meets the target PER requirement and shows better performance than the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method can appreciably increase the system SE and greatly reduce the impact of feedback delay.
Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien
2016-01-01
This paper compares three different methods capable of estimating the deflection of the vertical (DoV): the first is based on the joint use of high-precision spirit leveling and Global Navigation Satellite Systems (GNSS), the second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), the latter being excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high-precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with an operational supremacy of the astro-geodetic method, being faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level, which was assumed as the threshold. Finally, the geoid-model-based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and could be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544
Performance of ultrasound fetal weight estimation in twins.
Dimassi, Kaouther; Karoui, Abir; Triki, Amel; Gara, Mohamed Faouzi
2016-03-01
Ultrasonography is an essential tool in the management of twin pregnancies. Fetal weight estimation is useful for anticipating neonatal care in case of weight restriction or growth discordance. Objectives: to assess the accuracy of estimated fetal weight (EFW) in twins and the accuracy of sonographic examination in predicting birth weight discordance (BWD) and small birth weight (SBW). Methods: This was a longitudinal prospective study over a period of one year. We included 50 twin pregnancies with a first-trimester ultrasound-calculated term and specified chorionicity. An ultrasound EFW was scheduled for all patients within an interval of 4 days before delivery. We calculated the differences between EFW and birth weight (BW) in terms of absolute difference and percentage error. We studied the correlation and the agreement between EFW and BW. Finally, we calculated the sensitivity, specificity, PPV and NPV of ultrasound in the diagnosis of BWD and SBW. Absolute differences between EFW and BW were similar for the two twins. The relative difference was 7.7% [0-32] for T1 and 8.2% [0-27] for T2. The margin of error was greater than 10% in 38% of cases for T1 and in 34% of cases for T2. Furthermore, the correlation coefficients for T1 and T2 were close to 1: R1 = 0.87 and R2 = 0.89. Linear regression analysis allowed us to predict birth weight from the estimated weight according to the following equations: BW_T1 = 415.57 + 0.846 × EFW_T1 for the first twin, and BW_T2 = 65.68 + 0.963 × EFW_T2 for the second twin. Chorionicity, presentation and gestational age did not affect the estimations. Ultrasonography in the diagnosis of SBW had a sensitivity of 90.32%, a specificity of 76.82%, a positive predictive value (PPV) of 80% and a negative predictive value (NPV) of 87%. The performance of ultrasound in the diagnosis of BWD varied according to the adopted threshold. Ultrasound is an effective examination for estimating twin weights. Regarding prenatal diagnosis of birth weight discordance, the relevance of this examination increases with the adopted threshold.
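As a quick worked example of the reported regressions (purely illustrative), an EFW of 3000 g maps to a predicted birth weight of about 2954 g for either twin:

```python
def predict_bw(efw_g, twin):
    """Regression equations reported in the abstract (weights in grams)."""
    if twin == 1:
        return 415.57 + 0.846 * efw_g
    return 65.68 + 0.963 * efw_g

print(predict_bw(3000, twin=1))  # 2953.57
print(predict_bw(3000, twin=2))  # 2954.68
```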
Sensitivity of neurons in the middle temporal area of marmoset monkeys to random dot motion.
Chaplin, Tristan A; Allitt, Benjamin J; Hagan, Maureen A; Price, Nicholas S C; Rajan, Ramesh; Rosa, Marcello G P; Lui, Leo L
2017-09-01
Neurons in the middle temporal area (MT) of the primate cerebral cortex respond to moving visual stimuli. The sensitivity of MT neurons to motion signals can be characterized by using random-dot stimuli, in which the strength of the motion signal is manipulated by adding different levels of noise (elements that move in random directions). In macaques, this has allowed the calculation of "neurometric" thresholds. We characterized the responses of MT neurons in sufentanil/nitrous oxide-anesthetized marmoset monkeys, a species that has attracted considerable recent interest as an animal model for vision research. We found that MT neurons show a wide range of neurometric thresholds and that the responses of the most sensitive neurons could account for the behavioral performance of macaques and humans. We also investigated factors that contributed to the wide range of observed thresholds. The difference in firing rate between responses to motion in the preferred and null directions was the most effective predictor of neurometric threshold, whereas the direction tuning bandwidth had no correlation with the threshold. We also showed that it is possible to obtain reliable estimates of neurometric thresholds using stimuli that are not highly optimized for each neuron, as is often necessary when recording concurrently from large populations of neurons with different receptive fields, as was the case in this study. These results demonstrate that marmoset MT shows an essential physiological similarity to macaque MT and suggest that its neurons are capable of representing motion signals that allow for comparable motion-in-noise judgments. NEW & NOTEWORTHY We report the activity of neurons in marmoset MT in response to random-dot motion stimuli of varying coherence. The information carried by individual MT neurons was comparable to that of the macaque, and the maximum firing rates were a strong predictor of sensitivity. Our study provides key information regarding the neural basis of motion perception in the marmoset, a small primate species that is becoming increasingly popular as an experimental model. Copyright © 2017 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
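The ERB representation referred to here is commonly computed with the Glasberg and Moore (1990) approximation; a one-line version is sketched below. Note this particular parameterization is an assumption, as the abstract does not state which formula the authors used.

```python
def erb_hz(f_hz):
    """Glasberg & Moore (1990) approximation of the auditory-filter
    equivalent rectangular bandwidth: ERB = 24.7 * (4.37 * f_kHz + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# ERB widens with center frequency: ~51 Hz at 250 Hz, ~133 Hz at 1 kHz.
for f in (250, 1000, 4000):
    print(f, round(erb_hz(f), 1))
```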
Threshold corrections to the bottom quark mass revisited
Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart
2015-05-19
Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made in estimating the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass, but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.
Uncertainties in extreme surge level estimates from observational records.
van den Brink, H W; Können, G P; Opsteegh, J D
2005-06-15
Ensemble simulations with a total length of 7540 years are generated with a climate model, and coupled to a simple surge model to transform the wind field over the North Sea into the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10^4 years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak-over-threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold, and its tendency towards too-low estimates, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the exceedance rate λ chosen should not be larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10^4-year wind speed from such records must always be interpreted as a lower limit.
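A minimal sketch of the GEV approach recommended here fits the storm-season maxima and reads off the 10^4-year level as the (1 − 10^−4) quantile of the fitted distribution (synthetic data below, not the Delfzijl record):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Synthetic 116-year record of storm-season maximum surge (cm).
season_maxima = rng.gumbel(loc=250, scale=40, size=116)

# Fit a GEV to the seasonal maxima; the T-year return level is the
# (1 - 1/T) quantile of the fitted distribution.
c, loc, scale = genextreme.fit(season_maxima)
level_1e4 = genextreme.ppf(1 - 1e-4, c, loc=loc, scale=scale)
print(f"estimated 10^4-year surge level: {level_1e4:.0f} cm")
```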
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Estimating soil moisture exceedance probability from antecedent rainfall
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.
2016-12-01
The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
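A simplified sketch of the two ingredients described, nearest-neighbor conditional resampling to infill gaps and an empirical exceedance probability, is given below. The function names, donor-pool size, and tolerance are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def impute_hourly(target, neighbors, rng):
    """Fill gaps in target (1-D hourly rain, NaN = missing) by resampling:
    for each missing hour, draw a donor hour whose neighbor-gauge values are
    closest to those observed at the missing hour (nearest-neighbor
    conditional resampling)."""
    target = target.copy()
    observed = ~np.isnan(target)
    for i in np.where(~observed)[0]:
        # Distance between neighbor-gauge conditions at hour i and at donors.
        d = np.linalg.norm(neighbors[observed] - neighbors[i], axis=1)
        k = min(5, d.size)                           # assumed donor-pool size
        donors = np.argsort(d)[:k]
        pick = rng.choice(np.where(observed)[0][donors])
        target[i] = target[pick]
    return target

def exceedance_probability(antecedent, vwc, vwc_threshold, rain_level, tol=10.0):
    """P(VWC > threshold) among hours whose antecedent rainfall lies within
    +/- tol mm of rain_level (empirical estimate)."""
    near = np.abs(antecedent - rain_level) < tol
    return (vwc[near] > vwc_threshold).mean() if near.any() else np.nan
```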
Frątczak-Łagiewska, Katarzyna; Matuszewski, Szymon
2018-05-01
Differences in size between males and females, called sexual size dimorphism, are common in insects. These differences may be accompanied by differences in the duration of development. Accordingly, it is believed that insect sex may be used to increase the accuracy of insect age estimates in forensic entomology. Here, the sex-specific differences in the development of Creophilus maxillosus were studied at seven constant temperatures. We also created separate developmental models for males and females of C. maxillosus and tested them in a validation study to answer the question of whether sex-specific developmental models improve the accuracy of insect age estimates. Results demonstrate that males of C. maxillosus took significantly longer to develop than females. The sex-specific and general models for total immature development had the same optimal temperature range and similar developmental thresholds but different thermal constants K, which was largest in the male-specific model and smallest in the female-specific model. Despite these differences, the validation study revealed only minimal and statistically nonsignificant differences in the accuracy of age estimates between the sex-specific and general thermal summation models. This finding indicates that, in spite of statistically significant differences in the duration of immature development between females and males of C. maxillosus, there is no increase in the accuracy of insect age estimates when using sex-specific thermal summation models compared with the general model. Accordingly, this study does not support the use of sex-specific developmental data for the estimation of insect age in forensic entomology.
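The thermal summation model referred to here reduces to accumulating degree-days above the developmental threshold until the thermal constant K is reached (at constant temperature T, K = D × (T − T_base)). A minimal sketch follows; the constants used are hypothetical, not the paper's estimates.

```python
def age_from_thermal_summation(daily_means_c, k, t_base):
    """Thermal summation model: development completes when accumulated
    degree-days above the developmental threshold t_base reach the thermal
    constant k. Returns age in days at completion, or None if not reached."""
    accumulated = 0.0
    for day, t in enumerate(daily_means_c, start=1):
        accumulated += max(0.0, t - t_base)
        if accumulated >= k:
            return day
    return None

# Hypothetical constants for illustration only; the paper reports separate
# K values for the male-specific, female-specific and general models.
age = age_from_thermal_summation([16, 18, 20, 22, 22, 21] * 20,
                                 k=350.0, t_base=9.0)
print(age)
```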
NASA Astrophysics Data System (ADS)
Katzensteiner, H.; Bell, R.; Petschko, H.; Glade, T.
2012-04-01
The prediction and forecast of widespread landsliding for a given triggering event is an open research question. Numerous studies have tried to link spatial rainfall and landslide distributions. This study focuses on analysing the relationship between intensive precipitation and rainfall-triggered shallow landslides in the year 2009 in Lower Austria. Landslide distributions were obtained from the building ground register maintained by the Geological Survey of Lower Austria, which contains detailed information on landslides registered through damage reports. Spatially distributed rainfall estimates were extracted from the INCA (Integrated Nowcasting through Comprehensive Analysis) precipitation analysis, a combination of station data interpolation and radar data at a spatial resolution of 1 km developed by the Central Institute for Meteorology and Geodynamics (ZAMG), Vienna, Austria. The importance of the data source is shown by comparing rainfall data based on reference gauges, spatial interpolation and the INCA analysis for a certain storm period. INCA precipitation data can detect precipitating cells that do not hit a station but might trigger a landslide, which is an advantage over using reference stations for the definition of rainfall thresholds. Empirical thresholds at regional scale were determined based on rainfall intensity and duration in the year 2009 and on the landslide information. These thresholds depend on the criteria that separate landslide-triggering from non-triggering precipitation events, and different approaches for defining thresholds alter the shape of the threshold as well. A preliminary threshold I = 8.8263 · D^(-0.672) for extreme summer rainfall events in Lower Austria was defined. A verification of the threshold with similar events from other years, as well as further analyses based on a larger landslide database, are in progress.
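For readers who want to apply such an intensity-duration curve, a minimal sketch is given below; the units (intensity in mm/h, duration in h) are an assumption, as the abstract does not state them.

```python
def exceeds_id_threshold(intensity_mm_h, duration_h):
    """Check a rainfall event against the intensity-duration threshold
    reported above, I = 8.8263 * D**(-0.672). Units assumed: mm/h and h."""
    return intensity_mm_h >= 8.8263 * duration_h ** -0.672

# Example: a 6-hour event at 4 mm/h lies above the curve,
# since 8.8263 * 6**(-0.672) is roughly 2.65 mm/h.
print(exceeds_id_threshold(4.0, 6.0))  # True
```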
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum-gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from the manual segmentation results produced by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called the dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward the smaller region in the histogram. We therefore designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of the liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those measured manually by physicians. Our preliminary results show that the DT level set is effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
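To make the shell-thresholding idea concrete, here is a minimal sketch (our construction, not the authors' implementation) of one update step: it gathers intensities inside a thick shell around the current front, recomputes an optimal histogram threshold there, and flags convergence when the object/background volume ratio in the shell approaches one. The Otsu criterion stands in for whatever optimal-threshold rule the original method uses, and `phi` is assumed to be a signed distance function for the front.

```python
# Sketch of one dynamic-thresholding step on a 3-D image, assuming a signed
# distance function `phi` for the level-set front (negative inside the object).
import numpy as np
from skimage.filters import threshold_otsu  # stand-in optimal-threshold rule

def shell_threshold(image, phi, shell_width=3.0):
    """Recompute the driving threshold from the histogram of a thick shell
    around the front, and report the object/background volume ratio there."""
    shell = np.abs(phi) < shell_width          # voxels near the front
    values = image[shell]
    t = threshold_otsu(values)                 # optimal threshold in the shell
    inside = values > t                        # object voxels in the shell
    ratio = inside.sum() / max((~inside).sum(), 1)
    converged = abs(ratio - 1.0) < 0.05        # shell stops when ratio ~ 1
    return t, ratio, converged
```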
Denoising forced-choice detection data.
García-Pérez, Miguel A
2010-02-01
Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
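The benefit of recording guesses can be illustrated with a small simulation; this sketch (our own, not the paper's code) compares the precision of detection-probability estimates at one stimulus level with and without knowledge of which responses were guesses, assuming full compliance with the 'guess' key.

```python
import numpy as np
rng = np.random.default_rng(0)

def estimate_sds(p_detect=0.5, n_trials=50, n_reps=2000):
    """Compare the spread of p(detect) estimates from conventional vs
    denoised 2AFC data at a single stimulus level."""
    conv, deno = [], []
    for _ in range(n_reps):
        detected = rng.random(n_trials) < p_detect      # stimulus seen -> correct
        lucky = rng.random(n_trials) < 0.5              # coin flip on guess trials
        correct = detected | (~detected & lucky)
        conv.append(2 * correct.mean() - 1)             # invert pc = (1 + p)/2
        deno.append(detected.mean())                    # guesses identified exactly
    return np.std(conv), np.std(deno)

sd_conv, sd_deno = estimate_sds()
print(f"SD of p estimates: conventional {sd_conv:.3f}, denoised {sd_deno:.3f}")
```

The conventional estimate must invert the guessing correction (percent correct is (1 + p)/2), which roughly triples its variance at mid-range p; the denoised estimate uses the detection trials directly.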
Temporal resolution in children.
Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D
1989-06-01
The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
Giblin, Shawn M.; Houser, Jeffrey N.; Sullivan, John F.; Langrehr, H.A.; Rogala, James T.; Campbell, Benjamin D.
2014-01-01
Duckweed and other free-floating plants (FFP) can form dense surface mats that affect ecosystem condition and processes, and can impair public use of aquatic resources. FFP obtain their nutrients from the water column, and the formation of dense FFP mats can be a consequence and indicator of river eutrophication. We conducted two complementary surveys of diverse aquatic areas of the Upper Mississippi River as an in situ approach for estimating thresholds in the response of FFP abundance to nutrient concentration and physical conditions in a large, floodplain river. Local regression analysis was used to estimate thresholds in the relations between FFP abundance and phosphorus (P) concentration (0.167 mg l−1), nitrogen (N) concentration (0.808 mg l−1), water velocity (0.095 m s−1), and aquatic macrophyte abundance (65% cover). FFP tissue concentrations suggested P limitation was more likely in spring, N limitation was more likely in late summer, and N limitation was most likely in backwaters with minimal hydraulic connection to the channel. The thresholds estimated here, along with observed patterns in nutrient limitation, provide river scientists and managers with criteria to consider when attempting to modify FFP abundance in off-channel areas of large river systems.
Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology.
Insel, Philip S; Mattsson, Niklas; Mackin, R Scott; Schöll, Michael; Nosheny, Rachel L; Tosun, Duygu; Donohue, Michael C; Aisen, Paul S; Jagust, William J; Weiner, Michael W
2016-05-17
To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. © 2016 American Academy of Neurology.
Modeling of digital mammograms using bicubic spline functions and additive noise
NASA Astrophysics Data System (ADS)
Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne
1998-09-01
The purpose of our work is microcalcification detection on digital mammograms. To this end, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and additive noise or texture. We also introduce a robust estimation method to overcome the bias introduced by the microcalcifications. After the estimation, we treat the values of the subtraction image as noise. If the noise is uncorrelated, we fit its probability distribution with Pearson's system of densities, which allows us to threshold the subtraction images accurately and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least squares method. We then consider non-overlapping windows on the residue image; in each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is considerably more time consuming than the automatic threshold assuming uncorrelated noise and does not lead to significantly better results. In conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on Pearson's system performs quite well on most of our images.
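A minimal sketch of the trend-plus-noise decomposition follows. It fits an ordinary (non-robust) bicubic smoothing spline with SciPy and applies a simple mean-plus-3-sigma cutoff to the residuals, so the Pearson-system fit and the robust estimator of the paper are deliberately simplified away; the grid step and cutoff are illustrative choices.

```python
# Sketch: remove a smooth bicubic-spline trend from an image and threshold the
# residuals to flag bright outliers (candidate microcalcifications).
import numpy as np
from scipy.interpolate import RectBivariateSpline

def detect_bright_outliers(image, grid_step=8, k_sigma=3.0):
    rows = np.arange(0, image.shape[0], grid_step)
    cols = np.arange(0, image.shape[1], grid_step)
    coarse = image[np.ix_(rows, cols)]                 # subsample for the trend fit
    spline = RectBivariateSpline(rows, cols, coarse, kx=3, ky=3, s=coarse.size)
    trend = spline(np.arange(image.shape[0]), np.arange(image.shape[1]))
    residual = image - trend                           # "noise + texture" image
    cutoff = residual.mean() + k_sigma * residual.std()
    return residual > cutoff                           # boolean detection mask
```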
Coevolution of Glauber-like Ising dynamics and topology
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Fortunato, Santo; Castellano, Claudio
2009-11-01
We study the coevolution of a generalized Glauber dynamics for Ising spins with tunable threshold and of the graph topology where the dynamics takes place. This simple coevolution dynamics generates a rich phase diagram in the space of the two parameters of the model, the threshold and the rewiring probability. The diagram displays phase transitions of different types: spin ordering, percolation, and connectedness. At variance with traditional coevolution models, in which all spins of each connected component of the graph have equal value in the stationary state, we find that, for suitable choices of the parameters, the system may converge to a state in which spins of opposite sign coexist in the same component organized in compact clusters of like-signed spins. Mean field calculations enable one to estimate some features of the phase diagram.
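As a rough sketch of this class of models (not the authors' exact update rules), the following simulates threshold-controlled Glauber-like spin flips coexisting with random rewiring on a graph; the threshold q and the rewiring probability p are the two model parameters mentioned above, while every specific of the update scheme here is an assumption for illustration.

```python
import random
import networkx as nx

def coevolve(n=200, k=4, q=0.5, p=0.1, steps=5000, seed=1):
    """Threshold Glauber-like spin dynamics with rewiring on a random regular graph."""
    random.seed(seed)
    g = nx.random_regular_graph(k, n, seed=seed)
    spin = {v: random.choice((-1, 1)) for v in g}
    nodes = list(g)
    for _ in range(steps):
        v = random.choice(nodes)
        nbrs = list(g.neighbors(v))
        if not nbrs:
            continue
        disagree = sum(spin[u] != spin[v] for u in nbrs) / len(nbrs)
        if disagree > q:            # threshold rule: flip if too many neighbors disagree
            spin[v] = -spin[v]
        elif random.random() < p:   # otherwise, maybe rewire one discordant edge
            bad = [u for u in nbrs if spin[u] != spin[v]]
            w = random.choice(nodes)
            if bad and w != v and not g.has_edge(v, w):
                g.remove_edge(v, random.choice(bad))
                g.add_edge(v, w)
    return g, spin

g, spin = coevolve()
print("magnetization:", sum(spin.values()) / len(spin))
```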
Threshold concentration in the nonlinear absorbance law.
Tolbin, Alexander Yu; Pushkarev, Victor E; Tomilova, Larisa G; Zefirov, Nikolay S
2017-05-24
A new nonlinear relationship between the absorption coefficient and the concentration was proposed, allowing calculation of the threshold concentration above which deviation from the Beer-Lambert law occurs. The nonlinear model was successfully tested on a stable J-type dimeric phthalocyanine ligand in solvents of different polarity. It was shown that the deviation from linearity is connected with a specific association of the macrocyclic molecules, which, in the case of non-polar solvents, leads to the formation of H-aggregates composed of J-type dimeric molecules. The aggregation number was estimated to be less than 1.5, which allowed us to conduct a series of analytical experiments over a wide range of concentrations (1 × 10−6 to 5 × 10−4 mol L−1).
Flood frequency analysis - the challenge of using historical data
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn
2015-04-01
Estimates of high flood quantiles are needed for many applications: e.g. dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. Flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since the record length is short compared to the desired flood quantile, the estimated flood magnitudes rest on a high degree of extrapolation; the longest time series available in Norway are around 120 years, so any estimate of a 1000-year flood requires extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods that occurred before streamflow was systematically gauged. Such information can be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of using historical flood data for at-site flood frequency estimation. The historical floods were included in two ways, by assuming that: (1) the size of all floods above a high threshold within a time interval is known; or (2) the number of floods above a high threshold within a time interval is known. We used a Bayesian model formulation, with MCMC for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e. both sampling and parameter uncertainty are accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway, where the largest flood in the systematic record was observed in 2014. From written documentation and flood marks we had information on three severe floods in the 18th century, which were likely to have exceeded the 2014 flood. We evaluated the added value in two ways. First we used the 123-year streamflow series and investigated the effect of having several shorter series that could be supplemented with a limited number of known large flood events. Then we used the three historical floods from the 18th century combined with the whole of, and subsets of, the 123 years of systematic observations. In the latter case several challenges were identified: (i) the difficulty of converting water levels to river streamflows owing to man-made changes in the river profile; and (ii) the stationarity of the data may be questioned, since the three largest historical floods occurred during the "Little Ice Age" under climatic conditions different from today's.
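A common way to encode assumption (2) is a censored likelihood: during a historical period of h years only the count k of floods above a perception threshold is known. The sketch below fits a Gumbel distribution to systematic annual maxima plus such a binomial-censored historical term by maximum likelihood, a simplification of the paper's Bayesian MCMC formulation; all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)
systematic = gumbel_r.rvs(loc=100, scale=30, size=123, random_state=rng)
h, k, x_perc = 150, 3, 250    # historical window: 3 exceedances of x_perc in 150 yr

def negloglik(theta):
    loc, scale = theta
    if scale <= 0:
        return np.inf
    ll = gumbel_r.logpdf(systematic, loc, scale).sum()
    p = gumbel_r.sf(x_perc, loc, scale)          # annual exceedance probability
    ll += k * np.log(p) + (h - k) * np.log1p(-p) # binomial term for historical counts
    return -ll

fit = minimize(negloglik, x0=[systematic.mean(), systematic.std()],
               method="Nelder-Mead")
loc, scale = fit.x
q1000 = gumbel_r.ppf(1 - 1 / 1000, loc, scale)   # 1000-year flood estimate
print(f"loc={loc:.1f} scale={scale:.1f} Q1000={q1000:.1f}")
```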
Heil, Peter; Matysiak, Artur; Neubauer, Heinrich
2017-09-01
Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
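The decision rule of the proposed model is easy to state in code. The sketch below simulates one 3I-3AFC trial under the stated assumptions (Poisson events at a spontaneous rate plus a stimulus-driven rate proportional to the amplitude envelope raised to an exponent of about 3, and a count taken over an evaluation interval matching the stimulus); the rates, envelope, and durations are illustrative numbers, not fitted values.

```python
import numpy as np
rng = np.random.default_rng(7)

def trial(envelope, dt=1e-3, r0=20.0, c=5.0, exponent=3.0):
    """One 3I-3AFC trial: count Poisson events in three evaluation intervals,
    only one of which contains the stimulus; choose the max (ties at random)."""
    drive = c * envelope ** exponent              # stimulus-driven rate (1/s)
    rates = [r0 + drive, np.full_like(envelope, r0), np.full_like(envelope, r0)]
    counts = [rng.poisson(r * dt).sum() for r in rates]
    best = max(counts)
    winners = [i for i, n in enumerate(counts) if n == best]
    return rng.choice(winners) == 0               # correct if stimulus interval chosen

t = np.arange(0, 0.3, 1e-3)
envelope = np.sin(np.pi * t / 0.3)                # a smooth 300-ms tone-burst envelope
pc = np.mean([trial(envelope) for _ in range(2000)])
print(f"proportion correct ~ {pc:.2f} (chance = 1/3)")
```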
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
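A minimal version of the locally weighted score idea can be written with scikit-learn: fit, weight each observation by a kernel centered at the classification threshold on the fitted probabilities, and refit. The Gaussian kernel, the bandwidth, the fixed number of passes, and the synthetic data are illustrative simplifications of the paper's procedure (which selects the bandwidth by cross-validating a hybrid loss).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def locally_weighted_logistic(X, y, threshold=0.3, bandwidth=0.1, passes=3):
    """Logistic fit that concentrates weight near the decision threshold."""
    model = LogisticRegression()
    w = np.ones(len(y))
    for _ in range(passes):
        model.fit(X, y, sample_weight=w)
        p = model.predict_proba(X)[:, 1]
        # kernel-like weight centered at the threshold on fitted probabilities
        w = np.exp(-0.5 * ((p - threshold) / bandwidth) ** 2)
        w = np.clip(w, 1e-6, None)            # keep weights strictly positive
    return model

# Synthetic illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (rng.random(500) < 1 / (1 + np.exp(-(X[:, 0] - 1)))).astype(int)
clf = locally_weighted_logistic(X, y)
print("high-risk fraction:", (clf.predict_proba(X)[:, 1] > 0.3).mean())
```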
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, Yolanda D., E-mail: ydtseng@partners.org; Krishnan, Monica S.; Sullivan, Adam J.
2013-11-01
Purpose: We surveyed how radiation oncologists think about and incorporate a palliative cancer patient’s life expectancy (LE) into their treatment recommendations. Methods and Materials: A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model. Results: Among 92 respondents (81%), the majority were male (62%), from an academic practice (75%), and an attending physician (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%-97%) had LE thresholds that would alter treatment recommendations. Overall, physicians’ LE estimates were 22% accurate with 67% over the range predicted by the prognostic model. Conclusions: Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia
2017-04-01
Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and power-law regression; ii) to define lower-limit and upper-limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess the threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides that caused fatalities, injuries, missing people, and evacuated and homeless people in Portugal from 1865 to 2010. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the DISASTER database was recently updated to also include landslides that did not cause any human damage, which were likewise considered for this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of the rainfall data, with records starting in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity and duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity-duration) with the highest return period was considered the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combination had a return period above 3 years were included; this criterion reduces the likelihood of including events whose triggering factor was other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and power-law regression. Considering that this threshold allows the existence of false negatives (i.e. events below the threshold), lower-limit and upper-limit rainfall thresholds were also identified. These limits were defined empirically as the quantity-duration combinations below which no landslides were recorded (lower limit) and above which only landslides were recorded, without any false positives (upper limit). The zone between the lower-limit and upper-limit thresholds was analysed using a probabilistic approach, quantifying the uncertainty of each critical rainfall condition in triggering landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
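The return-period step can be sketched as follows: for each duration, fit a Gumbel distribution to the annual maxima of cumulative rainfall for that duration, then report the return period of the event's antecedent rainfall; the critical combination is the duration with the largest return period. Synthetic data and a plain method-of-moments Gumbel fit are used here for illustration only.

```python
import numpy as np

EULER = 0.5772156649

def gumbel_return_period(annual_maxima, value):
    """Return period (years) of `value` under a moment-fitted Gumbel."""
    s = np.std(annual_maxima) * np.sqrt(6) / np.pi          # scale
    m = np.mean(annual_maxima) - EULER * s                  # location
    p_exceed = 1 - np.exp(-np.exp(-(value - m) / s))
    return 1 / p_exceed

rng = np.random.default_rng(3)
durations = [1, 5, 15, 30, 60, 90]                          # days
# Hypothetical: 100 years of annual maxima of cumulative rainfall per duration
maxima = {d: rng.gumbel(20 * d**0.5, 8 * d**0.3, size=100) for d in durations}
event = {1: 55.0, 5: 130.0, 15: 180.0, 30: 220.0, 60: 260.0, 90: 300.0}

tr = {d: gumbel_return_period(maxima[d], event[d]) for d in durations}
critical = max(tr, key=tr.get)
print({d: round(t, 1) for d, t in tr.items()})
print("critical duration (days):", critical)
```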
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity
Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.
2014-01-01
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired for older compared to younger listeners with similar hearing and/or amounts of hearing loss. PMID:25009458
Blackwood, Christopher B; Hudleston, Deborah; Zak, Donald R; Buyer, Jeffrey S
2007-08-01
Ecological diversity indices are frequently applied to molecular profiling methods, such as terminal restriction fragment length polymorphism (T-RFLP), in order to compare diversity among microbial communities. We performed simulations to determine whether diversity indices calculated from T-RFLP profiles could reflect the true diversity of the underlying communities despite potential analytical artifacts. These include multiple taxa generating the same terminal restriction fragment (TRF) and rare TRFs being excluded by a relative abundance (fluorescence) threshold. True community diversity was simulated using the lognormal species abundance distribution. Simulated T-RFLP profiles were generated by assigning each species a TRF size based on an empirical or modeled TRF size distribution. With a typical threshold (1%), the only consistently useful relationship was between Smith and Wilson evenness applied to T-RFLP data (TRF-E(var)) and true Shannon diversity (H'), with correlations between 0.71 and 0.81. TRF-H' and true H' were well correlated in the simulations using the lowest number of species, but this correlation declined substantially in simulations using greater numbers of species, to the point where TRF-H' cannot be considered a useful statistic. The relationships between TRF diversity indices and true indices were sensitive to the relative abundance threshold, with greatly improved correlations observed using a 0.1% threshold, which was investigated for comparative purposes but is not possible to consistently achieve with current technology. In general, the use of diversity indices on T-RFLP data provides inaccurate estimates of true diversity in microbial communities (with the possible exception of TRF-E(var)). We suggest that, where significant differences in T-RFLP diversity indices were found in previous work, these should be reinterpreted as a reflection of differences in community composition rather than a true difference in community diversity.
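For reference, the two indices compared above can be computed from a relative-abundance profile in a few lines. This sketch is our own, with an illustrative 1% fluorescence threshold; it drops TRFs below the threshold before computing Shannon H' and Smith-Wilson Evar.

```python
import numpy as np

def apply_threshold(rel_abund, cutoff=0.01):
    """Drop TRFs below the relative-abundance cutoff and renormalize."""
    kept = rel_abund[rel_abund >= cutoff]
    return kept / kept.sum()

def shannon_h(p):
    return -np.sum(p * np.log(p))

def smith_wilson_evar(p):
    """Evar = 1 - (2/pi) * arctan(variance of ln abundances)."""
    return 1 - (2 / np.pi) * np.arctan(np.var(np.log(p)))

profile = np.array([0.40, 0.25, 0.15, 0.10, 0.05, 0.03, 0.015, 0.005])
p = apply_threshold(profile, cutoff=0.01)
print(f"TRF-H' = {shannon_h(p):.3f}, TRF-Evar = {smith_wilson_evar(p):.3f}")
```

Note that the variance of log abundances is unchanged by rescaling, so using proportions rather than raw peak areas does not affect Evar.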
Paungmali, Aatit; Joseph, Leonard H; Sitilertpisan, Patraporn; Pirunsan, Ubon; Uthaikhup, Sureeporn
2017-11-01
Lumbopelvic stabilization training (LPST) may provide therapeutic benefits on pain modulation in chronic nonspecific low back pain conditions. This study aimed to examine the effects of LPST on pain threshold and pain intensity in comparison with the passive automated cycling intervention and control intervention among patients with chronic nonspecific low back pain. A within-subject, repeated-measures, crossover randomized controlled design was conducted among 25 participants (7 males and 18 females) with chronic nonspecific low back pain. All the participants received 3 different types of experimental interventions, which included LPST, the passive automated cycling intervention, and the control intervention randomly, with 48 hours between the sessions. The pressure pain threshold (PPT), hot-cold pain threshold, and pain intensity were estimated before and after the interventions. Repeated-measures analysis of variance showed that LPST provided therapeutic effects as it improved the PPT beyond the placebo and control interventions (P < 0.01). The pain intensity under the LPST condition was significantly better than that under the passive automated cycling intervention and controlled intervention (P < 0.001). Heat pain threshold under the LPST condition also showed a significant trend of improvement beyond the control (P < 0.05), but no significant effects on cold pain threshold were evident. Lumbopelvic stabilization training may provide therapeutic effects by inducing pain modulation through an improvement in the pain threshold and reduction in pain intensity. LPST may be considered as part of the management programs for treatment of chronic low back pain. © 2017 World Institute of Pain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krzywinski, Jacek; Conley, Raymond; Moeller, Stefan
2018-01-01
The Linac Coherent Light Source is upgrading its machine to high repetition rate and extended ranges. Novel coatings with limited surface oxidation that are able to work at the carbon edge are required. In addition, high-resolution soft X-ray monochromators become necessary. One of the big challenges is to design the mirror geometry and the grating profile to have high reflectivity (or efficiency) and at the same time survive the high peak energy of the free-electron laser pulses. For these reasons the experimental damage threshold, at 900 eV, of two platinum-coated gratings with different blaze angles has been investigated. The gratings were tested at 1° grazing incidence. To validate a model in which the damage threshold of a blazed grating is estimated by calculating the damage threshold of a mirror whose angle of incidence equals the angle of incidence on the grating plus the blaze angle, tests on Pt-coated substrates were also performed. The results confirmed the prediction. Uncoated silicon, platinum and SiB3 (both deposited on a silicon substrate) were also investigated. In general, the measured damage threshold at grazing incidence is higher than that calculated under the assumption that there is no energy transport away from the volume where the photons are absorbed. However, it was found that, for the SiB3 coating, the grazing-incidence condition did not increase the damage threshold, indicating that energy transport away from the extinction volume is negligible.
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
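The three exposure summaries compared in this study are simple to compute from a sampled concentration-time profile; the sketch below shows non-compartmental versions (trapezoidal AUC, observed Cmax, and time above threshold by linear interpolation) on made-up data, as a point of reference for the model-based estimates discussed above.

```python
import numpy as np

def nca_metrics(t, c, threshold):
    """AUC (linear trapezoid), Cmax, and time above `threshold` (TAT)."""
    auc = float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2))
    cmax = float(c.max())
    # TAT via fine linear interpolation between sampling times
    tf = np.linspace(t[0], t[-1], 10001)
    cf = np.interp(tf, t, c)
    tat = float((cf > threshold).mean() * (t[-1] - t[0]))
    return auc, cmax, tat

t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24.0])          # h (hypothetical)
c = np.array([0, 4.2, 6.8, 5.9, 3.7, 1.6, 0.7, 0.1])  # mg/L (hypothetical)
print(nca_metrics(t, c, threshold=1.0))
```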
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
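The core of the MEA idea, shrinking individual MLE estimates toward a regional effect via a hierarchical model, can be sketched with a simple empirical-Bayes normal-normal update. The method-of-moments estimate of the between-variant variance and the synthetic inputs are our illustrative assumptions, not the authors' exact hierarchical model.

```python
import numpy as np

def shrink_to_region(beta, se):
    """Empirical-Bayes normal-normal shrinkage of per-variant estimates
    toward the regional mean effect."""
    mu = np.average(beta, weights=1 / se**2)            # regional effect
    # method-of-moments between-variant variance (floored at zero)
    tau2 = max(np.var(beta) - np.mean(se**2), 0.0)
    w = tau2 / (tau2 + se**2)                           # shrinkage weights
    return mu + w * (beta - mu)

beta = np.array([0.9, 0.1, -0.2, 0.05, 0.3])            # MLE effect estimates
se = np.array([0.4, 0.2, 0.25, 0.2, 0.35])              # their standard errors
print(shrink_to_region(beta, se))
```

Extreme estimates with large standard errors are pulled hardest toward the regional mean, which is exactly the behavior that counteracts winner's-curse overestimation of 'top' associations.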
Derivation of critical rainfall thresholds for landslide in Sicily
NASA Astrophysics Data System (ADS)
Caracciolo, Domenico; Arnone, Elisa; Noto, Leonardo V.
2015-04-01
Rainfall is the primary trigger of shallow landslides that cause fatalities, damage to property and economic losses in many areas of the world. For this reason, determining the rainfall amount and intensity responsible for landslide occurrence is important, and may contribute to mitigating the related risk and saving lives. Efforts have been made in different countries to investigate triggering conditions in order to define landslide-triggering rainfall thresholds. Such thresholds are generally described by a power-law relationship between cumulated event rainfall (or rainfall intensity) and duration, whose parameters are estimated empirically from the analysis of historical rainfall events that triggered landslides. The aim of this paper is the derivation of critical rainfall thresholds for landslide occurrence in Sicily, southern Italy, focusing particularly on the role of antecedent wet conditions. The creation of an appropriate landslide-rainfall database likely represents one of the main efforts in this type of analysis. For this work, historical landslide events that occurred in Sicily from 1919 to 2001 were selected from the archive of the Sistema Informativo sulle Catastrofi Idrogeologiche, developed under the project Aree Vulnerabili Italiane. The corresponding triggering precipitations were screened from the rain-gauge network in Sicily, maintained by the Osservatorio delle Acque - Agenzia Regionale per i Rifiuti e le Acque. In particular, a detailed analysis was carried out to identify and reconstruct the hourly rainfall events that caused the selected landslides. A bootstrap statistical technique was used to determine the uncertainties associated with the threshold parameters. Rainfall thresholds at different exceedance probability levels, from 1% to 10%, were defined in terms of cumulated event rainfall, E, and rainfall duration, D. The role of rainfall prior to the damaging events was taken into account by including in the analysis the rainfall fallen 6, 15 and 30 days before each landslide. The antecedent rainfall turned out to be particularly important in triggering landslides. The rainfall thresholds obtained for Sicily were compared with the regional curves proposed by various authors, confirming good agreement.
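Bootstrapping the threshold parameters can be sketched as follows: resample the (D, E) pairs with replacement, refit the power law log E = log alpha + beta log D each time, and read percentile intervals off the resampled parameters. The data here are synthetic and the plain log-log least-squares fit is an illustrative stand-in for the study's fitting procedure.

```python
import numpy as np
rng = np.random.default_rng(11)

# Synthetic triggering events: duration D (h) and cumulated rainfall E (mm)
D = rng.uniform(1, 72, size=120)
E = 10 * D**0.4 * np.exp(rng.normal(0, 0.3, size=120))

def fit_power_law(D, E):
    b, log_a = np.polyfit(np.log(D), np.log(E), 1)
    return np.exp(log_a), b                     # alpha, beta in E = alpha * D**beta

boot = np.array([fit_power_law(D[idx], E[idx])
                 for idx in rng.integers(0, len(D), size=(1000, len(D)))])
alpha_ci = np.percentile(boot[:, 0], [5, 95])
beta_ci = np.percentile(boot[:, 1], [5, 95])
print(f"alpha 90% CI: {alpha_ci.round(2)}, beta 90% CI: {beta_ci.round(3)}")
```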
Tusell, L; David, I; Bodin, L; Legarra, A; Rafel, O; López-Bejar, M; Piles, M
2011-12-01
Animals under environmental thermal stress conditions have reduced fertility due to impairment of some mechanisms involved in their reproductive performance that are different in males and females. As a consequence, the most sensitive periods of time and the magnitude of effect of temperature on fertility can differ between sexes. The objective of this study was to estimate separately the effect of temperature in different periods around the insemination time on male and on female fertility by using the product threshold model. This model assumes that an observed reproduction outcome is the result of the product of 2 unobserved variables corresponding to the unobserved fertilities of the 2 individuals involved in the mating. A total of 7,625 AI records from rabbits belonging to a line selected for growth rate and indoor daily temperature records were used. The average maximum daily temperature and the proportion of days in which the maximum temperature was greater than 25°C were used as temperature descriptors. These descriptors were calculated for several periods around the day of AI. In the case of males, 4 periods of time covered different stages of the spermatogenesis, the transit through the epididymus of the sperm, and the day of AI. For females, 5 periods of time covered the phases of preovulatory follicular maturation including day of AI and ovulation, fertilization and peri-implantational stage of the embryos, embryonic and early fetal periods of gestation, and finally, late gestation until birth. The effect of the different temperature descriptors was estimated in the corresponding male and female liabilities in a set of threshold product models. The temperature of the day of AI seems to be the most relevant temperature descriptor affecting male fertility because greater temperature records on the day of AI caused a decrease in male fertility (-6% in male fertility rate with respect to thermoneutrality). Departures from the thermal zone in temperature descriptors covering several periods before AI until early gestation had a negative effect on female fertility, with the pre- and peri-implantational period of the embryos being especially sensitive (from -5 to -6% in female fertility rate with respect to thermoneutrality). The latest period of gestation was unaffected by the temperature. Overall, magnitude and persistency of the temperatures reached in the conditions of this study do not seem to be great enough to have a large effect on male and female rabbit fertility.
Wang, Zhi-Tao; W L Au, Whitlow; Rendell, Luke; Wang, Ke-Xiong; Wu, Hai-Ping; Wu, Yu-Ping; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding
2016-01-01
Background. Knowledge of species-specific vocalization characteristics and their associated active communication space, the effective range over which a communication signal can be detected by a conspecific, is critical for understanding the impacts of underwater acoustic pollution, as well as other threats. Methods. We used a two-dimensional cross-shaped hydrophone array system to record the whistles of free-ranging Indo-Pacific humpback dolphins (Sousa chinensis) in shallow-water environments of the Pearl River Estuary (PRE) and Beibu Gulf (BG), China. Using hyperbolic position fixing, which exploits time differences of arrival of a signal between pairs of hydrophone receivers, we obtained source location estimates for whistles with good signal-to-noise ratio (SNR ≥ 10 dB) that were not polluted by other sounds, and back-calculated their apparent source levels (ASL). Combining these with the masking levels (including simultaneous noise levels, the masking tonal threshold, and the Sousa auditory threshold) and custom-made site-specific sound propagation models, we further estimated their active communication space (ACS). Results. Humpback dolphins produced whistles with average root-mean-square ASL of 138.5 ± 6.8 (mean ± standard deviation) and 137.2 ± 7.0 dB re 1 µPa in PRE (N = 33) and BG (N = 209), respectively. We found statistically significant differences in ASLs among different whistle contour types. The mean and maximum ACS of whistles were estimated to be 14.7 ± 2.6 (median ± quartile deviation) and 17.1 ± 3.5 m in PRE, and 34.2 ± 9.5 and 43.5 ± 12.2 m in BG. Using just the auditory threshold as the masking level produced mean and maximum ACSat of 24.3 ± 4.8 and 35.7 ± 4.6 m for PRE, and 60.7 ± 18.1 and 74.3 ± 25.3 m for BG. The small ACSs were due to the high ambient noise level. Significant differences in ACSs were also observed among different whistle contour types. Discussion. Besides shedding some light on appropriate noise exposure levels and informing the regulation of underwater acoustic pollution, these baseline data can also be used to aid passive acoustic monitoring of dolphin populations, to define the boundaries of separate groups in a more biologically meaningful way during field surveys, and to guide appropriate approach distances for local dolphin-watching boats and research boats during focal group follows.
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at the thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening out non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low-amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body-wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the February 12, 2013 announced nuclear test.
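A minimal multichannel correlation detector can be written as a stacked normalized cross-correlation: correlate the template with each channel, average the correlation traces across channels, and declare a detection where the stack exceeds a threshold. Everything below (window lengths, the 0.6 threshold, the synthetic waveforms) is illustrative, not the authors' processing chain.

```python
import numpy as np

def normxcorr(template, data):
    """Sliding normalized cross-correlation of `template` against `data`."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * np.sqrt(n))
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        sd = w.std()
        out[i] = 0.0 if sd == 0 else np.dot(t, (w - w.mean()) / (sd * np.sqrt(n)))
    return out

def detect(templates, channels, threshold=0.6):
    """Average per-channel correlograms and threshold the stack."""
    stack = np.mean([normxcorr(tp, ch) for tp, ch in zip(templates, channels)],
                    axis=0)
    return np.flatnonzero(stack > threshold), stack

# Synthetic example: the template buried in noise on three channels at sample 500
rng = np.random.default_rng(5)
tmpl = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
chans = [0.1 * rng.normal(size=2000) for _ in range(3)]
for ch in chans:
    ch[500:600] += tmpl
hits, stack = detect([tmpl] * 3, chans)
print("detections near sample:", hits[:5], "... peak corr:", stack.max().round(2))
```

Stacking across channels suppresses incoherent noise, which is what lets a correlation detector reach lower magnitudes than single-channel or power-beam screening.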
A simple method to estimate threshold friction velocity of wind erosion in the field
USDA-ARS's Scientific Manuscript database
Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...
NASA Astrophysics Data System (ADS)
Nogueira, P.; Zankl, M.; Schlattl, H.; Vaz, P.
2011-11-01
The radiation-induced posterior subcapsular cataract has long been generally accepted to be a deterministic effect that does not occur at doses below a threshold of at least 2 Gy. Recent epidemiological studies indicate that the threshold for cataract induction may be much lower or that there may be no threshold at all. A thorough study of this subject requires more accurate dose estimates for the eye lens than those available in ICRP Publication 74. Eye lens absorbed dose per unit fluence conversion coefficients for electron irradiation were calculated using a geometrical model of the eye that takes into account different cell populations of the lens epithelium, together with the MCNPX Monte Carlo radiation transport code package. For the cell population most sensitive to ionizing radiation, the germinative cells, absorbed dose per unit fluence conversion coefficients were determined that are up to a factor of 4.8 higher than the mean eye lens absorbed dose conversion coefficients for electron energies below 2 MeV. Comparison of the results with previously published values for a slightly different eye model showed generally good agreement for all electron energies. Finally, the influence of individual anatomical variability was quantified by positioning the lens at various depths below the cornea. A depth difference of 2 mm between the shallowest and the deepest location of the germinative zone can lead to a difference between the resulting absorbed doses of up to nearly a factor of 5000 for an electron energy of 0.7 MeV.
Nimdet, Khachapon; Chaiyakunapruk, Nathorn; Vichansavakul, Kittaya; Ngorsuraches, Surachat
2015-01-01
A number of studies have estimated willingness to pay (WTP) per quality-adjusted life year (QALY) in patients or the general population for various diseases. However, no systematic review has summarized the relationship between WTP per QALY and the cost-effectiveness (CE) threshold based on the World Health Organization (WHO) recommendation. Our aims were to systematically review the WTP-per-QALY literature, to compare WTP per QALY with the CE threshold recommended by WHO, and to determine potential influencing factors. We searched MEDLINE, EMBASE, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Centre for Reviews and Dissemination (CRD), and EconLit from inception through 15 July 2014. To be included, studies had to estimate WTP per QALY for health-related issues using a stated-preference method. Two investigators independently reviewed each abstract, completed full-text reviews, and extracted information from included studies. We compared WTP per QALY to GDP per capita, and analyzed and summarized potential influencing factors. Out of 3,914 articles found, 14 studies were included. Most studies (92.85%) used the contingent valuation method, while only one study used discrete choice experiments. Sample sizes varied from 104 to 21,896 persons. The ratio between WTP per QALY and GDP per capita varied widely, from 0.05 to 5.40, depending on scenario outcomes (e.g., whether the intervention extended/saved life or improved quality of life), severity of the hypothetical scenarios, duration of the scenario, and source of funding. The average ratio of WTP per QALY to GDP per capita for extending or saving life (2.03) was significantly higher than the average for improving quality of life (0.59), with a mean difference of 1.43 (95% CI, 1.06 to 1.81). This systematic review provides an overview of all studies estimating WTP per QALY. The wide variation in the ratio of WTP per QALY to GDP per capita, which depended on several factors, may prompt discussions on CE threshold policy. Our work provides a foundation for defining the future direction of decision criteria for an evidence-informed decision-making system.
Lindahl, Jonas; Danell, Rickard
The aim of this study was to provide a framework for evaluating bibliometric indicators as decision support tools from a decision-making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical subfield of number theory. We investigated the effect of three different definitions of top-performance groups (top 10 %, top 25 %, and top 50 %); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career productivity has information value in all tested decision scenarios, but future performance is more predictable if the definition of the high-performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10 % decision scenario should use 7 articles, the top 25 % scenario should use 7 articles, and the top 50 % scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not, indicated that the differences are trivial for the top 25 % and top 50 % groups. However, a statistically significant difference between the methods was found for the top 10 % group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty into risk when choosing decision thresholds in bibliometrically informed decision making. The significance of our results is discussed from the point of view of science policy and management.
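As a concrete rendering of the threshold-selection step, the sketch below picks the publication-count cutoff that maximizes the Youden index (J = sensitivity + specificity − 1) on labeled data. The data layout and function name are illustrative assumptions; the study's actual ROC analysis and group definitions are richer than this.

```python
import numpy as np

def youden_optimal_threshold(counts, is_top):
    """Pick the integer publication-count cutoff that maximizes the Youden
    index J = sensitivity + specificity - 1 for predicting top-group membership."""
    counts = np.asarray(counts)
    is_top = np.asarray(is_top, dtype=bool)
    best_j, best_t = -1.0, None
    for t in range(counts.min(), counts.max() + 1):
        pred = counts >= t                      # predict "future top performer"
        tp = np.sum(pred & is_top)
        fn = np.sum(~pred & is_top)
        tn = np.sum(~pred & ~is_top)
        fp = np.sum(pred & ~is_top)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

pubs = [2, 9, 4, 12, 7, 3, 8, 1, 10, 6]
top  = [0, 1, 0, 1,  1, 0, 1, 0, 1,  0]
print(youden_optimal_threshold(pubs, top))      # (7, 1.0) on this toy data
```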
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for the triggering of landslides was first proposed by N. Caine in 1980. The most up-to-date global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous, comprehensive collection of global data. Caine's function is defined as I = α × D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. At a local scale, however, and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, antecedent precipitation must be included in the definition of thresholds to ensure optimum performance, especially when thresholds are used in early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner, by first comparing the accumulated precipitation during a specified number of days against a reference value and then applying a Caine-type threshold only when that reference value is exceeded. Other authors have calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1 × An^α2) × D^β, where I and D are defined as before; the expression in parentheses is equivalent to Caine's α parameter, α1, α2, and β are parameters estimated for the threshold, and An is the n-day cumulative antecedent rainfall. The suggested procedure for estimating the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering), yL (triggering), or uL (uncertain triggering). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively; storms flagged as uL are assigned either an nL or a yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g., 1, 4, 7, 10, 15 days) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation Ani over ni days. (4) Generate a 3D grid of values of α1, α2, and β. (5) For a given value of ni, generate confusion matrices for the N storms at each grid point and compute an evaluation-metric parameter EMP (e.g., accuracy, specificity). (6) Repeat the previous step for the whole set of ni values. (7) From the 3D grid corresponding to each ni value, find the optimum grid point EMPopt,i (the global minimum or maximum of the metric). (8) Find the optimum value of ni in the space of ni versus EMPopt,i. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2, and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood of the capital city of El Salvador on 19 September 1982, killing at least 300 people.
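A compact sketch of steps (4) and (5) for a single ni, under the assumption that accuracy is the chosen EMP: each storm is reduced to a tuple (I, D, An, triggered), and the grid point whose threshold best separates triggering from non-triggering storms is retained. Variable names and grid ranges are illustrative, not the authors' implementation.

```python
import itertools
import numpy as np

def fit_threshold(storms, a1_grid, a2_grid, beta_grid):
    """Grid-search alpha1, alpha2, beta for the threshold I = (a1 * An**a2) * D**beta,
    maximizing classification accuracy over observed storms.

    storms: list of (I, D, An, triggered) with I = mean intensity, D = duration,
            An = n-day antecedent rainfall, triggered = True if landslides occurred.
    """
    best = (-1.0, None)
    for a1, a2, beta in itertools.product(a1_grid, a2_grid, beta_grid):
        correct = 0
        for I, D, An, triggered in storms:
            above = I >= (a1 * An ** a2) * D ** beta   # storm plots above threshold
            correct += (above == triggered)            # yL above, nL below = success
        acc = correct / len(storms)
        if acc > best[0]:
            best = (acc, (a1, a2, beta))
    return best  # (accuracy, (alpha1, alpha2, beta))

storms = [(30.0, 2.0, 10.0, True), (5.0, 12.0, 2.0, False),
          (18.0, 4.0, 25.0, True), (8.0, 6.0, 1.0, False)]
print(fit_threshold(storms, np.linspace(0.5, 20.0, 40),
                    np.linspace(-1.0, 1.0, 21), np.linspace(-1.5, -0.1, 15)))
```

Steps (6) through (8) would simply wrap this search in an outer loop over the candidate antecedent windows ni and keep the window with the best score.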
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
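The aggregation step, a weighted additive model over participants' consequence estimates, can be sketched in a few lines. The alternatives, objectives, scores, and weights below are invented placeholders purely to show the mechanics, not values from the workshop.

```python
import numpy as np

# Illustrative consequence estimates: alternatives x objectives
# (e.g., environmental condition, social value, cost-effectiveness), scored 0-100.
consequences = np.array([
    [80, 60, 30],   # alternative 1: e.g., close access
    [60, 75, 55],   # alternative 2: e.g., install a boardwalk
    [40, 85, 80],   # alternative 3: e.g., signage only
    [20, 90, 95],   # alternative 4: e.g., no intervention
])
weights = np.array([0.5, 0.2, 0.3])        # objective weights, summing to 1

decision_scores = consequences @ weights   # weighted additive aggregation
best = int(np.argmax(decision_scores))
print(decision_scores, "-> choose alternative", best + 1)
```

Repeating this aggregation under each ecological scenario shows how the preferred alternative shifts as H. banksii cover declines, which is what anchors the choice of management threshold.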
Rhebergen, Koenraad S; van Esch, Thamar E M; Dreschler, Wouter A
2015-06-01
A temporal resolution test in addition to the pure-tone audiogram may be of great clinical interest because of its relevance to speech perception and its expected relevance to hearing aid fitting. Larsby and Arlinger developed an appropriate clinical test, but it uses a Békésy-tracking procedure for estimating masked thresholds in stationary and interrupted noise to assess release of masking (RoM) for temporal resolution. In the clinic, the Hughson-Westlake up-down procedure is generally used to measure pure-tone thresholds in quiet. A uniform approach would facilitate clinical application and might be appropriate for RoM measurements as well. Because there is no gold standard for measuring RoM in the clinic, the purpose of the present study was to examine the differences between a Békésy-tracking procedure and the Hughson-Westlake up-down procedure for estimating masked thresholds in stationary and interrupted noise to assess RoM. RoM was assessed in eight normal-hearing (NH) and ten hearing-impaired (HI) listeners using both methods. Results from the two methods were compared with each other and with thresholds predicted by a model, using Wilcoxon signed-rank tests and paired t tests. Some differences between the two methods were found. We used a model to quantify the results of the two measurement procedures; the results of the Hughson-Westlake procedure agreed more closely with the model than the results of the Békésy-tracking procedure. Furthermore, the Békésy-tracking procedure showed more spread in the results of the NH listeners than the Hughson-Westlake procedure. The Hughson-Westlake procedure therefore seems to be a suitable alternative for measuring RoM for temporal resolution in clinical audiological practice.
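For readers unfamiliar with the clinical staircase, here is a minimal simulation of a modified Hughson-Westlake run against a listener modeled as a logistic psychometric function. The 10-dB-down/5-dB-up rule is standard; the simplified "two ascending responses at one level" stopping criterion and all parameter values are illustrative assumptions.

```python
import math
import random

def respond(level, true_threshold, slope=2.0):
    """Simulated listener: logistic psychometric function of presentation level (dB HL)."""
    p = 1.0 / (1.0 + math.exp(-(level - true_threshold) / slope))
    return random.random() < p

def hughson_westlake(true_threshold, start=40, max_trials=60):
    """Modified Hughson-Westlake: down 10 dB after a response, up 5 dB after a
    miss; the threshold is the lowest level that collects two responses on
    ascending presentations (a simplified 2-of-3 stopping criterion)."""
    level, ascending = start, False
    hits_on_ascent = {}                      # level -> ascending responses so far
    for _ in range(max_trials):
        heard = respond(level, true_threshold)
        if ascending and heard:
            hits_on_ascent[level] = hits_on_ascent.get(level, 0) + 1
            if hits_on_ascent[level] >= 2:
                return level                 # threshold estimate
        if heard:
            level, ascending = level - 10, False
        else:
            level, ascending = level + 5, True
    return None                              # no estimate within the trial budget

print(hughson_westlake(true_threshold=32))   # typically returns 30-35 dB HL
```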
NASA Astrophysics Data System (ADS)
Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.
2015-06-01
Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond (Prunus dulcis), pistachio (Pistacia vera), and walnut (Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32 % higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.
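The study models yield against chill accumulation with Bayesian nonparametric function estimation; the sketch below shows only the much simpler chill-accumulation bookkeeping that underlies any such comparison, counting hours in the classic 0-7.2 °C chilling band against a variety's requirement. The temperature series and the 450-hour requirement are invented placeholders, and real analyses often use richer chill metrics (e.g., chill portions).

```python
def chilling_hours(hourly_temps_c, low=0.0, high=7.2):
    """Classic chilling-hours metric: count hours with temperature in (low, high]."""
    return sum(1 for t in hourly_temps_c if low < t <= high)

# Illustrative: compare one winter's accumulation against a variety's requirement.
winter_temps = [5.0, 6.5, 3.2, 8.0, 2.1, 7.0, 9.5, 4.4] * 100  # stand-in hourly data
requirement = 450   # hypothetical budbreak-based chilling requirement, in hours
accumulated = chilling_hours(winter_temps)
print(accumulated, "chill hours;",
      "sufficient" if accumulated >= requirement else "insufficient")
```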
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper the estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established, and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, a subset of which has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test that is particularly well suited for testing against sparse alternatives is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
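A bare-bones version of the thresholding idea: estimate each group's sample correlation matrix, take the entrywise difference, and zero out entries below a threshold of order sqrt(log p (1/n1 + 1/n2)). The paper's estimator adapts the threshold entry by entry using estimated variances; the universal threshold and tuning constant below are simplifying assumptions for illustration only.

```python
import numpy as np

def differential_correlation(X1, X2, c=2.0):
    """Hard-threshold the entrywise difference of two sample correlation matrices.

    A universal threshold of order sqrt(log p / n) stands in for the paper's
    entry-adaptive thresholds; c is a tuning constant.
    """
    n1, p = X1.shape
    n2, _ = X2.shape
    D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    lam = c * np.sqrt(np.log(p) * (1.0 / n1 + 1.0 / n2))
    D_hat = np.where(np.abs(D) >= lam, D, 0.0)   # hard thresholding
    np.fill_diagonal(D_hat, 0.0)                 # diagonals are 1 in both groups
    return D_hat

# Example with two simulated groups over p = 50 variables; with no true
# differential signal, nearly every entry is zeroed out.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((200, 50))
X2 = rng.standard_normal((150, 50))
print(np.count_nonzero(differential_correlation(X1, X2)))
```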