Sample records for objective threshold estimation

  1. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important to study pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating the equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, one estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of, or below, the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the diameter of the smallest vessels.
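
    As a quick illustration of the FWHM idea mentioned in this record, the sketch below takes the diameter as the width of a 1-D intensity profile across the vessel at half its peak height above background, with linear interpolation at the two crossings. The function, test profile, and pixel spacing are hypothetical and are not the authors' implementation.

    ```python
    import numpy as np

    def fwhm_diameter(profile, pixel_spacing_mm):
        """Estimate a diameter as the Full Width at Half Maximum (FWHM) of a
        1-D intensity profile taken across the vessel (illustrative sketch)."""
        profile = np.asarray(profile, dtype=float)
        background, peak = profile.min(), profile.max()
        half_max = background + 0.5 * (peak - background)
        above = np.where(profile >= half_max)[0]
        if above.size == 0:
            return 0.0
        left, right = above[0], above[-1]

        def frac(i, j):
            # fractional position of the half-maximum crossing between samples i and j
            return (half_max - profile[i]) / (profile[j] - profile[i])

        left_edge = left - 1 + frac(left - 1, left) if left > 0 else left
        right_edge = right + frac(right, right + 1) if right < len(profile) - 1 else right
        return (right_edge - left_edge) * pixel_spacing_mm

    # Example: synthetic Gaussian-blurred vessel profile sampled every 0.1 mm
    x = np.linspace(-5, 5, 101)
    profile = np.exp(-x**2 / (2 * 1.2**2))
    print(fwhm_diameter(profile, pixel_spacing_mm=0.1))   # ~2.8 mm
    ```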

  2. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
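
    The weighted additive aggregation described above can be sketched in a few lines: each alternative's consequence estimates are scored on a common scale, multiplied by elicited objective weights, and summed into a decision score. The alternatives, scores, and weights below are invented for illustration and do not come from the workshop.

    ```python
    import numpy as np

    # Minimal weighted-additive aggregation sketch (all values hypothetical).
    # Rows: management alternatives; columns: objectives scored on a common 0-1 scale.
    consequences = np.array([
        [0.9, 0.2, 0.3],   # alternative 1, e.g. "close access"
        [0.6, 0.6, 0.5],   # alternative 2
        [0.4, 0.8, 0.7],   # alternative 3
        [0.1, 1.0, 0.9],   # alternative 4, e.g. "no intervention"
    ])
    weights = np.array([0.5, 0.3, 0.2])   # elicited objective weights, sum to 1

    decision_scores = consequences @ weights
    best = decision_scores.argmax()
    print(decision_scores, "-> preferred alternative:", best + 1)
    ```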

  3. Rare event computation in deterministic chaotic systems using genealogical particle analysis

    NASA Astrophysics Data System (ADS)

    Wouters, J.; Bouchet, F.

    2016-09-01

    In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.

  4. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
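
    A minimal sketch of stochastic threshold estimation in this spirit is shown below: binary evoked/not-evoked outcomes at different stimulus intensities are fitted with a cumulative-Gaussian response curve by maximum likelihood, and the 50% point is taken as the motor threshold. This is a generic illustration, not the modified threshold-hunting algorithm used in the study; the data and names are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def ml_threshold(intensities, responses):
        """Maximum-likelihood threshold estimate: fit p(x) = Phi((x - mu) / sigma)
        to binary evoked-response outcomes and return mu, the 50% point."""
        x = np.asarray(intensities, float)
        y = np.asarray(responses, float)

        def neg_log_lik(params):
            mu, log_sigma = params
            p = norm.cdf((x - mu) / np.exp(log_sigma))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=[x.mean(), 0.0], method="Nelder-Mead")
        return fit.x[0]

    # Hypothetical stimulation currents (mA) and whether an MEP was evoked
    currents  = [0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8]
    responses = [0,   0,   0,   1,   0,   1,   1,   1]
    print(ml_threshold(currents, responses))
    ```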

  5. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
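
    One way to make the objective threshold definition concrete is a grid search over candidate intensities that balances Type I (false positive) and Type II (false negative) errors; the sketch below maximizes the true skill statistic (sensitivity + specificity - 1) as a stand-in for the study's own objective function. All data are invented.

    ```python
    import numpy as np

    def objective_threshold(intensity, debris_flow):
        """Grid-search an intensity threshold balancing Type I and Type II errors;
        the threshold maximizing sensitivity + specificity - 1 is returned.
        Illustrative sketch only; the cited study defines its own criterion."""
        intensity = np.asarray(intensity, float)
        occurred = np.asarray(debris_flow, bool)
        best_t, best_score = None, -np.inf
        for t in np.unique(intensity):
            predicted = intensity >= t
            tp = np.sum(predicted & occurred)
            fn = np.sum(~predicted & occurred)       # Type II errors
            fp = np.sum(predicted & ~occurred)       # Type I errors
            tn = np.sum(~predicted & ~occurred)
            sens = tp / (tp + fn) if tp + fn else 0.0
            spec = tn / (tn + fp) if tn + fp else 0.0
            score = sens + spec - 1.0
            if score > best_score:
                best_t, best_score = t, score
        return best_t, best_score

    # Hypothetical short-duration rainfall intensities (mm/h) and debris-flow outcomes
    i15  = [4, 6, 8, 10, 12, 15, 18, 22, 25, 30]
    flow = [0, 0, 0,  0,  1,  0,  1,  1,  1,  1]
    print(objective_threshold(i15, flow))
    ```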

  6. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.

  7. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Insel, Philip S.; Mattsson, Niklas; Mackin, R. Scott

    Objective: Our objective is to estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Lastly, future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline.

  8. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    DOE PAGES

    Insel, Philip S.; Mattsson, Niklas; Mackin, R. Scott; ...

    2016-04-15

    Objective: Our objective is to estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Lastly, future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline.

  9. Object tracking algorithm based on the color histogram probability distribution

    NASA Astrophysics Data System (ADS)

    Li, Ning; Lu, Tongwei; Zhang, Yanduo

    2018-04-01

    To address tracking failures caused by target occlusion and by background objects that resemble the target, and to reduce the influence of changes in light intensity, this paper uses the HSV and YCbCr color channels to correct the updated center of the target and continuously updates an adaptive image threshold for target detection. Clustering the initial obstacles to a rough range shortens the threshold range and maximizes detection of the target. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is also added to achieve target state estimation under background color interference and to enhance the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and faster.
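
    The Kalman-filter component mentioned in this record can be illustrated with a generic constant-velocity filter that predicts the target centre through frames where the colour-based detector fails (e.g., occlusion). The matrices and measurements below are textbook placeholders, not the paper's parameters.

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)        # state transition for (x, y, vx, vy)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)         # only the centre (x, y) is measured
    Q = 0.01 * np.eye(4)                        # process noise (assumed)
    R = 4.0 * np.eye(2)                         # measurement noise (assumed)

    x = np.array([50.0, 50.0, 0.0, 0.0])        # initial state
    P = np.eye(4)

    def kalman_step(x, P, z=None):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:                       # Update only when a detection exists
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.asarray(z, float) - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [(52, 51), (54, 53), None, (58, 57)]:   # None = target occluded
        x, P = kalman_step(x, P, z)
        print(np.round(x[:2], 1))
    ```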

  10. Sensitivity and specificity of auditory steady‐state response testing

    PubMed Central

    Rabelo, Camila Maia; Schochat, Eliane

    2011-01-01

    INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442

  11. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performances of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results found in the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  12. An Objective Estimation of Air-Bone-Gap in Cochlear Implant Recipients with Residual Hearing Using Electrocochleography.

    PubMed

    Koka, Kanthaiah; Saoji, Aniket A; Attias, Joseph; Litvak, Leonid M

    2017-01-01

    Although cochlear implants (CI) traditionally have been used to treat individuals with bilateral profound sensorineural hearing loss, a recent trend is to implant individuals with residual low-frequency hearing. Notably, many of these individuals demonstrate an air-bone gap (ABG) in low-frequency, pure-tone thresholds following implantation. An ABG is the difference between audiometric thresholds measured using air conduction (AC) and bone conduction (BC) stimulation. Although behavioral AC thresholds are straightforward to assess, BC thresholds can be difficult to measure in individuals with severe-to-profound hearing loss because of vibrotactile responses to high-level, low-frequency stimulation and the potential contribution of hearing in the contralateral ear. Because of these technical barriers to measuring behavioral BC thresholds in implanted patients with residual hearing, it would be helpful to have an objective method for determining ABG. This study evaluated an innovative technique for measuring electrocochleographic (ECochG) responses using the cochlear microphonic (CM) response to assess AC and BC thresholds in implanted patients with residual hearing. Results showed high correlations between CM thresholds and behavioral audiograms for AC and BC conditions, thereby demonstrating the feasibility of using ECochG as an objective tool for quantifying ABG in CI recipients.

  13. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew normal, Pearson, and raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.

  14. Improvements in estimating proportions of objects from multispectral data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Hyde, P. D.; Richardson, W.

    1974-01-01

    Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded little definitive result. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.

  15. Temperature Thresholds and Thermal Requirements for the Development of the Rice Leaf Folder, Cnaphalocrocis medinalis

    PubMed Central

    Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.

    2013-01-01

    The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae), is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35° C) and to estimate temperature thresholds and thermal constants for forecasting models based on heat accumulation units. The developmental periods of the different stages of the rice leaf folder were reduced with increases in temperature from 18 to 34° C. Lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1° C, and thermal constants of 69, 270, 106, and 455 degree days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic non-linear SSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1 and 23.7° C, respectively. The upper and lower threshold temperatures were estimated as 36.4° C and 11.2° C for total development, indicating that the enzyme was half active and half inactive at these temperatures. These estimated thermal thresholds and degree days could be used to predict leaf folder activity in the field for effective management. PMID:24205891
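
    The linear-regression estimate of the lower threshold and thermal constant follows directly from the degree-day model: regressing development rate (1/days) on temperature gives a slope and intercept, the lower threshold is -intercept/slope, and the thermal constant is 1/slope. The sketch below uses invented development times, not the study's data.

    ```python
    import numpy as np

    # Linear degree-day model sketch (development times below are invented).
    temps = np.array([18, 20, 25, 30, 32, 34], float)        # deg C
    days  = np.array([38.0, 30.5, 19.5, 14.5, 13.2, 12.2])   # total development time
    rate = 1.0 / days                                        # development rate per day

    slope, intercept = np.polyfit(temps, rate, 1)
    lower_threshold = -intercept / slope                     # temperature at zero rate
    thermal_constant = 1.0 / slope                           # degree-days to complete development
    print(round(lower_threshold, 1), "deg C,", round(thermal_constant), "degree-days")
    ```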

  16. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears

    USGS Publications Warehouse

    Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rates for adult females averaged over 5 years () was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when , suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  17. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears.

    PubMed

    Laufenberg, Jared S; Clark, Joseph D; Chandler, Richard B

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rates for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  18. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.

  19. The hockey-stick method to estimate evening dim light melatonin onset (DLMO) in humans.

    PubMed

    Danilenko, Konstantin V; Verevkin, Evgeniy G; Antyufeev, Viktor S; Wirz-Justice, Anna; Cajochen, Christian

    2014-04-01

    The onset of melatonin secretion in the evening is the most reliable and most widely used index of circadian timing in humans. Saliva (or plasma) is usually sampled every 0.5-1 hours under dim-light conditions in the evening 5-6 hours before usual bedtime to assess the dim-light melatonin onset (DLMO). For many years, attempts have been made to find a reliable objective determination of melatonin onset time either by fixed or dynamic threshold approaches. The here-developed hockey-stick algorithm, used as an interactive computer-based approach, fits the evening melatonin profile by a piecewise linear-parabolic function represented as a straight line switching to the branch of a parabola. The switch point is considered to reliably estimate melatonin rise time. We applied the hockey-stick method to 109 half-hourly melatonin profiles to assess the DLMOs and compared these estimates to visual ratings from three experts in the field. The DLMOs of 103 profiles were considered to be clearly quantifiable. The hockey-stick DLMO estimates were on average 4 minutes earlier than the experts' estimates, with a range of -27 to +13 minutes; in 47% of the cases the difference fell within ±5 minutes, in 98% within -20 to +13 minutes. The raters' and hockey-stick estimates showed poor accordance with DLMOs defined by threshold methods. Thus, the hockey-stick algorithm is a reliable objective method to estimate melatonin rise time, which does not depend on a threshold value and is free from errors arising from differences in subjective circadian phase estimates. The method is available as a computerized program that can be easily used in research settings and clinical practice either for salivary or plasma melatonin values.
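
    The hockey-stick idea can be sketched as a piecewise fit: a straight baseline that switches to a parabolic branch at a switch point, chosen to minimize the residual sum of squares; the switch point is the estimated melatonin rise time. The code below is an illustrative reimplementation under those assumptions, not the authors' interactive program, and the melatonin values are invented.

    ```python
    import numpy as np

    def hockey_stick_fit(t, y):
        """Fit y(t) with a straight line switching to a parabolic branch at t0
        (the estimated melatonin rise time); return the best-fitting t0."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        best_sse, best_t0 = np.inf, None
        for t0 in t[1:-1]:                         # candidate switch points
            rise = np.clip(t - t0, 0, None) ** 2   # parabolic branch active after t0
            X = np.column_stack([np.ones_like(t), t, rise])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((X @ coef - y) ** 2)
            if sse < best_sse:
                best_sse, best_t0 = sse, t0
        return best_t0

    # Hypothetical half-hourly salivary melatonin values (pg/mL) across the evening
    hours = np.arange(0, 5.5, 0.5)
    melatonin = np.array([1.2, 1.0, 1.3, 1.1, 1.4, 2.0, 3.5, 6.0, 9.5, 14.0, 19.0])
    print("Estimated onset at hour", hockey_stick_fit(hours, melatonin))
    ```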

  20. Vascular and nerve damage in workers exposed to vibrating tools. The importance of objective measurements of exposure time.

    PubMed

    Gerhardsson, Lars; Balogh, Istvan; Hambert, Per-Arne; Hjortsberg, Ulf; Karlsson, Jan-Erik

    2005-01-01

    The aim of the present study was to compare the development of vibration white fingers (VWF) in workers in relation to different ways of exposure estimation, and their relationship to the standard ISO 5349, annex A. Nineteen vibration exposed (grinding machines) male workers completed a questionnaire followed by a structured interview including questions regarding their estimated hand-held vibration exposure. Neurophysiological tests such as fractionated nerve conduction velocity in hands and arms, vibrotactile perception thresholds and temperature thresholds were determined. The subjective estimation of the mean daily exposure-time to vibrating tools was 192 min (range 18-480 min) among the workers. The estimated mean exposure time calculated from the consumption of grinding wheels was 42 min (range 18-60 min), approximately a four-fold overestimation (Wilcoxon's signed ranks test, p<0.001). Thus, objective measurements of the exposure time, related to the standard ISO 5349, which in this case were based on the consumption of grinding wheels, will in most cases give a better basis for adequate risk assessment than self-exposure assessment.

  1. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
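
    A patch-level foraging-threshold adjustment of the kind advocated here can be written as a one-line sum: for each patch, only food density above the threshold counts toward available energy, which is then divided by the daily energy requirement. All quantities in the sketch are hypothetical.

    ```python
    # Sketch of a patch-level foraging-threshold adjustment in an energetic
    # carrying capacity model (all numbers are hypothetical placeholders).
    patches = [
        {"area_ha": 12.0, "food_kg_per_ha": 310.0},
        {"area_ha":  5.5, "food_kg_per_ha":  40.0},   # below threshold: unprofitable
        {"area_ha": 20.0, "food_kg_per_ha": 150.0},
    ]
    FORAGING_THRESHOLD = 50.0      # kg/ha at which foraging becomes unprofitable
    ENERGY_KCAL_PER_KG = 2500.0    # metabolizable energy of the food
    DAILY_REQUIREMENT = 292.0      # kcal per bird per day

    available_kcal = sum(
        max(0.0, p["food_kg_per_ha"] - FORAGING_THRESHOLD) * p["area_ha"] * ENERGY_KCAL_PER_KG
        for p in patches
    )
    print(round(available_kcal / DAILY_REQUIREMENT), "bird-use-days supported")
    ```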

  2. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background Receiver Operator Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the addition of the sums of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affects threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
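
    The proposed estimator reduces to choosing the cut-point whose ROC coordinates are closest to the top-left corner, i.e., the one minimizing (1 - sensitivity)^2 + (1 - specificity)^2. The sketch below implements that rule on invented change scores and an invented external anchor.

    ```python
    import numpy as np

    def mic_cutpoint(change_scores, improved):
        """Choose the cut-point minimizing (1 - sens)^2 + (1 - spec)^2, i.e. the
        ROC point closest to the top-left corner. Illustrative sketch only."""
        change = np.asarray(change_scores, float)
        truth = np.asarray(improved, bool)
        best_c, best_d2 = None, np.inf
        for c in np.unique(change):
            pred = change >= c
            sens = np.mean(pred[truth]) if truth.any() else 0.0
            spec = np.mean(~pred[~truth]) if (~truth).any() else 0.0
            d2 = (1 - sens) ** 2 + (1 - spec) ** 2
            if d2 < best_d2:
                best_c, best_d2 = c, d2
        return best_c

    change = [0, 1, 1, 2, 3, 3, 4, 5, 6, 8]        # hypothetical change scores
    anchor = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]        # hypothetical "improved" anchor
    print(mic_cutpoint(change, anchor))
    ```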

  3. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of ¹⁸⁸Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize ¹⁸⁸Re therapies, the accurate determination of radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in ¹⁸⁸Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for ¹⁸⁸Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW scatter correction applied to ¹⁸⁸Re, although practical, yields only approximate estimates of the true scatter.
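
    For reference, the triple-energy window (TEW) scatter estimate used here is commonly written as a trapezoidal interpolation between two narrow windows flanking the photopeak; the sketch below shows that generic textbook formula with hypothetical counts and window widths, not the study's acquisition settings.

    ```python
    def tew_scatter_estimate(counts_lower, counts_upper, w_lower, w_upper, w_peak):
        """Triple-energy-window scatter estimate for a photopeak projection:
        trapezoidal interpolation between two narrow adjacent windows."""
        return (counts_lower / w_lower + counts_upper / w_upper) * w_peak / 2.0

    # Hypothetical per-pixel counts and window widths (keV)
    photopeak = 1200.0
    scatter = tew_scatter_estimate(counts_lower=100.0, counts_upper=40.0,
                                   w_lower=4.0, w_upper=4.0, w_peak=30.0)
    primary = max(photopeak - scatter, 0.0)
    print(round(scatter, 1), round(primary, 1))
    ```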

  4. Effect of skin-transmitted vibration enhancement on vibrotactile perception.

    PubMed

    Tanaka, Yoshihiro; Ueda, Yuichiro; Sano, Akihito

    2015-06-01

    Vibration on the skin elicited by the mechanical interaction of touch between the skin and an object propagates to skin far from the point of contact. This paper investigates the effect of skin-transmitted vibration on vibrotactile perception. To enhance the transmission of high-frequency vibration on the skin, stiff tape was attached to the skin so that it covered the bottom surface of the index finger from the periphery of the distal interphalangeal joint to the metacarpophalangeal joint. Two psychophysical experiments with high-frequency vibrotactile stimuli of 250 Hz were conducted. In the psychophysical experiments, discrimination and detection thresholds were estimated and compared between the presence and absence of the tape (normal bare finger). A method of limits was applied for the detection threshold estimation, and a discrimination task using a reference stimulus and six test stimuli with different amplitudes was applied for the discrimination threshold estimation. The stimulation was given to the bare fingertips of participants. Results showed that the detection threshold was enhanced by attaching the tape, and the discrimination threshold enhancement by attaching the tape was confirmed for participants who had a relatively large discrimination threshold with the normal bare finger. Skin-transmitted vibration was then measured with an accelerometer alongside the psychophysical experiments. Results showed that the skin-transmitted vibration when the tape was attached to the skin was larger than that of the normal bare skin. There is a correlation between the increase in skin-transmitted vibration and the enhancement of the discrimination threshold.

  5. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  6. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355

  7. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.

  8. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give an assessment result highly agreeing with the subjective assessment. To assess the quality of color images, many approaches simply apply the metric for assessing the quality of gray scale images to each of the three color channels of the color image, neglecting the correlation among the three color channels. In this paper, a metric for assessing color images' quality is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to the perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 to the objective score of perceptual quality assessment. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the image quality of color images in terms of the correlation between objective scores and subjective evaluation.

  9. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy both in ideal and in realistic conditions.
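
    A toy version of a background-adaptive, threshold-based MTV segmentation is sketched below: a two-class k-means estimates the background level from the voxel intensities, and voxels above background plus a fixed fraction of the remaining dynamic range are kept. The threshold fraction and the synthetic volume are placeholders; this is not the calibrated algorithm of the cited work.

    ```python
    import numpy as np

    def segment_mtv(volume, threshold_fraction=0.4, n_iter=50):
        """Toy background-adaptive threshold segmentation: 1-D k-means (2 classes)
        estimates the background level, then voxels above
        background + threshold_fraction * (max - background) are kept."""
        v = np.asarray(volume, float).ravel()
        c = np.array([v.min(), v.max()], float)       # initial centroids
        for _ in range(n_iter):
            labels = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
            for k in range(2):
                if np.any(labels == k):
                    c[k] = v[labels == k].mean()
        background = c.min()
        cutoff = background + threshold_fraction * (v.max() - background)
        return np.asarray(volume) >= cutoff

    rng = np.random.default_rng(0)
    img = rng.normal(1.0, 0.2, size=(32, 32, 32))     # background activity
    img[12:18, 12:18, 12:18] += 6.0                   # synthetic "lesion"
    mask = segment_mtv(img)
    print("segmented voxels:", int(mask.sum()))       # compare with 6**3 = 216
    ```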

  10. Influence of drug load on dissolution behavior of tablets containing a poorly water-soluble drug: estimation of the percolation threshold.

    PubMed

    Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó

    2017-08-01

    Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration, above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to the dissolution behavior from immediate release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model dependent and a model independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granules and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v could be determined for both particle sizes of the API above which an abrupt decrease of the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps, such as roll compaction and tableting. Both methods that were applied for the estimation of percolation threshold provided comparable values.

  11. Hypo- and hyperglycemia in relation to the mean, standard deviation, coefficient of variation, and nature of the glucose distribution.

    PubMed

    Rodbard, David

    2012-10-01

    We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and gaussianness can be tested objectively and used to optimize the transformation. The method performs well with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
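
    Under the Gaussian assumption (after any transformation), the risk of readings beyond a threshold is just a normal tail probability computed from the mean and SD. The sketch below shows that calculation for two hypothetical thresholds; it illustrates the general idea, not the published calculator.

    ```python
    from scipy.stats import norm

    def risk_below(threshold_mg_dl, mean_glucose, sd_glucose):
        """Estimated probability of glucose below a threshold, assuming the
        (possibly transformed) glucose distribution is Gaussian."""
        return norm.cdf((threshold_mg_dl - mean_glucose) / sd_glucose)

    mean, sd = 154.0, 48.0   # mg/dL; %CV is about 31%
    print(round(100 * risk_below(80, mean, sd), 1), "% of readings expected < 80 mg/dL")
    print(round(100 * risk_below(50, mean, sd), 2), "% of readings expected < 50 mg/dL")
    ```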

  12. A Non-Invasive Bladder Sensory Test Supports a Role for Dysmenorrhea Increasing Bladder Noxious Mechanosensitivity

    PubMed Central

    TU, Frank F.; EPSTEIN, Aliza E.; POZOLO, Kristen E.; SEXTON, Debra L.; MELNYK, Alexandra I.; HELLMAN, Kevin M.

    2012-01-01

    Objective Catheterization to measure bladder sensitivity is aversive and hinders human participation in visceral sensory research. Therefore, we sought to characterize the reliability of sonographically-estimated female bladder sensory thresholds. To demonstrate this technique’s usefulness, we examined the effects of self-reported dysmenorrhea on bladder pain thresholds. Methods Bladder sensory threshold volumes were determined during provoked natural diuresis in 49 healthy women (mean age 24 ± 8) using three-dimensional ultrasound. Cystometric thresholds (Vfs – first sensation, Vfu – first urge, Vmt – maximum tolerance) were quantified and related to bladder urgency and pain. We estimated reliability (one-week retest and interrater). Self-reported menstrual pain was examined in relationship to bladder pain, urgency and volume thresholds. Results Average bladder sensory thresholds (mLs) were Vfs (160±100), Vfu (310±130), and Vmt (500±180). Interrater reliability ranged from 0.97–0.99. One-week retest reliability was Vmt = 0.76 (95% CI 0.64–0.88), Vfs = 0.62 (95% CI 0.44–0.80), and Vfu = 0.63, (95% CI 0.47–0.80). Bladder filling rate correlated with all thresholds (r = 0.53–0.64, p < 0.0001). Women with moderate to severe dysmenorrhea pain had increased bladder pain and urgency at Vfs and increased pain at Vfu (p’s < 0.05). In contrast, dysmenorrhea pain was unrelated to bladder capacity. Discussion Sonographic estimates of bladder sensory thresholds were reproducible and reliable. In these healthy volunteers, dysmenorrhea was associated with increased bladder pain and urgency during filling but unrelated to capacity. Plausibly, dysmenorrhea sufferers may exhibit enhanced visceral mechanosensitivity, increasing their risk to develop chronic bladder pain syndromes. PMID:23370073

  13. Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng

    Financial institutions and supervisory authorities agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational risk losses based on the Peaks Over Threshold (POT) model, emphasizing a weighted least squares refinement of Hill's estimation method, discusses the small-sample situation, and fixes the sample threshold more objectively using media-published data on the operational risk losses of major Chinese banks from 1994 to 2007.
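
    For reference, the classical (unweighted) Hill estimator that the paper refines is sketched below; the weighted least squares modification and the banks' loss data are not reproduced, and the losses here are simulated.

      # Classical Hill estimator of the tail index for heavy-tailed loss data.
      # This is the textbook unweighted version, not the paper's weighted least
      # squares refinement; losses are simulated for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      losses = rng.pareto(a=1.5, size=2000) + 1.0          # synthetic heavy-tailed losses

      def hill_estimator(x, k):
          """Hill estimate of the tail index from the k largest observations."""
          order = np.sort(x)[::-1]                         # descending order statistics
          return 1.0 / np.mean(np.log(order[:k] / order[k]))

      print(f"Hill tail index (k=100): {hill_estimator(losses, 100):.2f}")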

  14. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    PubMed Central

    Mattsson, Niklas; Mackin, R. Scott; Schöll, Michael; Nosheny, Rachel L.; Tosun, Duygu; Donohue, Michael C.; Aisen, Paul S.; Jagust, William J.; Weiner, Michael W.

    2016-01-01

    Objective: To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. PMID:27164667

  15. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.

  16. The correlation dimension: a useful objective measure of the transient visual evoked potential?

    PubMed

    Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J

    2008-01-14

    Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
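
    The correlation dimension itself is usually computed with the Grassberger-Procaccia correlation sum after time-delay embedding of the recorded trace. A minimal sketch under assumed embedding parameters and with a synthetic signal; the surrogate testing mentioned above is omitted.

      # Minimal Grassberger-Procaccia sketch for the correlation dimension of a
      # 1-D signal (e.g., a VEP trace). Embedding dimension, delay, and radius
      # range are illustrative assumptions; surrogate testing is omitted.
      import numpy as np

      def correlation_dimension(signal, m=5, tau=2, radii=None):
          # Time-delay embedding into m-dimensional vectors.
          n = len(signal) - (m - 1) * tau
          emb = np.column_stack([signal[i * tau:i * tau + n] for i in range(m)])
          # Pairwise distances between embedded points (upper triangle only).
          d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          d = d[np.triu_indices(n, k=1)]
          if radii is None:
              radii = np.logspace(np.log10(np.percentile(d, 1)),
                                  np.log10(np.percentile(d, 50)), 10)
          # Correlation sum C(r); the dimension is the slope of log C(r) vs log r.
          c = np.array([np.mean(d < r) for r in radii])
          slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
          return slope

      rng = np.random.default_rng(1)
      print(f"estimated dimension: {correlation_dimension(rng.normal(size=600)):.2f}")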

  17. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers with payments being based on cancer type, estimated TCE concentration in the well and the duration of exposure to TCE. The estimation of early arrival times and hence low likelihood events is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE impacted areas. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.

  18. Uncertainties in extreme surge level estimates from observational records.

    PubMed

    van den Brink, H W; Können, G P; Opsteegh, J D

    2005-06-15

    Ensemble simulations with a total length of 7540 years are generated with a climate model, and coupled to a simple surge model to transform the wind field over the North Sea to the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10⁴ years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak over threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold, and its tendency towards too low estimates, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the exceedance rate, λ, chosen should not be larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10⁴-year wind speed from such records has always to be interpreted as a lower limit.
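
    A minimal sketch of the GPD/POT calculation whose threshold sensitivity the abstract discusses: fit the exceedances above a candidate threshold and evaluate the 10⁴-year return level. The surge data below are synthetic, and the threshold choice is exactly the step the study identifies as problematic.

      # Peaks-over-threshold sketch: fit a GPD to exceedances and compute the
      # 10^4-year return level. Synthetic data; threshold and quantile choices
      # are illustrative.
      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(2)
      years = 116
      surges = rng.gumbel(loc=2.0, scale=0.4, size=years * 20)   # synthetic surge peaks

      u = np.quantile(surges, 0.95)                  # candidate threshold
      exceed = surges[surges > u] - u
      lam = len(exceed) / years                      # exceedance rate per year
      xi, _, sigma = genpareto.fit(exceed, floc=0)   # shape and scale, location fixed at 0

      T = 1e4                                        # return period in years
      # Return level x_T = u + (sigma/xi) * ((lam*T)**xi - 1); as xi -> 0 this
      # tends to the exponential limit u + sigma * log(lam*T).
      x_T = u + (sigma / xi) * ((lam * T) ** xi - 1.0)
      print(f"threshold {u:.2f}, 10^4-year return level {x_T:.2f}")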

  19. Genetic parameters for hoof health traits estimated with linear and threshold models using alternative cohorts.

    PubMed

    Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F

    2017-04-01

    A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding values for hoof lesion resistance corresponded to sires with a higher prevalence of healthy daughters. The rank correlations between estimated breeding values ranged from 0.96 to 0.99 when predicted using either one of the 2 cohorts and from 0.94 to 0.99 when predicted using either a threshold or a linear model. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. Using thresholds based on risk of cardiovascular disease to target treatment for hypertension: modelling events averted and number treated

    PubMed Central

    Baker, Simon; Priest, Patricia; Jackson, Rod

    2000-01-01

    Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90-169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15% the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to lower blood pressure in primary care. PMID:10710577
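
    The events-averted arithmetic in the abstract is a simple sum over the people selected by a given threshold rule: predicted 5-year risk times the assumed relative benefit of one quarter. A sketch with made-up risks, blood pressures, and selection rule, purely for illustration:

      # Minimal sketch of the events-averted calculation: select people with a
      # blood-pressure/absolute-risk rule and sum their predicted 5-year risks
      # times the assumed 25% relative benefit. All values are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 2158
      risk_5yr = rng.beta(2, 18, size=n)            # predicted 5-year CVD risk
      sbp = rng.normal(150, 20, size=n)             # systolic blood pressure, mm Hg

      treat = (sbp >= 170) | ((sbp >= 150) & (risk_5yr >= 0.10))   # example rule
      events_averted = 0.25 * risk_5yr[treat].sum()                # relative benefit 1/4
      print(f"treated: {treat.sum()}, events averted over 5 years: {events_averted:.0f}")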

  1. Viral Load Criteria and Threshold Optimization to Improve HIV Incidence Assay Characteristics - A CEPHIA Analysis

    PubMed Central

    Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex

    2016-01-01

    Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561

  2. Monopolar Detection Thresholds Predict Spatial Selectivity of Neural Excitation in Cochlear Implants: Implications for Speech Recognition

    PubMed Central

    2016-01-01

    The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658

  3. A geographic analysis of population density thresholds in the influenza pandemic of 1918-19.

    PubMed

    Chandra, Siddharth; Kassens-Noor, Eva; Kuljanin, Goran; Vertalka, Joshua

    2013-02-20

    Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918-19 in India, where over 15 million people died in the short span of less than one year. Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918-19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold.
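
    A stripped-down sketch of the sample-splitting idea: scan candidate density thresholds and keep the one that best separates districts by population loss, here by minimizing the pooled within-group sum of squares. The study embeds the split inside a population growth model, which is not reproduced; the data below are synthetic.

      # Hedged sketch of sample splitting on population density: choose the
      # threshold minimizing the pooled within-group sum of squares of loss.
      # Plain group means and synthetic data stand in for the growth model.
      import numpy as np

      rng = np.random.default_rng(4)
      density = rng.lognormal(mean=5.0, sigma=0.6, size=199)              # people/sq. mile
      loss = np.where(density > 175, 4.7, 3.7) + rng.normal(0, 0.8, 199)  # % pop. loss

      def split_sse(threshold):
          low, high = loss[density <= threshold], loss[density > threshold]
          return ((low - low.mean()) ** 2).sum() + ((high - high.mean()) ** 2).sum()

      candidates = np.quantile(density, np.linspace(0.1, 0.9, 81))
      best = min(candidates, key=split_sse)
      print(f"estimated density threshold: {best:.0f} people per square mile")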

  4. A geographic analysis of population density thresholds in the influenza pandemic of 1918–19

    PubMed Central

    2013-01-01

    Background Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918–19 in India, where over 15 million people died in the short span of less than one year. Methods Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918–19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. Results The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). Conclusions This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold. PMID:23425498

  5. Cost and threshold analysis of an HIV/STI/hepatitis prevention intervention for young men leaving prison: Project START.

    PubMed

    Johnson, A P; Macgowan, R J; Eldridge, G D; Morrow, K M; Sosman, J; Zack, B; Margolis, A

    2013-10-01

    The objectives of this study were to: (a) estimate the costs of providing a single-session HIV prevention intervention and a multi-session intervention, and (b) estimate the number of HIV transmissions that would need to be prevented for the intervention to be cost-saving or cost-effective (threshold analysis). Project START was evaluated with 522 young men aged 18-29 years released from eight prisons located in California, Mississippi, Rhode Island, and Wisconsin. Cost data were collected prospectively. Costs per participant were $689 for the single-session comparison intervention, and ranged from $1,823 to $1,836 for the Project START multi-session intervention. From the incremental threshold analysis, the multi-session intervention would be cost-effective if it prevented one HIV transmission for every 753 participants compared to the single-session intervention. Costs are comparable with other HIV prevention programs. Program managers can use these data to gauge the costs of initiating these HIV prevention programs in correctional facilities.

  6. Concentration–Response Function for Ozone and Daily Mortality: Results from Five Urban and Five Rural U.K. Populations

    PubMed Central

    Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross

    2012-01-01

    Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
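
    In a linear-threshold model of this kind, the exposure enters the Poisson regression as a hinge term, max(0, ozone − h), and the threshold h can be profiled over a grid. A minimal sketch with simulated counts; the study's confounder adjustment (season, temperature, influenza) and its actual data are omitted.

      # Minimal sketch of a linear-threshold ("hockey-stick") Poisson model for
      # daily mortality vs. ozone: the exposure enters as max(0, ozone - h) and
      # h is chosen by profiling the likelihood over a grid. Everything here is
      # illustrative; confounders are omitted.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      ozone = rng.gamma(4, 20, size=2000)                   # daily max 8-hr ozone, ug/m3
      true_excess = 0.0005 * np.clip(ozone - 65, 0, None)   # ~0.5% per 10 ug/m3 above 65
      deaths = rng.poisson(np.exp(np.log(30) + true_excess))

      def loglik(h):
          x = sm.add_constant(np.clip(ozone - h, 0, None))
          return sm.GLM(deaths, x, family=sm.families.Poisson()).fit().llf

      grid = np.arange(20, 121, 5)
      best_h = grid[np.argmax([loglik(h) for h in grid])]
      print(f"profile-likelihood threshold estimate: {best_h} ug/m3")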

  7. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.

  8. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility to employ Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded using MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥5 ml or <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within ±10% for volumes in the range 20-110 ml. The preliminary test of application on patients evidenced the suitability of the method in a clinical setting. The MC-guided delineation of tumor volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumors of irregular shape and/or nonuniform distribution of the background activity are still in progress.
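
    A hedged sketch of the iterative loop described above (SBR measurement, RC correction, threshold lookup, re-segmentation until the volume stabilizes). The calibration curves and helper names are hypothetical placeholders, not the published threshold-volume or recovery-coefficient calibrations.

      # Hedged sketch of the iterative loop described above. The calibration
      # curves and helper names are hypothetical placeholders, not the published
      # threshold-volume or recovery-coefficient calibrations.
      import numpy as np

      def recovery_coefficient(volume_ml):
          # Hypothetical RC calibration curve (RC -> 1 for large volumes).
          return volume_ml / (volume_ml + 2.0)

      def calibrated_threshold(sbr, volume_ml):
          # Hypothetical threshold-volume curve, as a fraction of the peak value.
          return 0.4 + 0.3 / sbr + 0.05 / max(volume_ml, 0.5)

      def rithm(image, voxel_ml, background, volume0_ml, tol=0.01, max_iter=50):
          volume = volume0_ml
          for _ in range(max_iter):
              # (i) measure the source-to-background ratio, corrected via the RC curve
              sbr = (image.max() / background) / recovery_coefficient(volume)
              # (ii) look up the threshold for this SBR and current volume estimate
              thr = calibrated_threshold(sbr, volume) * image.max()
              # (iii) re-threshold the image to obtain a new volume estimate
              new_volume = np.count_nonzero(image > thr) * voxel_ml
              if abs(new_volume - volume) <= tol * volume:
                  break
              volume = new_volume
          return new_volume, thr

      # Toy usage on a synthetic "hot" cube in a warm background.
      img = np.ones((32, 32, 32))
      img[12:20, 12:20, 12:20] = 11.0
      print(rithm(img, voxel_ml=0.1, background=1.0, volume0_ml=30.0))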

  9. Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.

    PubMed

    Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth

    2018-03-26

    When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
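
    The post-hoc fitting referred to above makes the lapse rate an explicit parameter of the psychometric function, for example psi(x) = 0.5 + (0.5 − lapse)·Phi((x − alpha)/beta) for a two-alternative task with guess rate 0.5. A minimal least-squares sketch with simulated trials (a binomial likelihood would normally be preferred):

      # Sketch of fitting a psychometric function with an explicit lapse-rate
      # parameter, assuming a two-alternative task (guess rate 0.5) and a
      # cumulative-Gaussian core. Trials are simulated for illustration.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def psychometric(x, alpha, beta, lapse):
          return 0.5 + (0.5 - lapse) * norm.cdf(x, loc=alpha, scale=beta)

      rng = np.random.default_rng(6)
      levels = np.repeat(np.linspace(0.5, 8.0, 8), 40)          # stimulus levels
      p_true = psychometric(levels, alpha=3.0, beta=1.0, lapse=0.06)
      responses = (rng.random(levels.size) < p_true).astype(float)

      params, _ = curve_fit(psychometric, levels, responses,
                            p0=[3.0, 1.0, 0.02],
                            bounds=([0.5, 0.1, 0.0], [8.0, 5.0, 0.2]))
      print(f"threshold (alpha) = {params[0]:.2f}, lapse rate = {params[2]:.3f}")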

  10. C-5 Reliability Enhancement and Re-engining Program (C-5 RERP)

    DTIC Science & Technology

    2015-12-01

    [Selected Acquisition Report extract (December 2015 SAR): the record contains fragments of a performance table comparing the current APB production objective/threshold, demonstrated performance, and current estimate (e.g., time to climb/initial level off at 837,000 lbs), an acronym list (RCR - Runway Condition Reading; SDD - System Design and Development; SL - Sea Level), and acquisition/O&M cost totals with the confidence level of the cost estimate; the underlying table structure is not recoverable.]

  11. Observations Regarding Scatter Fraction and NEC Measurements for Small Animal PET

    NASA Astrophysics Data System (ADS)

    Yang, Yongfeng; Cherry, S. R.

    2006-02-01

    The goal of this study was to evaluate the magnitude and origin of scattered radiation in a small-animal PET scanner and to assess the impact of these findings on noise equivalent count rate (NECR) measurements, a metric often used to optimize scanner acquisition parameters and to compare one scanner with another. The scatter fraction (SF) was measured for line sources in air and line sources placed within a mouse-sized phantom (25 mm diameter × 70 mm) and a rat-sized phantom (60 mm diameter × 150 mm) on the microPET II small-animal PET scanner. Measurements were performed for lower energy thresholds ranging from 150-450 keV and a fixed upper energy threshold of 750 keV. Four different methods were compared for estimating the SF. Significant scatter fractions were measured with just the line source in the field of view, with the spatial distribution of these events consistent with scatter from the gantry and room environment. For mouse imaging, this component dominates over object scatter, and the measured SF is strongly method dependent. The environmental SF rapidly increases as the lower energy threshold decreases and can be more than 30% for an open energy window of 150-750 keV. The object SF originating from the mouse phantom is about 3-4% and does not change significantly as the lower energy threshold increases. The object SF for the rat phantom ranges from 10 to 35% for different energy windows and increases as the lower energy threshold decreases. Because the measured SF is highly dependent on the method, and there is as yet no agreed upon standard for animal PET, care must be exercised when comparing NECR for small objects between different scanners. Differences may be methodological rather than reflecting any relevant difference in the performance of the scanner. Furthermore, these results have implications for scatter correction methods when the majority of the detected scatter does not arise from the object itself.
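
    For reference, the noise equivalent count rate discussed above is computed from trues, scatters, and randoms. The randoms weighting factor k depends on how randoms are estimated, so it is left as a parameter in this sketch; the example count rates are assumptions.

      # Standard noise-equivalent count rate formula referenced above. The
      # randoms weighting factor k (1 or 2, depending on the randoms-estimation
      # method) and the example rates are assumptions for illustration.
      def necr(trues, scatters, randoms, k=2.0):
          """NECR = T^2 / (T + S + k*R), all rates in counts per second."""
          return trues ** 2 / (trues + scatters + k * randoms)

      print(f"NECR = {necr(trues=120e3, scatters=25e3, randoms=40e3):.0f} cps")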

  12. Is it valid to calculate the 3-kilohertz threshold by averaging 2 and 4 kilohertz?

    PubMed

    Gurgel, Richard K; Popelka, Gerald R; Oghalai, John S; Blevins, Nikolas H; Chang, Kay W; Jackler, Robert K

    2012-07-01

    Many guidelines for reporting hearing results use the threshold at 3 kilohertz (kHz), a frequency not measured routinely. This study assessed the validity of estimating the missing 3-kHz threshold by averaging the measured thresholds at 2 and 4 kHz. The estimated threshold was compared to the measured threshold at 3 kHz individually and when used in the pure-tone average (PTA) of 0.5, 1, 2, and 3 kHz in audiometric data from 2170 patients. The difference between the estimated and measured thresholds for 3 kHz was within ± 5 dB in 72% of audiograms, ± 10 dB in 91%, and within ± 20 dB in 99% (correlation coefficient r = 0.965). The difference between the PTA threshold using the estimated threshold compared with using the measured threshold at 3 kHz was within ± 5 dB in 99% of audiograms (r = 0.997). The estimated threshold accurately approximates the measured threshold at 3 kHz, especially when incorporated into the PTA.
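
    A tiny sketch of the estimate being validated: the 3-kHz threshold taken as the mean of the 2- and 4-kHz thresholds and folded into the four-frequency pure-tone average. The example thresholds are illustrative.

      # Estimated 3-kHz threshold as the average of 2 and 4 kHz, folded into the
      # 0.5/1/2/3-kHz pure-tone average. Example thresholds (dB HL) are made up.
      thresholds = {0.5: 20, 1: 25, 2: 35, 4: 55}                  # measured, dB HL
      thresholds[3] = (thresholds[2] + thresholds[4]) / 2          # estimated 3 kHz
      pta = sum(thresholds[f] for f in (0.5, 1, 2, 3)) / 4
      print(f"estimated 3 kHz: {thresholds[3]} dB HL, PTA: {pta:.1f} dB HL")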

  13. Age structure and mortality of walleyes in Kansas reservoirs: Use of mortality caps to establish realistic management objectives

    USGS Publications Warehouse

    Quist, M.C.; Stephen, J.L.; Guy, C.S.; Schultz, R.D.

    2004-01-01

    Age structure, total annual mortality, and mortality caps (maximum mortality thresholds established by managers) were investigated for walleye Sander vitreus (formerly Stizostedion vitreum) populations sampled from eight Kansas reservoirs during 1991-1999. We assessed age structure by examining the relative frequency of different ages in the population; total annual mortality of age-2 and older walleyes was estimated by use of a weighted catch curve. To evaluate the utility of mortality caps, we modeled threshold values of mortality by varying growth rates and management objectives. Estimated mortality thresholds were then compared with observed growth and mortality rates. The maximum age of walleyes varied from 5 to 11 years across reservoirs. Age structure was dominated (≥72%) by walleyes age 3 and younger in all reservoirs, corresponding to ages that were not yet vulnerable to harvest. Total annual mortality rates varied from 40.7% to 59.5% across reservoirs and averaged 51.1% overall (SE = 2.3). Analysis of mortality caps indicated that a management objective of 500 mm for the mean length of walleyes harvested by anglers was realistic for all reservoirs with a 457-mm minimum length limit but not for those with a 381-mm minimum length limit. For a 500-mm mean length objective to be realized for reservoirs with a 381-mm length limit, managers must either reduce mortality rates (e.g., through restrictive harvest regulations) or increase growth of walleyes. When the assumed objective was to maintain the mean length of harvested walleyes at current levels, the observed annual mortality rates were below the mortality cap for all reservoirs except one. Mortality caps also provided insight on management objectives expressed in terms of proportional stock density (PSD). Results indicated that a PSD objective of 20-40 was realistic for most reservoirs. This study provides important walleye mortality information that can be used for monitoring or for inclusion into population models; these results can also be combined with those of other studies to investigate large-scale differences in walleye mortality. Our analysis illustrates the utility of mortality caps for monitoring walleye populations and for establishing realistic management goals.
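
    For reference, the catch-curve calculation behind the mortality estimates regresses log catch-at-age on age for fully recruited ages, with Z equal to minus the slope and A = 1 − e^(−Z). The sketch below is the plain unweighted version with illustrative catches; the study used a weighted catch curve.

      # Minimal (unweighted) catch-curve sketch for total annual mortality:
      # regress ln(catch-at-age) on age, Z = -slope, A = 1 - exp(-Z).
      # Catch-at-age numbers are illustrative, not the study's data.
      import numpy as np

      ages = np.array([2, 3, 4, 5, 6, 7])
      catch = np.array([420, 210, 95, 50, 22, 12])                 # fish per age class

      slope, _ = np.polyfit(ages, np.log(catch), 1)
      z = -slope                                                   # instantaneous mortality
      a = 1 - np.exp(-z)                                           # total annual mortality
      print(f"Z = {z:.2f}, total annual mortality A = {a:.1%}")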

  14. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
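
    A hedged sketch of the selection loop: fit a GPD to the exceedances of each candidate threshold and score the fit with the Anderson-Darling statistic. The paper converts the statistic into a goodness-of-fit p-value and adds bootstrapping for the uncertainty analysis; those steps are not reproduced here, and the minimum-statistic candidate is used as a simple proxy on a synthetic series.

      # Hedged sketch of automatic POT threshold selection: score each candidate
      # threshold by the Anderson-Darling statistic of the fitted GPD. The
      # paper's p-value conversion and bootstrapping are omitted; data are synthetic.
      import numpy as np
      from scipy.stats import genpareto

      def ad_statistic(x, dist):
          """Anderson-Darling statistic of sample x against a fitted distribution."""
          x = np.sort(x)
          n = len(x)
          f = np.clip(dist.cdf(x), 1e-12, 1 - 1e-12)
          i = np.arange(1, n + 1)
          return -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1 - f[::-1])))

      rng = np.random.default_rng(7)
      series = rng.gamma(2.0, 10.0, size=5000)                     # synthetic daily series

      candidates = np.quantile(series, np.linspace(0.80, 0.98, 10))
      scores = []
      for u in candidates:
          exc = series[series > u] - u
          c, loc, scale = genpareto.fit(exc, floc=0)
          scores.append(ad_statistic(exc, genpareto(c, loc=0, scale=scale)))
      print(f"selected threshold: {candidates[np.argmin(scores)]:.1f}")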

  15. Bayesian inference and assessment for rare-event bycatch in marine fisheries: a drift gillnet fishery case study.

    PubMed

    Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E

    2015-03-01

    Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
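
    A heavily simplified sketch of the final classification step: a conjugate Gamma-Poisson posterior for the bycatch-mortality rate (standing in for the paper's MCMC-fitted Poisson and zero-inflated Poisson models) gives the probability that expected annual deaths exceed a PBR-style threshold. The counts, effort, prior, and threshold are illustrative assumptions.

      # Simplified Gamma-Poisson stand-in for the Bayesian classification step:
      # posterior draws of the mortality rate give P(expected annual deaths > PBR).
      # Counts, effort, prior, and threshold are illustrative.
      import numpy as np

      rng = np.random.default_rng(8)
      observed_deaths = 3                      # deaths seen in observed sets
      observed_effort = 4000.0                 # observed sets over the study period
      annual_effort = 1200.0                   # total sets fished per year

      # Gamma(a0, b0) prior on deaths per set; Poisson likelihood -> Gamma posterior.
      a0, b0 = 0.5, 1.0
      a_post, b_post = a0 + observed_deaths, b0 + observed_effort

      rate_draws = rng.gamma(a_post, 1.0 / b_post, size=100_000)   # deaths per set
      annual_deaths = rate_draws * annual_effort                   # expected deaths/year

      pbr = 1.0                                                    # illustrative threshold
      print(f"P(expected annual deaths > PBR) = {np.mean(annual_deaths > pbr):.2f}")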

  16. Perceptual color difference metric including a CSF based on the perception threshold

    NASA Astrophysics Data System (ADS)

    Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, applying adapted tools, and so on. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior for the three channels. The CSF has commonly been constructed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic construction based on matching experiments to estimate the perception threshold. It consists in matching the contrast of a test stimulus with that of a reference one. The results obtained are quite different from those of the standard approaches, as the chromatic CSFs show band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by s-CIELAB. The metric is then evaluated with both objective and subjective procedures.

  17. THE DIFFERENTIAL HEPATOTOXICITY AND CYTOCHROME P450 RESPONSE OF F344 RATS TO THE THREE ISOMERS OF DICHLOROBENZENE

    EPA Science Inventory

    The acute hepatotoxicity and response of hepatic cytochrome P450 to treatment with the three isomers of dichlorobenzene (DCB) have been investigated. The objectives were to estimate toxic thresholds and to further elucidate the role of cytochrome P450 in the metabolism and toxici...

  18. Strengthening the Validity of Population-Based Suicide Rate Comparisons: An Illustration Using U.S. Military and Civilian Data

    ERIC Educational Resources Information Center

    Eaton, Karen M.; Messer, Stephen C.; Garvey Wilson, Abigail L.; Hoge, Charles W.

    2006-01-01

    The objectives of this study were to generate precise estimates of suicide rates in the military while controlling for factors contributing to rate variability such as demographic differences and classification bias, and to develop a simple methodology for the determination of statistically derived thresholds for detecting significant rate…

  19. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold (PT) estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves the reliability of those methods for setting TMS doses unresolved. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds with MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have been classically based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state of the art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources of radio interferometric images.

  1. Selecting the best tone-pip stimulus-envelope time for estimating an objective middle-latency response threshold for low- and middle-tone sensorineural hearing losses.

    PubMed

    Xu, Z M; De Vel, E; Vinck, B; Van Cauwenberge, P

    1995-01-01

    The effects of rise-fall and plateau times for the Pa component of the middle-latency response (MLR) were investigated in normally hearing subjects, and an objective MLR threshold was measured in patients with low- and middle-tone hearing losses, using a selected stimulus-envelope time. Our results showed that the stimulus-envelope time (the rise-fall time and plateau time groups) affected the Pa component of the MLR (quality was determined by the χ²-test and amplitude by the F-test). The 4-2-4 tone-pips produced good Pa quality by visual inspection. However, our data revealed no statistically significant Na-Pa amplitude differences between the two subgroups studied when comparing the 2- and 4-ms rise-fall times and the 0- and 2-ms plateau times. In contrast, Na-Pa became significantly smaller from the 4-ms to the 6-ms rise-fall time and from the 2-ms to the 4-ms plateau time (paired t-test). This result allowed us to select the 2- or 4-ms rise-fall time and the 0- or 2-ms plateau time without influencing amplitude. Analysis of the stimulus spectral characteristics demonstrated that a rise-fall time of at least 2 ms could prevent spectral splatter and indicated that a stimulus with a 5-ms rise-fall time had a greater frequency-specificity than a stimulus of 2-ms rise-fall time. When considering the synchronous discharge and frequency-specificity of MLR, our findings show that a rise-fall time of four periods with a plateau of two periods is an acceptable compromise for estimating the objective MLR threshold. (ABSTRACT TRUNCATED AT 250 WORDS)

  2. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest ( approximately 3 dB) for midrange sensitivities ( approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.

  3. Wine grape cultivar influence on the performance of models that predict the lower threshold canopy temperature of a water stress index

    USDA-ARS?s Scientific Manuscript database

    The calculation of a thermal based Crop Water Stress Index (CWSI) requires an estimate of canopy temperature under non-water stressed conditions. The objective of this study was to assess the influence of different wine grape cultivars on the performance of models that predict canopy temperature non...

  4. Neurosensory findings among electricians with self-reported remaining symptoms after an electrical injury: A case series.

    PubMed

    Rådman, Lisa; Gunnarsson, Lars-Gunnar; Nilsagård, Ylva; Nilsson, Tohr

    2016-12-01

    Symptoms described in previous studies indicate that electrical injury can cause longstanding injuries to the neurosensory nerves. The aim of the present case series was to objectively assess the profile of neurosensory dysfunction in electricians in relation to high voltage or low voltage electrical injury and the "no-let-go phenomenon". Twenty-three Swedish male electricians exposed to electrical injury were studied by using a battery of clinical instruments, including quantitative sensory testing (QST). The clinical test followed a predetermined order of assessments: thermal perception thresholds, vibration perception thresholds, tactile gnosis (the Shape and Texture Identification test), manual dexterity (Purdue Pegboard Test), and grip strength. In addition, pain was studied by means of a questionnaire, and a colour chart was used for estimation of white fingers. The main findings in the present case series were reduced thermal perception thresholds, with half of the group showing abnormal values for warm thermal perception and/or cold thermal perception. Tactile gnosis and manual dexterity were also reduced. High voltage injury was associated with a greater reduction in sensibility than low voltage injury. Neurosensory injury can be objectively assessed after an electrical injury by using QST with thermal perception thresholds. The findings are consistent with injuries to small nerve fibres. In the clinical setting, thermal perception threshold testing is therefore recommended, in addition to tests of tactile gnosis and manual dexterity (Purdue Pegboard). Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  5. Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection

    NASA Astrophysics Data System (ADS)

    Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei

    Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
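
    The abstract does not give the exact form of the object-correlation term, so the sketch below (Python, not from the paper) only illustrates the generic exhaustive search over grey levels that criterion-based methods such as MWOC and Otsu perform; Otsu's between-class variance is used here as a stand-in scoring function, and the MWOC criterion would replace it with the product of the object correlation and the thresholded-defect weight.

        import numpy as np

        def best_threshold(image, score_fn):
            """Exhaustive search over grey levels; return the threshold maximizing score_fn."""
            levels = np.arange(1, 255)
            scores = [score_fn(image, t) for t in levels]
            return levels[int(np.argmax(scores))]

        def otsu_score(image, t):
            """Between-class variance (Otsu), used as a placeholder for the MWOC criterion."""
            fg = image[image > t].astype(float)
            bg = image[image <= t].astype(float)
            if fg.size == 0 or bg.size == 0:
                return -np.inf
            w_fg = fg.size / image.size
            w_bg = 1.0 - w_fg
            return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2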

  6. The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease.

    PubMed

    Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly

    2016-12-01

    To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain an equation predicting HR at the ventilatory threshold (measured during a cardiopulmonary exercise test) from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). The stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r²=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.

  7. Identification Code of Interstellar Cloud within IRAF

    NASA Astrophysics Data System (ADS)

    Lee, Youngung; Jung, Jae Hoon; Kim, Hyun-Goo

    1997-12-01

    We present a code which identifies individual clouds in a crowded region using the IMFORT interface within the Image Reduction and Analysis Facility (IRAF). We define a cloud as an object composed of all pixels in longitude, latitude, and velocity that are simply connected and that lie above some threshold temperature. The code searches the whole pixel set of the data cube in an efficient way to isolate individual clouds. Along with identifying clouds, it estimates their mean longitudes, latitudes, and velocities. In addition, a function for generating individual images (or cube data) of the identified clouds has been added. We also present identified individual clouds using a 12CO survey data cube of the Galactic Anticenter Region (Lee et al. 1997) as a test example. We used a threshold temperature of 5 times the rms noise level of the data. With a higher threshold temperature, we isolated subclouds of a huge cloud identified originally. As the threshold value is the most important parameter for identifying clouds, its effect on the cloud size and velocity dispersion is discussed rigorously.
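
    The clump-identification step described above (simply connected pixels above a threshold temperature) can be sketched in a few lines of Python with scipy, which is not the authors' IMFORT implementation but follows the same definition; the 5-sigma threshold, the rms value, and the use of intensity-weighted centroids as the "mean" positions are assumptions of this sketch.

        import numpy as np
        from scipy import ndimage

        def identify_clouds(cube, rms, nsigma=5.0):
            """Label simply connected regions of a (lon, lat, vel) cube above a
            temperature threshold and return their intensity-weighted centroids."""
            mask = cube > nsigma * rms                      # threshold temperature
            labels, nclouds = ndimage.label(mask)           # face-connected clumps
            centroids = ndimage.center_of_mass(cube, labels, range(1, nclouds + 1))
            return labels, centroids                        # pixel-coordinate centroids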

  8. Perceptual thresholds for non-ideal diffuse field reverberation.

    PubMed

    Romblom, David; Guastavino, Catherine; Depalle, Philippe

    2016-11-01

    The objective of this study is to understand listeners' sensitivity to directional variations in non-ideal diffuse field reverberation. An ABX discrimination test was conducted using a semi-spherical 28-loudspeaker array; perceptual thresholds were estimated by systematically varying the level of a segment of loudspeakers for lateral, height, and frontal conditions. The overall energy was held constant using a gain compensation scheme. When compared to an ideal diffuse field, the perceptual threshold for detection is -2.5 dB for the lateral condition, -6.8 dB for the height condition, and -3.2 dB for the frontal condition. Measurements of the experimental stimuli were analyzed using a Head and Torso Simulator as well as with opposing cardioid microphones aligned on the three Cartesian axes. Additionally, opposing cardioid measurements made in an acoustic space demonstrate that level differences corresponding to the perceptual thresholds can be found in practice. These results suggest that non-ideal diffuse field reverberation may be a previously unrecognized component of spatial impression.

  9. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations for parameter configuration are given.
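
    For orientation, the quantity the UML procedure estimates adaptively can also be fit offline by straightforward maximum likelihood. The Python sketch below (not the UML Toolbox itself) fits threshold, slope, and lapse rate of a logistic psychometric function to per-level counts; the stimulus levels, counts, guess rate, and parameter bounds are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def psychometric(x, alpha, beta, lam, gamma=0.5):
            """Logistic psychometric function with guess rate gamma and lapse rate lam."""
            return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

        def neg_log_likelihood(params, x, n_correct, n_trials):
            alpha, beta, lam = params
            p = np.clip(psychometric(x, alpha, beta, lam), 1e-6, 1 - 1e-6)
            return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

        # Hypothetical per-level data: stimulus levels, trials, and correct responses
        x = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
        n_trials = np.full(5, 40)
        n_correct = np.array([21, 24, 31, 37, 39])
        fit = minimize(neg_log_likelihood, x0=[0.0, 0.5, 0.02],
                       args=(x, n_correct, n_trials),
                       bounds=[(-10, 10), (0.01, 5.0), (0.0, 0.1)],
                       method="L-BFGS-B")
        alpha_hat, beta_hat, lapse_hat = fit.x   # threshold, slope, lapse rate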

  10. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    DOE PAGES

    Burr, Tom; Hamada, Michael S.; Howell, John; ...

    2013-01-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
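
    As a minimal illustration of the dependence on the assumed data-generating mechanism, the Python sketch below sets an alarm threshold for a target false alarm rate from historical PM residuals in two ways: a distribution-free empirical quantile and a Gaussian quantile. The residuals, the target rate, and the choice of these two particular assumptions are placeholders, not the paper's examples.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        residuals = rng.normal(0.0, 1.0, size=2000)   # PM residuals = data - prediction (hypothetical)
        alpha = 0.001                                  # target false alarm rate per test

        # (1) Distribution-free: empirical quantile of historical residuals
        t_empirical = np.quantile(residuals, 1 - alpha)

        # (2) Parametric: assume Gaussian residuals and use a normal quantile
        t_normal = residuals.mean() + stats.norm.ppf(1 - alpha) * residuals.std(ddof=1)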

  11. Bayesian estimation of sensitivity and specificity of a milk pregnancy-associated glycoprotein-based ELISA and of transrectal ultrasonographic exam for diagnosis of pregnancy at 28-45 days following breeding in dairy cows.

    PubMed

    Dufour, Simon; Durocher, Jean; Dubuc, Jocelyn; Dendukuri, Nandini; Hassan, Shereen; Buczinski, Sébastien

    2017-05-01

    Using a milk sample for pregnancy diagnosis in dairy cattle is extremely convenient due to the low technical inputs required for collection of biological materials. Determining the accuracy of a novel pregnancy diagnostic test that relies on a milk sample is, however, difficult since no gold standard test is available for comparison. The objective of the current study was to estimate the diagnostic accuracy of the milk PAG-based ELISA and of the transrectal ultrasonographic (TUS) exam for determining pregnancy status of individual dairy cows using a methodology suited for test validation in the absence of a gold standard. Secondary objectives were to evaluate whether test accuracy varies with cow characteristics and to identify the optimal ELISA optical density threshold for PAG test interpretation. Cows (n=519) from 18 commercial dairies tested with both TUS and PAG between 28 and 45 days following breeding were included in the study. Other covariates (number of days since breeding, parity, and daily milk production) hypothesized to affect TUS or PAG test accuracy were measured. A Bayesian hierarchical latent class model (LCM) methodology assuming conditional independence between tests was used to obtain estimates of the tests' sensitivities (Se) and specificities (Sp), to evaluate the impact of covariates on these, and to compute misclassification costs across a range of ELISA thresholds. Very little disagreement was observed between tests, with only 23 cows yielding discordant results. Using the LCM model with non-informative priors for the test accuracy parameters, median (95% credibility intervals [CI]) TUS Se and Sp estimates of 0.96 (0.91, 1.00) and 0.99 (0.97, 1.0) were obtained. For the PAG test, a median (95% CI) Se of 0.99 (0.98, 1.00) and Sp of 0.95 (0.89, 1.0) were observed. The impact of adjusting for conditional dependence between tests was negligible. Test accuracy of the PAG test varied slightly by parity number. When assuming a false negative to false positive cost ratio ≥ 3:1, the optimal ELISA optical density threshold allowing minimization of misclassification costs was 0.25. In conclusion, both TUS and PAG showed excellent accuracy for pregnancy diagnosis in dairy cows. When using the PAG test, a threshold of 0.25 could be used for test interpretation. Copyright © 2017 Elsevier B.V. All rights reserved.
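
    The threshold-selection step, scanning candidate optical-density cut-offs and choosing the one minimizing expected misclassification cost under an assumed false-negative to false-positive cost ratio, can be sketched as below (Python). In the study the Se(t) and Sp(t) curves come from the latent class model; here they, along with the prevalence, are placeholder inputs.

        import numpy as np

        def optimal_threshold(thresholds, se, sp, prevalence, cost_ratio=3.0):
            """Return the cut-off minimizing expected misclassification cost.
            se[i], sp[i]: sensitivity/specificity at thresholds[i] (placeholders);
            cost_ratio: cost of a false negative relative to a false positive."""
            cost = (cost_ratio * prevalence * (1 - se)        # expected false-negative cost
                    + (1 - prevalence) * (1 - sp))            # expected false-positive cost
            return thresholds[int(np.argmin(cost))]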

  12. A comparison of underwater hearing sensitivity in bottlenose dolphins (Tursiops truncatus) determined by electrophysiological and behavioral methods.

    PubMed

    Houser, Dorian S; Finneran, James J

    2006-09-01

    Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.

  13. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  14. Effects of Reducing the Frequency and Duration Criteria for Binge Eating on Lifetime Prevalence of Bulimia Nervosa and Binge Eating Disorder: Implications for DSM-5

    PubMed Central

    Trace, Sara E.; Thornton, Laura M.; Root, Tammy L.; Mazzeo, Suzanne E.; Lichtenstein, Paul; Pedersen, Nancy L.; Bulik, Cynthia M.

    2011-01-01

    Objective We assessed the impact of reducing the binge eating frequency and duration thresholds on the diagnostic criteria for bulimia nervosa (BN) and binge eating disorder (BED). Method We estimated the lifetime population prevalence of BN and BED in 13,295 female twins from the Swedish Twin study of Adults: Genes and Environment employing a range of frequency and duration thresholds. External validation (risk to co-twin) was used to investigate empirical evidence for an optimal binge eating frequency threshold. Results The lifetime prevalence estimates of BN and BED increased linearly as the frequency criterion decreased. As the required duration increased, the prevalence of BED decreased slightly. Discontinuity in co-twin risk was observed in BN between at least four times per month and at least five times per month. This model could not be fit for BED. Discussion The proposed changes to the DSM-5 binge eating frequency and duration criteria would allow for better detection of binge eating pathology without resulting in a markedly higher lifetime prevalence of BN or BED. PMID:21882218

  15. Local health care expenditure plans and their opportunity costs.

    PubMed

    Karlsberg Schaffer, Sarah; Sussex, Jon; Devlin, Nancy; Walker, Andrew

    2015-09-01

    In the UK, approval decisions by Health Technology Assessment bodies are made using a cost per quality-adjusted life year (QALY) threshold, the value of which is based on little empirical evidence. We test the feasibility of estimating the "true" value of the threshold in NHS Scotland using information on marginal services (those planned to receive significant (dis)investment). We also explore how the NHS makes spending decisions and the role of cost per QALY evidence in this process. We identify marginal services using NHS Board-level responses to the 2012/13 Budget Scrutiny issued by the Scottish Government, supplemented with information on prioritisation processes derived from interviews with Finance Directors. We search the literature for cost-effectiveness evidence relating to marginal services. The cost-effectiveness estimates of marginal services vary hugely and thus it was not possible to obtain a reliable estimate of the threshold. This is unsurprising given the finding that cost-effectiveness evidence is rarely used to justify expenditure plans, which are driven by a range of other factors. Our results highlight the differences in objectives between HTA bodies and local health service decision makers. We also demonstrate that, even if it were desirable, the use of cost-effectiveness evidence at local level would be highly challenging without extensive investment in health economics resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Estimating the impact of adopting the revised United Kingdom acetaminophen treatment nomogram in the U.S. population.

    PubMed

    Levine, Michael; Stellpflug, Sam; Pizon, Anthony F; Traub, Stephen; Vohra, Rais; Wiegand, Timothy; Traub, Nicole; Tashman, David; Desai, Shoma; Chang, Jamie; Nathwani, Dhruv; Thomas, Stephen

    2017-07-01

    Acetaminophen toxicity is common in clinical practice. In recent years, several European countries have lowered the treatment threshold, which has resulted in an increased number of patients being treated, with questionable clinical benefit. The primary objective of this study is to estimate the cost and associated burden to the United States (U.S.) healthcare system if such a change were adopted in the U.S. This study is a retrospective review of all patients age 14 years or older who were admitted to one of eight different hospitals located throughout the U.S. with acetaminophen exposures during a five-and-a-half-year span, from 1 January 2008 to 30 June 2013. Those patients who would be treated with the revised nomogram, but not the current nomogram, were included. The cost of such treatment was extrapolated to a national level. 139 subjects were identified who would be treated with the revised nomogram but not the current nomogram. Extrapolating these numbers nationally, an additional 4507 (95%CI 3641-8751) Americans would be treated annually for acetaminophen toxicity. The cost of lowering the treatment threshold is estimated to be $45 million (95%CI 36,400,000-87,500,000) annually. Adopting the revised treatment threshold in the U.S. would result in a significant cost, yet provide an unclear clinical benefit.

  17. Lowering thresholds for speed limit enforcement impairs peripheral object detection and increases driver subjective workload.

    PubMed

    Bowden, Vanessa K; Loft, Shayne; Tatasciore, Monica; Visser, Troy A W

    2017-01-01

    Speed enforcement reduces incidences of speeding, thus reducing traffic accidents. Accordingly, it has been argued that stricter speed enforcement thresholds could further improve road safety. Effective speed monitoring, however, requires driver attention and effort, and human information-processing capacity is limited. Emphasizing speed monitoring may therefore reduce resource availability for other aspects of safe vehicle operation. We investigated whether lowering enforcement thresholds in a simulator setting would introduce further competition for limited cognitive and visual resources. Eighty-four young adult participants drove under conditions where they could be fined for travelling 1, 6, or 11 km/h over a 50 km/h speed limit. Stricter speed enforcement led to greater subjective workload and significant decrements in peripheral object detection. These data indicate that the benefits of reduced speeding with stricter enforcement may be at least partially offset by greater mental demands on drivers, reducing their responses to safety-critical stimuli on the road. It is likely these results underestimate the impact of stricter speed enforcement on real-world drivers, who experience significantly greater pressures to drive at or above the speed limit. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Tolerance for High Flavanol Cocoa Powder in Semisweet Chocolate

    PubMed Central

    Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.

    2013-01-01

    Endogenous polyphenolic compounds in cacao impart both bitter and astringent characteristics to chocolate confections. While an increase in these compounds may be desirable from a health perspective, they are generally incongruent with consumer expectations. Traditionally, chocolate products undergo several processing steps (e.g., fermentation and roasting) that decrease polyphenol content, and thus bitterness. The objective of this study was to estimate group rejection thresholds for increased content of cocoa powder produced from under-fermented cocoa beans in a semisweet chocolate-type confection. The group rejection threshold was equivalent to 80.7% of the non-fat cocoa solids coming from the under-fermented cocoa powder. Contrary to expectations, there were no differences in rejection thresholds when participants were grouped based on their self-reported preference for milk or dark chocolate, indicating that these groups react similarly to an increase in high cocoa flavanol containing cocoa powder. PMID:23792967

  19. Tolerance for high flavanol cocoa powder in semisweet chocolate.

    PubMed

    Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E

    2013-06-21

    Endogenous polyphenolic compounds in cacao impart both bitter and astringent characteristics to chocolate confections. While an increase in these compounds may be desirable from a health perspective, they are generally incongruent with consumer expectations. Traditionally, chocolate products undergo several processing steps (e.g., fermentation and roasting) that decrease polyphenol content, and thus bitterness. The objective of this study was to estimate group rejection thresholds for increased content of cocoa powder produced from under-fermented cocoa beans in a semisweet chocolate-type confection. The group rejection threshold was equivalent to 80.7% of the non-fat cocoa solids coming from the under-fermented cocoa powder. Contrary to expectations, there were no differences in rejection thresholds when participants were grouped based on their self-reported preference for milk or dark chocolate, indicating that these groups react similarly to an increase in high cocoa flavanol containing cocoa powder.

  20. Rational use of Xpert testing in patients with presumptive TB: clinicians should be encouraged to use the test-treat threshold.

    PubMed

    Decroo, Tom; Henríquez-Trujillo, Aquiles R; De Weggheleire, Anja; Lynen, Lutgarde

    2017-10-11

    A recently published Ugandan study on tuberculosis (TB) diagnosis in HIV-positive patients with presumptive smear-negative TB showed that out of 90 patients who started TB treatment, 20% (18/90) had a positive Xpert MTB/RIF (Xpert) test, 24% (22/90) had a negative Xpert test, and 56% (50/90) were started without Xpert testing. Although Xpert testing was available, clinicians did not use it systematically. Here we aim to show more objectively the process of clinical decision-making. First, we estimated that the pre-test probability of TB, or the prevalence of TB in smear-negative HIV-infected patients with signs of presumptive TB in Uganda, was 17%. Second, we argue that the treatment threshold, the probability of disease at which the utility of treating and not treating is the same, and above which treatment should be started, should be determined. In Uganda, the treatment threshold has not yet been formally established. In Rwanda, the calculated treatment threshold was 12%. Hence, one could argue that the threshold was reached without even considering additional tests. Still, Xpert testing can be useful when the probability of disease is above the treatment threshold, but only when a negative Xpert result can lower the probability of disease enough to cross the treatment threshold. This occurs when the pre-test probability is lower than the test-treat threshold, the probability of disease at which the utility of testing and the utility of treating without testing are the same. We estimated that the test-treat threshold was 28%. Finally, to show the effect of the presence or absence of arguments on the probability of TB, we use confirming and excluding power, and a log10 odds scale to combine arguments. If the pre-test probability is above the test-treat threshold, empirical treatment is justified, because even a negative Xpert will not lower the post-test probability below the treatment threshold. However, Xpert testing for the diagnosis of TB should be performed in patients for whom the probability of TB is lower than the test-treat threshold. Especially in resource-constrained settings, clinicians should be encouraged to take clinical decisions and use scarce resources rationally.
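
    The decision arithmetic behind this argument is ordinary Bayesian updating on the odds scale: multiply the pre-test odds by a likelihood ratio (the confirming or excluding power of a finding, which adds on a log10 odds scale) and compare the post-test probability with the treatment threshold. The Python sketch below uses the 17% pre-test probability and 12% treatment threshold quoted above, but the negative likelihood ratio for Xpert is an illustrative placeholder, not a value from the study.

        import math

        def post_test_probability(pre_prob, likelihood_ratio):
            """Bayes on the odds scale: post-odds = pre-odds * LR."""
            pre_odds = pre_prob / (1 - pre_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1 + post_odds)

        pre_prob = 0.17            # pre-test probability of TB (from the abstract)
        lr_negative_xpert = 0.25   # illustrative negative likelihood ratio (assumption)
        treatment_threshold = 0.12

        p_post = post_test_probability(pre_prob, lr_negative_xpert)
        log10_odds_shift = math.log10(lr_negative_xpert)   # excluding power on the log10 odds scale
        treat_despite_negative_test = p_post >= treatment_threshold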

  1. Assessing the Electrode-Neuron Interface with the Electrically Evoked Compound Action Potential, Electrode Position, and Behavioral Thresholds.

    PubMed

    DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg

    2016-06-01

    Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.

  2. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
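
    For orientation, the quantity being resampled is the conventional MUSIC spatial spectrum. The Python sketch below computes that pseudospectrum for a uniform linear array from a sample covariance matrix; the pseudorandom resampling, the bank of parallel estimators, and the voting step described above are not shown, and the array geometry and half-wavelength spacing are assumptions of the sketch.

        import numpy as np

        def music_spectrum(R, n_sources, n_sensors, angles_deg, d=0.5):
            """Conventional MUSIC pseudospectrum for a uniform linear array.
            R: sample covariance matrix (n_sensors x n_sensors); d: spacing in wavelengths."""
            eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
            En = eigvecs[:, : n_sensors - n_sources]      # noise subspace
            spectrum = []
            for theta in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
                denom = np.linalg.norm(En.conj().T @ a) ** 2
                spectrum.append(1.0 / denom)
            return np.array(spectrum)                     # peaks indicate candidate DOAs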

  3. Event-related potential measures of gap detection threshold during natural sleep.

    PubMed

    Muller-Gass, Alexandra; Campbell, Kenneth

    2014-08-01

    The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. These subjects were presented with 2, 4, 8 or 16 ms gaps occurring in 1.5 duration white noise. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. A large P2 was however elicited and was significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  4. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

    Fall prevention and detection systems have to overcome many challenges in order to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewing direction between the camera and objects or body parts, and these issues have to be taken into consideration. This paper proposes the use of a dynamic-threshold-based and bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and differentiating between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding-box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlay problem of a single Kinect. This work contributes to the fusion of multiple Kinect-based skeletons, based on dynamic threshold and bounding-box posture analysis, which is the only research work reported so far.
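
    A hedged sketch of the two-stage decision described above, in Python: trigger when the head-velocity peak exceeds a dynamic threshold, then confirm the fall from the skeleton's bounding-box proportions. The threshold update rule (mean plus k standard deviations of recent peaks) and the width-versus-height test are illustrative assumptions, not the authors' exact formulation; joints_xyz is assumed to be an (N, 3) array of fused skeleton joint positions.

        import numpy as np

        def dynamic_threshold(recent_peak_velocities, k=2.0):
            """Illustrative dynamic threshold: mean + k * std of recent head-velocity peaks."""
            v = np.asarray(recent_peak_velocities, dtype=float)
            return v.mean() + k * v.std()

        def bounding_box_confirms_fall(joints_xyz, ratio=1.0):
            """Confirm a fall if the skeleton's bounding box is wider than it is tall
            (an illustrative rule; the paper's exact posture test may differ)."""
            mins, maxs = joints_xyz.min(axis=0), joints_xyz.max(axis=0)
            width = max(maxs[0] - mins[0], maxs[2] - mins[2])   # horizontal extent (x or z)
            height = maxs[1] - mins[1]                          # vertical extent (y)
            return width > ratio * height

        def detect_fall(head_velocity_peak, recent_peaks, joints_xyz):
            if head_velocity_peak > dynamic_threshold(recent_peaks):
                return bounding_box_confirms_fall(joints_xyz)
            return False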

  5. A 500-kiloton airburst over Chelyabinsk and an enhanced hazard from small impactors

    NASA Astrophysics Data System (ADS)

    Brown, P. G.; Assink, J. D.; Astiz, L.; Blaauw, R.; Boslough, M. B.; Borovička, J.; Brachet, N.; Brown, D.; Campbell-Brown, M.; Ceranna, L.; Cooke, W.; de Groot-Hedlin, C.; Drob, D. P.; Edwards, W.; Evers, L. G.; Garces, M.; Gill, J.; Hedlin, M.; Kingery, A.; Laske, G.; Le Pichon, A.; Mialle, P.; Moser, D. E.; Saffer, A.; Silber, E.; Smets, P.; Spalding, R. E.; Spurný, P.; Tagliaferri, E.; Uren, D.; Weryk, R. J.; Whitaker, R.; Krzeminski, Z.

    2013-11-01

    Most large (over a kilometre in diameter) near-Earth asteroids are now known, but recognition that airbursts (or fireballs resulting from nuclear-weapon-sized detonations of meteoroids in the atmosphere) have the potential to do greater damage than previously thought has shifted an increasing portion of the residual impact risk (the risk of impact from an unknown object) to smaller objects. Above the threshold size of impactor at which the atmosphere absorbs sufficient energy to prevent a ground impact, most of the damage is thought to be caused by the airburst shock wave, but owing to lack of observations this is uncertain. Here we report an analysis of the damage from the airburst of an asteroid about 19 metres (17 to 20 metres) in diameter southeast of Chelyabinsk, Russia, on 15 February 2013, estimated to have an energy equivalent of approximately 500 (+/-100) kilotons of trinitrotoluene (TNT, where 1 kiloton of TNT = 4.185×10¹² joules). We show that a widely referenced technique of estimating airburst damage does not reproduce the observations, and that the mathematical relations based on the effects of nuclear weapons (almost always used with this technique) overestimate blast damage. This suggests that earlier damage estimates near the threshold impactor size are too high. We performed a global survey of airbursts of a kiloton or more (including Chelyabinsk), and find that the number of impactors with diameters of tens of metres may be an order of magnitude higher than estimates based on other techniques. This suggests a non-equilibrium (if the population were in a long-term collisional steady state the size-frequency distribution would either follow a single power law or there must be a size-dependent bias in other surveys) in the near-Earth asteroid population for objects 10 to 50 metres in diameter, and shifts more of the residual impact risk to these sizes.

  6. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  7. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations for parameter configuration are given. PMID:24671826

  8. Diversity Outbred Mice Identify Population-Based Exposure Thresholds and Genetic Factors that Influence Benzene-Induced Genotoxicity

    PubMed Central

    Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.

    2014-01-01

    Background Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity. Environ Health Perspect 123:237–245; http://dx.doi.org/10.1289/ehp.1408202 PMID:25376053

  9. Objective Diagnosis of Cervical Cancer by Tissue Protein Profile Analysis

    NASA Astrophysics Data System (ADS)

    Patil, Ajeetkumar; Bhat, Sujatha; Rai, Lavanya; Kartha, V. B.; Chidangil, Santhosh

    2011-07-01

    Protein profiles of homogenized normal cervical tissue samples from hysterectomy subjects and of cancerous cervical tissues from biopsy samples collected from patients with different stages of cervical cancer were recorded using High Performance Liquid Chromatography coupled with Laser Induced Fluorescence (HPLC-LIF). The protein profiles were subjected to Principal Component Analysis to derive statistically significant parameters. Diagnosis of sample types was carried out by matching three parameters—scores of factors, squared residuals, and Mahalanobis Distance. ROC and Youden's Index curves for calibration standards were used for objective estimation of the optimum threshold for decision making and performance.
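
    The Youden-based threshold choice mentioned above amounts to picking the point on the ROC curve that maximizes J = sensitivity + specificity − 1. A minimal Python sketch follows; the labels and scores are placeholders (in the study the score would be one of the matched parameters, e.g. the Mahalanobis distance).

        import numpy as np
        from sklearn.metrics import roc_curve

        def youden_threshold(labels, scores):
            """Return the score cut-off maximizing Youden's J = TPR - FPR."""
            fpr, tpr, thresholds = roc_curve(labels, scores)
            j = tpr - fpr
            return thresholds[int(np.argmax(j))]

        # labels: 1 = cancerous, 0 = normal; scores: e.g. distance from the normal class (hypothetical)
        labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])
        scores = np.array([0.2, 0.8, 0.3, 2.5, 1.9, 3.1, 0.5, 1.2])
        cutoff = youden_threshold(labels, scores)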

  10. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.

    PubMed

    Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D

    2007-06-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC) algorithm, with that of threshold-based techniques, which are the current state of the art in clinical practice. As in the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.

  11. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
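
    A minimal SDT sketch of the quantity these methods target: d′ computed from hit and false-alarm rates in a Yes-No task, and the stimulus intensity at which d′ reaches the criterion value of 1. The interpolation over fixed intensities shown here (Python) is only an illustration of the definition, not the adaptive qYN/qFC procedures, and the hit rates, false-alarm rate, and intensities are hypothetical.

        import numpy as np
        from scipy.stats import norm

        def d_prime(hit_rate, fa_rate, eps=1e-3):
            """Yes-No sensitivity: d' = z(H) - z(F), with rates clipped away from 0 and 1."""
            h = np.clip(hit_rate, eps, 1 - eps)
            f = np.clip(fa_rate, eps, 1 - eps)
            return norm.ppf(h) - norm.ppf(f)

        # Hypothetical per-intensity hit rates at a fixed false-alarm rate
        intensities = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        hits = np.array([0.55, 0.62, 0.74, 0.88, 0.97])
        fa = 0.20
        dps = d_prime(hits, fa)
        # Sensitivity threshold: intensity where d' crosses 1 (linear interpolation)
        threshold = np.interp(1.0, dps, intensities)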

  12. Electroconvulsive therapy stimulus titration: Not all it seems.

    PubMed

    Rosenman, Stephen J

    2018-05-01

    To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.

  13. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042

  14. Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Pang; Yu, Yue

    2017-05-01

    This paper presents a position estimation model for a switched reluctance motor based on a single threshold angle. In view of the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (a 7.5° threshold angle for a 12/8 SRM). The sensorless model is built in Matlab/Simulink, simulations are carried out under both steady-state and transient conditions, and the validity and feasibility of the method are verified.
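
    A hedged Python sketch of the comparison described above: the phase flux linkage is obtained by integrating v − iR over time and compared, at the measured current, against a stored flux-linkage curve for the threshold angle; once the real-time flux exceeds the stored value, the rotor is taken to have passed the threshold position. The trapezoidal integration, the lookup-table representation of the threshold-angle curve, and all variable names are assumptions of this sketch, not the paper's Simulink model.

        import numpy as np

        def estimate_flux(voltage, current, resistance, dt):
            """Phase flux linkage by integrating (v - i*R) over time (trapezoidal rule)."""
            emf = voltage - resistance * current
            return np.concatenate(([0.0], np.cumsum(0.5 * (emf[1:] + emf[:-1]) * dt)))

        def passed_threshold_angle(flux_now, current_now, threshold_curve):
            """threshold_curve = (i_grid, psi_grid): stored flux linkage vs. current at the
            7.5 deg threshold angle (placeholder lookup). The rotor has passed the threshold
            angle once the real-time flux exceeds the stored value at the same current."""
            i_grid, psi_grid = threshold_curve
            psi_threshold = np.interp(current_now, i_grid, psi_grid)
            return flux_now >= psi_threshold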

  15. A real signal and its states

    NASA Astrophysics Data System (ADS)

    Basiladze, S. G.

    2017-05-01

    The paper describes the general physical theory of signals, carriers of information, which supplements Shannon's abstract classical theory and is applicable in much broader fields, including nuclear physics. It is shown that in the absence of classical noise its place should be taken by the physical threshold of signal perception for objects of both the macrocosm and the microcosm. The signal perception threshold allows the presence of subthreshold (virtual) signal states. For these states, Boolean algebra of logic (A = 0/1) is transformed into the "algebraic logic" of probabilities (0 ≤ a ≤ 1). The similarity and difference of virtual states of macro- and microsignals are elucidated. "Real" and "quantum" information for computers is considered briefly. The maximum information transmission rate is estimated based on physical constants.

  16. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  17. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers, such as disease-specific antibodies, which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate, where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine, which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model, which incorporates parameters for a threshold and for constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of the relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and the relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimating thresholds that differentiate susceptible from protected individuals, which has previously depended on putative statements based on visual inspection of data. PMID:23448322
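
    The profile-likelihood fit of the a:b model can be sketched as follows (Python): for each candidate threshold, the infection probabilities below (a) and above (b) are the observed proportions, the threshold maximizing the binomial log-likelihood is selected, and a likelihood-ratio statistic against a constant-risk model is formed. The data arrays and the restriction to interior candidate thresholds are placeholders; bootstrap confidence intervals and the modified reference distribution are not shown.

        import numpy as np

        def bernoulli_loglik(y):
            """Log-likelihood of binary outcomes y under their own ML probability estimate."""
            p = np.clip(y.mean(), 1e-9, 1 - 1e-9)
            return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        def fit_ab_model(titre, infected):
            """Profile likelihood over candidate thresholds for the a:b model."""
            titre, infected = np.asarray(titre, float), np.asarray(infected, int)
            candidates = np.unique(titre)[1:-1]          # interior candidate thresholds
            best = (-np.inf, None)
            for t in candidates:
                below, above = infected[titre < t], infected[titre >= t]
                ll = bernoulli_loglik(below) + bernoulli_loglik(above)
                if ll > best[0]:
                    best = (ll, t)
            ll_alt, threshold = best
            ll_null = bernoulli_loglik(infected)         # constant-risk (no-threshold) model
            lr_stat = 2 * (ll_alt - ll_null)             # compare to a chi-squared reference
            return threshold, lr_stat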

  18. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd, included lower jaw deformity, nasal erosion, deformed operculum and skinny fish. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The genetic correlation estimates of body and carcass traits with the other deformity traits were not significant due to their relatively high standard errors. Our results show that there are prospects for genetic selection to improve deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes, due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.

  19. Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.

  20. Computing mammographic density from a multiple regression model constructed with image-acquisition parameters from a full-field digital mammographic unit

    PubMed Central

    Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.

    2009-01-01

    Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g. breast compression thickness, radiological thickness, radiation dose, compression force, etc) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively, automatically bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343

  1. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
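
    A minimal sketch of the de-noising pipeline described above, assuming the PyWavelets package: the finest-scale detail coefficients give a robust noise-variance estimate (here the standard MAD estimator stands in for the paper's two-state Gaussian-mixture classification), and a tunable threshold function interpolates between soft and hard thresholding. Function names and the blending parameter are illustrative, not the paper's exact formulation.

```python
import numpy as np
import pywt  # PyWavelets

def improved_threshold(c, thr, alpha=0.5):
    """Compromise between hard and soft thresholding:
    alpha=0 reproduces soft thresholding, alpha=1 approaches hard thresholding."""
    c = np.asarray(c, float)
    shrunk = np.sign(c) * np.maximum(np.abs(c) - (1 - alpha) * thr, 0.0)
    return np.where(np.abs(c) > thr, shrunk, 0.0)

def denoise(signal, wavelet="db4", level=4, alpha=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest-scale detail coefficients (MAD / 0.6745),
    # used here in place of the paper's Gaussian-mixture classification step.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs = [coeffs[0]] + [improved_threshold(c, thr, alpha) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]
```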

  2. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
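
    The objective being optimised here is Otsu's between-class variance evaluated at several thresholds simultaneously. The sketch below implements that objective on a grey-level histogram and pairs it with a naive random search as a stand-in; the flower pollination algorithm with randomized location modification would replace the search loop, not the objective. Function and parameter names are illustrative.

```python
import numpy as np

def otsu_objective(thresholds, hist):
    """Between-class variance for a set of grey-level thresholds (Otsu's criterion)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    var_between = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return var_between

def random_search(hist, k=3, iters=5000, seed=0):
    """Naive random search over k thresholds; a metaheuristic such as flower
    pollination would explore this same objective more efficiently."""
    rng = np.random.default_rng(seed)
    best_t, best_v = None, -1.0
    for _ in range(iters):
        t = np.sort(rng.choice(np.arange(1, len(hist) - 1), size=k, replace=False))
        v = otsu_objective(t, hist)
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v
```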

  3. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design Tone bursts and narrow-band chirps were used to evoke, respectively, auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds with the Kalman-weighted filtering technique, and these were compared with behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA with post-hoc t-tests and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample Thirty young adults aged 18–25 with normal hearing participated in this study. Results When 4000 equivalent response averages were used, the mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. The mean tb-ABR thresholds were lower by 11–15 dB at 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555

  4. Bone-anchored Hearing Aids: correlation between pure-tone thresholds and outcome in three user groups.

    PubMed

    Pfiffner, Flurin; Kompis, Martin; Stieger, Christof

    2009-10-01

    To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review. Tertiary referral center. : Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds as well as speech understanding using German 2-digit numbers and monosyllabic words were measured and analyzed. Correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with BC threshold of the better ear (correlation coefficients, r2 = 0.237 to 0.419, p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.

  5. Extended high-frequency thresholds in college students: effects of music player use and other recreational noise.

    PubMed

    Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K

    2013-09-01

    Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.

  6. Computed radiography utilizing laser-stimulated luminescence: detectability of simulated low-contrast radiographic objects.

    PubMed

    Higashida, Y; Moribe, N; Hirata, Y; Morita, K; Doudanuki, S; Sonoda, Y; Katsuda, N; Hiai, Y; Misumi, W; Matsumoto, M

    1988-01-01

    Threshold contrasts of low-contrast objects with computed radiography (CR) images were compared with those of blue and green emitting screen-film systems by employing the 18-alternative forced choice (18-AFC) procedure. The dependence of the threshold contrast on the incident X-ray exposure and also the object size was studied. The results indicated that the threshold contrasts of CR system were comparable to those of blue and green screen-film systems and decreased with increasing object size, and increased with decreasing incident X-ray exposure. The increase in threshold contrasts was small when the relative incident exposure decreased from 1 to 1/4, and was large when incident exposure was decreased further.

  7. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    PubMed

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
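
    The match-probability step rests on the standard Fellegi-Sunter expectation-maximisation recursion; a compact sketch is given below, operating on binary field-agreement vectors. In the privacy-preserved setting these agreements would be derived from Bloom-filter similarity scores rather than clear-text comparisons, and the paper's threshold extension goes further than this sketch, which only returns the match weights to be scanned. Variable names and starting values are illustrative.

```python
import numpy as np

def em_match_probabilities(gamma, n_iter=100):
    """EM for Fellegi-Sunter style match (m) and non-match (u) probabilities
    from binary field-agreement vectors gamma (n_pairs x n_fields)."""
    gamma = np.asarray(gamma, float)
    n, k = gamma.shape
    m = np.full(k, 0.9)          # P(field agrees | true match), initial guess
    u = np.full(k, 0.1)          # P(field agrees | non-match), initial guess
    pi = 0.05                    # prior proportion of true matches
    for _ in range(n_iter):
        # E-step: posterior probability that each candidate pair is a match
        lm = np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lu = np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        w = pi * lm / (pi * lm + (1 - pi) * lu)
        # M-step: update parameters from the posterior weights
        pi = w.mean()
        m = (w[:, None] * gamma).sum(0) / w.sum()
        u = ((1 - w)[:, None] * gamma).sum(0) / (1 - w).sum()
    weights = gamma @ np.log2(m / u) + (1 - gamma) @ np.log2((1 - m) / (1 - u))
    return m, u, pi, weights     # scan 'weights' to choose a linkage threshold
```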

  8. [Correlation between the thresholds acquired with ASSR and subjective thresholds in cochlear implants in prelingually deaf children].

    PubMed

    Wang, Z; Gu, J; Jiang, X J

    2017-04-20

    Objective: To examine the relationship between the auditory steady-state response (ASSR) threshold and the C-level and behavioral T-level in cochlear implants in prelingually deaf children. Method: One hundred and twelve children with Nucleus CI24R(CA) cochlear implants were divided into a residual-hearing group and a no-residual-hearing group on the basis of preoperative ASSR results, and the two groups were compared on C-level and behavioral T-level one year after implantation. Result: C-level and behavioral T-level differed between the residual-hearing and no-residual-hearing groups (P < 0.05 or P < 0.01). Conclusion: Preoperative ASSR results can be used to estimate the outcome of cochlear implantation, providing a reference for selecting the ear to implant and a reasonable expectation for physicians and the patients' parents. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  9. An approach to derive groundwater and stream threshold values for total nitrogen and ensure good ecological status of associated aquatic ecosystems - example from a coastal catchment to a vulnerable Danish estuary.

    NASA Astrophysics Data System (ADS)

    Hinsby, Klaus; Markager, Stiig; Kronvang, Brian; Windolf, Jørgen; Sonnenborg, Torben; Sørensen, Lærke

    2015-04-01

    Nitrate, which typically makes up the major part (~>90%) of dissolved inorganic nitrogen in groundwater and surface water, is the pollutant most frequently responsible for European groundwater bodies failing to meet the good status objectives of the European Water Framework Directive, generally assessed by comparing groundwater monitoring data with the nitrate quality standard of the Groundwater Directive (50 mg/l, the WHO drinking water standard). Still, while more than 50% of European surface water bodies do not meet the objective of good ecological status, "only" 25% of groundwater bodies fail the objective of good chemical status according to the river basin management plans reported by the EU member states. However, based on a study of interactions between groundwater, streams and a Danish estuary, we argue that nitrate threshold values for aerobic groundwater often need to be significantly below the nitrate quality standard to ensure good ecological status of associated surface water bodies, and hence that the chemical status of European groundwater is worse than indicated by the present assessments. Here we suggest a methodology for deriving groundwater and stream threshold values for total nitrogen ("nitrate") in a coastal catchment based on an assessment of the maximum acceptable nitrogen loadings (thresholds) to the associated vulnerable estuary. The method uses existing information on agricultural practices and point source emissions in the catchment, together with groundwater and stream quantity and quality monitoring data, all of which feed an integrated groundwater and surface water modelling tool; this enables an assessment of total nitrogen loads and of the threshold concentrations required to ensure or restore good ecological status of the investigated estuary. For the catchment of the Horsens estuary in Denmark we estimate the stream and groundwater thresholds for total nitrogen to be about 13 and 27 mg/l (~12 and 25 mg/l of nitrate). The example shown derives nitrogen threshold concentrations for groundwater and streams in a coastal catchment discharging to a vulnerable estuary in Denmark, but the principles may be applied to large river basins with sub-catchments in several countries, such as the Danube or the Rhine. In such cases the relevant countries need to collaborate on the derivation of nitrogen thresholds based on, for example, maximum acceptable nitrogen loadings to the Black Sea or the North Sea, and finally agree on thresholds for different parts of the river basin. Phosphorus is another nutrient which frequently causes or contributes to the eutrophication of surface waters. The transport and retention processes of total phosphorus (TP) are more complex than those of nitrate (or total N), and at present we are able to establish TP thresholds for streams but not for groundwater. Derivation of TP thresholds is covered in an accompanying paper by Kronvang et al.

  10. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than the likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method on a Dutch growth data and a baseball pitcher salary data reveals interesting insights. The proposed method is implemented in the R package cthreshER .
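
    A simple way to reproduce the grid-search estimator described above is to fit, for each candidate threshold, a bent-line (continuous) expectile regression by iteratively reweighted least squares and keep the candidate with the smallest asymmetric loss. The sketch below assumes a single covariate; it is an illustration of the estimator under those assumptions, not the package implementation (cthreshER), and the grid is arbitrary.

```python
import numpy as np

def expectile_fit(X, y, tau=0.5, n_iter=50):
    """Asymmetric least squares (expectile) fit by iteratively reweighted OLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = np.where(y - X @ beta > 0, tau, 1 - tau)       # expectile weights
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
    res = y - X @ beta
    loss = np.sum(np.where(res > 0, tau, 1 - tau) * res**2)
    return beta, loss

def continuous_threshold_expectile(x, y, tau=0.5, grid=None):
    """Grid search for the kink location t in E_tau(y|x) = b0 + b1*x + b2*(x - t)_+."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 81))
    best = None
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        beta, loss = expectile_fit(X, y, tau)
        if best is None or loss < best[2]:
            best = (t, beta, loss)
    return best  # (t_hat, coefficients, loss)
```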

  11. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than the stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
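
    For intuition, the classical spectral estimate of an SIS epidemic threshold is the reciprocal of the adjacency matrix's largest eigenvalue; evaluating it on the deterministic links alone and on the expected adjacency (deterministic links plus connection probabilities) brackets the threshold in the spirit of the upper and lower bounds described above. The sketch below is illustrative only and does not reproduce the paper's specific inequalities; matrix names are hypothetical.

```python
import numpy as np

def epidemic_threshold_bounds(A_det, p_rand):
    """Spectral-radius estimates of the SIS threshold (beta/gamma)_c ~ 1/lambda_max.
    A_det: 0/1 adjacency of deterministic links (assumed non-empty);
    p_rand: probabilities of the remaining stochastic links (zero where a
    deterministic link already exists)."""
    def rho(M):
        return float(np.max(np.real(np.linalg.eigvals(M))))
    rho_det = rho(np.asarray(A_det, float))           # deterministic links only
    rho_full = rho(np.asarray(A_det, float) + np.asarray(p_rand, float))
    # larger spectral radius -> smaller threshold, so the deterministic-only
    # estimate is an upper bound and the expected-adjacency estimate a lower bound
    return 1.0 / rho_full, 1.0 / rho_det
```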

  12. Comparability of children's sedentary time estimates derived from wrist worn GENEActiv and hip worn ActiGraph accelerometer thresholds.

    PubMed

    Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J

    2018-03-28

    To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist mounted GENEActiv accelerometer data, with ST estimated using the waist mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
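
    The threshold-generation step can be illustrated with a simple Youden-index scan over candidate wrist SVM cut-points against the hip-derived sedentary classification. The study itself used ROC analysis with 10-fold hold-out cross-validation, so the in-sample scan below is only a sketch, and the variable names are hypothetical.

```python
import numpy as np

def youden_cutpoint(svm_mg, sedentary_ref):
    """Scan candidate wrist SVM (mg) cut-points and return the one that best
    reproduces a reference sedentary/non-sedentary classification (Youden's J).
    sedentary_ref: boolean minutes classified sedentary by the hip criterion."""
    svm_mg = np.asarray(svm_mg, float)
    sedentary_ref = np.asarray(sedentary_ref, bool)
    best_t, best_j = None, -1.0
    for t in np.unique(svm_mg):
        pred = svm_mg <= t                         # low movement -> sedentary
        sens = np.mean(pred[sedentary_ref])        # sensitivity vs the hip criterion
        spec = np.mean(~pred[~sedentary_ref])      # specificity vs the hip criterion
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```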

  13. Fitting psychometric functions using a fixed-slope parameter: an advanced alternative for estimating odor thresholds with data generated by ASTM E679.

    PubMed

    Peng, Mei; Jaeger, Sara R; Hautus, Michael J

    2014-03-01

    Psychometric functions are predominately used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol-American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
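
    The fixed-slope idea can be sketched as a two-stage maximum-likelihood fit: estimate location and slope from the pooled group data, then re-fit each judge with the slope pinned to the group value so that only the intercept (threshold) moves. The code below assumes a logistic psychometric function with a 1/3 guessing rate for 3-alternative presentations; the exact functional form and fitting details in the paper may differ, and data arrays here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

GUESS = 1.0 / 3.0   # assumed guessing rate for 3-AFC style presentations

def p_correct(logc, mu, slope):
    """Logistic psychometric function on log concentration with a lower asymptote."""
    return GUESS + (1 - GUESS) / (1 + np.exp(-slope * (logc - mu)))

def neg_loglik(mu, slope, logc, n_correct, n_trials):
    p = np.clip(p_correct(logc, mu, slope), 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

def fit_group(logc, n_correct, n_trials):
    """Fit location and slope to the pooled (group) data."""
    logc = np.asarray(logc, float)
    res = minimize(lambda v: neg_loglik(v[0], v[1], logc, n_correct, n_trials),
                   x0=[np.median(logc), 1.0], method="Nelder-Mead")
    return res.x  # (mu_group, slope_group)

def fit_judge(logc, n_correct, n_trials, slope_group):
    """Fit only the location (threshold) for one judge, slope fixed at the group value."""
    logc = np.asarray(logc, float)
    res = minimize_scalar(lambda mu: neg_loglik(mu, slope_group, logc, n_correct, n_trials),
                          bounds=(logc.min() - 2, logc.max() + 2), method="bounded")
    return res.x  # individual threshold on the log-concentration axis
```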

  14. Determination of a Testing Threshold for Lumbar Puncture in the Diagnosis of Subarachnoid Hemorrhage after a Negative Head Computed Tomography: A Decision Analysis.

    PubMed

    Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H

    2016-10-01

    The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings, being evaluated for SAH, after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease where the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the test threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included probability of aneurysmal versus nonaneurysmal SAH after negative head CT, probability of long-term morbidity from initial missed SAH, and probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT to be approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
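
    The testing threshold is the pretest probability at which the expected utilities of the two strategies cross. The sketch below solves that crossing numerically for a deliberately simplified two-strategy model with made-up QALY-like utilities; it is not the paper's decision tree, whose inputs and structure are richer, and the example output is not the paper's 4.3% estimate.

```python
import numpy as np

def testing_threshold(u_missed_sah, u_treated_sah, u_no_sah, u_no_sah_lp):
    """Pretest probability of SAH at which the expected utilities of
    'perform LP' and 'no LP' are equal (the testing threshold).
    All utilities are illustrative placeholders, not the paper's inputs."""
    p = np.linspace(0.0, 0.2, 200001)
    eu_no_lp = p * u_missed_sah + (1 - p) * u_no_sah
    eu_lp = p * u_treated_sah + (1 - p) * u_no_sah_lp   # assumes a perfectly sensitive LP
    return p[np.argmin(np.abs(eu_lp - eu_no_lp))]

# made-up inputs: treating a detected SAH recovers ~0.5 QALY relative to missing it,
# while the LP pathway costs ~0.02 QALY (procedure plus false-positive work-up)
print(testing_threshold(u_missed_sah=19.0, u_treated_sah=19.5,
                        u_no_sah=20.0, u_no_sah_lp=19.98))   # ~0.038
```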

  15. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  16. Destruction of Peptides and Nucleosides in Reactions with Low-Energy Electrons

    NASA Astrophysics Data System (ADS)

    Muftakhov, M. V.; Shchukin, P. V.

    2018-05-01

    Mass-spectrometry of negative ions is used to study dissociative electron capture by molecules of several nucleosides, simplest di- and tripeptides, and modified dipeptides. Energy domains and efficiencies of dissociative capture are determined for the objects under study, and threshold energies of several fragmentation processes are estimated. It is shown that cytidine and peptides are stable against fragmentation due to simple bond breaking at electron energies ranging from 0 to 1 eV.

  17. Airborne Warning and Control System Block 40/45 Upgrade (AWACS Blk 40/45 Upgrade)

    DTIC Science & Technology

    2013-12-01

    [Acronym glossary from the Selected Acquisition Report omitted.] ...Objective and Threshold, and Current Estimate for the IOT&E milestone have been corrected from June 2011 to June 2012, to reflect the actual date of

  18. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  19. Generalised form of a power law threshold function for rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders

    2010-05-01

    The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1·An^α2·D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hour or n-day antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A7d^-1·D^-0.43 and I = 28534·A150d^-1·D^-0.43 for debris flows having volumes greater than 3000 m³. Thresholds are also derived for debris flows greater than 200 000 m³ and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m³ thresholds and from 6 to 0 false alarms for the 200 000 m³ thresholds.
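
    The threshold function and its ROC-based calibration translate directly into code: a storm exceeds the threshold when its intensity is at or above α1·An^α2·D^β (and, for the combined model, above both antecedent-window functions simultaneously), and candidate parameter sets are scored by counting hits, false alarms and misses against the landslide inventory. The sketch below shows the exceedance test and the contingency counts for one parameter set; array names are hypothetical and the grid search over parameters is omitted.

```python
import numpy as np

def exceeds(I, D, An, a1, a2, b):
    """True where a storm (intensity I in mm/h, duration D in h, antecedent rain An)
    lies on or above the threshold I_thr = a1 * An**a2 * D**b."""
    return np.asarray(I, float) >= a1 * np.power(An, a2) * np.power(D, b)

def roc_counts(I, D, An, triggered, a1, a2, b):
    """Hits, false alarms and misses for one parameter set, in the spirit of the
    ROC-based calibration described above. 'triggered' flags storms that caused
    debris flows in the inventory."""
    above = exceeds(I, D, An, a1, a2, b)
    triggered = np.asarray(triggered, bool)
    hits = int(np.sum(above & triggered))
    false_alarms = int(np.sum(above & ~triggered))
    misses = int(np.sum(~above & triggered))
    return hits, false_alarms, misses
```

    Combining two antecedent windows, as in the abstract's dual-function model, would simply require `exceeds` to hold for both parameter sets at once before a storm counts as above threshold.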

  20. Systematic distortions of perceptual stability investigated using immersive virtual reality

    PubMed Central

    Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew

    2010-01-01

    Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248

  1. MRI-leukoaraiosis thresholds and the phenotypic expression of dementia

    PubMed Central

    Mitchell, Sandra M.; Brumback, Babette; Tanner, Jared J.; Schmalfuss, Ilona; Lamar, Melissa; Giovannetti, Tania; Heilman, Kenneth M.; Libon, David J.

    2012-01-01

    Objective: To examine the concept of leukoaraiosis thresholds on working memory, visuoconstruction, memory, and language in dementia. Methods: A consecutive series of 83 individuals with insidious onset/progressive dementia clinically diagnosed with Alzheimer disease (AD) or small vessel vascular dementia (VaD) completed neuropsychological measures assessing working memory, visuoconstruction, episodic memory, and language. A clinical MRI scan was used to quantify leukoaraiosis, total white matter, hippocampus, lacune, and intracranial volume. We performed analyses to detect the lowest level of leukoaraiosis associated with impairment on the neuropsychological measures. Results: Leukoaraiosis ranged from 0.63% to 23.74% of participants' white matter. Leukoaraiosis explained a significant amount of variance in working memory performance when it involved 3% or more of the white matter with curve estimations showing the relationship to be nonlinear in nature. Greater leukoaraiosis (13%) was implicated for impairment in visuoconstruction. Relationships between leukoaraiosis, episodic memory, and language measures were linear or flat. Conclusions: Leukoaraiosis involves specific threshold points for working memory and visuoconstructional tests in AD/VaD spectrum dementia. These data underscore the need to better understand the threshold at which leukoaraiosis affects and alters the phenotypic expression in insidious onset dementia syndromes. PMID:22843264

  2. Characterization of Renal Glucose Reabsorption in Response to Dapagliflozin in Healthy Subjects and Subjects With Type 2 Diabetes

    PubMed Central

    DeFronzo, Ralph A.; Hompesch, Marcus; Kasichayanula, Sreeneeranj; Liu, Xiaoni; Hong, Ying; Pfister, Marc; Morrow, Linda A.; Leslie, Bruce R.; Boulton, David W.; Ching, Agatha; LaCreta, Frank P.; Griffen, Steven C.

    2013-01-01

    OBJECTIVE To examine the effect of dapagliflozin, a sodium-glucose cotransporter 2 (SGLT2) inhibitor, on the major components of renal glucose reabsorption (decreased maximum renal glucose reabsorptive capacity [TmG], increased splay, and reduced threshold), using the pancreatic/stepped hyperglycemic clamp (SHC) technique. RESEARCH DESIGN AND METHODS Subjects with type 2 diabetes (n = 12) and matched healthy subjects (n = 12) underwent pancreatic/SHC (plasma glucose range 5.5–30.5 mmol/L) at baseline and after 7 days of dapagliflozin treatment. A pharmacodynamic model was developed to describe the major components of renal glucose reabsorption for both groups and then used to estimate these parameters from individual glucose titration curves. RESULTS At baseline, type 2 diabetic subjects had elevated TmG, splay, and threshold compared with controls. Dapagliflozin treatment reduced the TmG and splay in both groups. However, the most significant effect of dapagliflozin was a reduction of the renal threshold for glucose excretion in type 2 diabetic and control subjects. CONCLUSIONS The SGLT2 inhibitor dapagliflozin improves glycemic control in diabetic patients by reducing the TmG and threshold at which glucose is excreted in the urine. PMID:23735727

  3. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced threshold model to the framework of GS, and specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in accuracy with the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4%, 2.4%, and 5.7% points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458

  4. Deactivating stimulation sites based on low-rate thresholds improves spectral ripple and speech reception thresholds in cochlear implant users.

    PubMed

    Zhou, Ning

    2017-03-01

    The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.

  5. Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold.

    PubMed

    Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark

    2015-02-01

    Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability that it is < £30,000 per QALY is 0.97. Additional 'structural' uncertainty suggests, on balance, that the central or best estimate is, if anything, likely to be an overestimate. The health effects of changes in expenditure are greater when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. 
Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. The National Institute for Health Research-Medical Research Council Methodology Research Programme.

  6. On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Junran; Flagg, Cody; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne

    2015-12-01

    Current approaches to estimate threshold friction velocity (TFV) of soil particle movement, including both experimental and empirical methods, suffer from various disadvantages, and they are particularly not effective to estimate TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust, etc.), however, no studies have attempted to directly relate soil TFV to their spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near infrared (VIS-NIR, 350-2500 nm) spectral region, and to identify the best range of wavelengths or combinations of wavelengths to predict TFV. Threshold friction velocity of 31 soils, along with their reflectance spectra and texture were measured in the Mojave Desert, California and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400-700 nm) and the short-wavelength infrared (SWIR) area (1100-2500 nm). A partial least squares regression analysis (PLSR) confirmed the significant bands that were identified by correlation analysis. The PLSR further identified the strong relationship between the first-difference transformation and TFV at several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectrum range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or mapping of TFV using airborne/satellite sensors.

  7. Sugar maple growth in relation to nutrition and stress in the northeastern United States.

    PubMed

    Long, Robert P; Horsley, Stephen B; Hallett, Richard A; Bailey, Scott W

    2009-09-01

    Sugar maple, Acer saccharum, decline disease is incited by multiple disturbance factors when imbalanced calcium (Ca), magnesium (Mg), and manganese (Mn) act as predisposing stressors. Our objective in this study was to determine whether factors affecting sugar maple health also affect growth as estimated by basal area increment (BAI). We used 76 northern hardwood stands in northern Pennsylvania, New York, Vermont, and New Hampshire, USA, and found that sugar maple growth was positively related to foliar concentrations of Ca and Mg and stand level estimates of sugar maple crown health during a high stress period from 1987 to 1996. Foliar nutrient threshold values for Ca, Mg, and Mn were used to analyze long-term BAI trends from 1937 to 1996. Significant (P < or = 0.05) nutrient threshold-by-time interactions indicate changing growth in relation to nutrition during this period. Healthy sugar maples sampled in the 1990s had decreased growth in the 1970s, 10-20 years in advance of the 1980s and 1990s decline episode in Pennsylvania. Even apparently healthy stands that had no defoliation, but had below-threshold amounts of Ca or Mg and above-threshold Mn (from foliage samples taken in the mid 1990s), had decreasing growth by the 1970s. Co-occurring black cherry, Prunus serotina, in a subset of the Pennsylvania and New York stands, showed opposite growth responses with greater growth in stands with below-threshold Ca and Mg compared with above-threshold stands. Sugar maple growing on sites with the highest concentrations of foliar Ca and Mg show a general increase in growth from 1937 to 1996 while other stands with lower Ca and Mg concentrations show a stable or decreasing growth trend. We conclude that acid deposition induced changes in soil nutrient status that crossed a threshold necessary to sustain sugar maple growth during the 1970s on some sites. While nutrition of these elements has not been considered in forest management decisions, our research shows species specific responses to Ca and Mg that may reduce health and growth of sugar maple or change species composition, if not addressed.

  8. Cost-effectiveness of the faecal immunochemical test at a range of positivity thresholds compared with the guaiac faecal occult blood test in the NHS Bowel Cancer Screening Programme in England

    PubMed Central

    Halloran, Stephen

    2017-01-01

    Objectives Through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP), men and women in England aged between 60 and 74 years are invited for colorectal cancer (CRC) screening every 2 years using the guaiac faecal occult blood test (gFOBT). The aim of this analysis was to estimate the cost–utility of the faecal immunochemical test for haemoglobin (FIT) compared with gFOBT for a cohort beginning screening aged 60 years at a range of FIT positivity thresholds. Design We constructed a cohort-based Markov state transition model of CRC disease progression and screening. Screening uptake, detection, adverse event, mortality and cost data were taken from BCSP data and national sources, including a recent large pilot study of FIT screening in the BCSP. Results Our results suggest that FIT is cost-effective compared with gFOBT at all thresholds, resulting in cost savings and quality-adjusted life years (QALYs) gained over a lifetime time horizon. FIT was cost-saving (p<0.001) and resulted in QALY gains of 0.014 (95% CI 0.012 to 0.017) at the base case threshold of 180 µg Hb/g faeces. Greater health gains and cost savings were achieved as the FIT threshold was decreased due to savings in cancer management costs. However, at lower thresholds, FIT was also associated with more colonoscopies (increasing from 32 additional colonoscopies per 1000 people invited for screening for FIT 180 µg Hb/g faeces to 421 additional colonoscopies per 1000 people invited for screening for FIT 20 µg Hb/g faeces over a 40-year time horizon). Parameter uncertainty had limited impact on the conclusions. Conclusions This is the first published economic analysis of FIT screening in England using data directly comparing FIT with gFOBT in the NHS BSCP. These results for a cohort starting screening aged 60 years suggest that FIT is highly cost-effective at all thresholds considered. Further modelling is needed to estimate economic outcomes for screening across all age cohorts simultaneously. PMID:29079605

  9. LINKING NUTRIENTS TO ALTERATIONS IN AQUATIC LIFE ...

    EPA Pesticide Factsheets

    This report estimates the natural background and ambient concentrations of primary producer abundance indicators in California wadeable streams, identifies thresholds of adverse effects of nutrient-stimulated primary producer abundance on benthic macroinvertebrate and algal community structure in CA wadeable streams, and evaluates existing nutrient-algal response models for CA wadeable streams (Tetra Tech 2006), with recommendations for improvements. This information will be included in an assessment of the science forming the basis of recommendations for stream nutrient criteria for the state of California. The objectives of the project are three-fold: 1. Estimate the natural background and ambient concentrations of nutrients and candidate indicators of primary producer abundance in California wadeable streams; 2. Explore relationships and identify thresholds of adverse effects of nutrient concentrations and primary producer abundance on indicators of aquatic life use in California wadeable streams; and 3. Evaluate the Benthic Biomass Spreadsheet Tool (BBST) for California wadeable streams using existing data sets, and recommend avenues for refinement. The intended outcome of this study is NOT final regulatory endpoints for nutrient and response indicators for California wadeable streams.

  10. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from those of the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) was established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) was determined; these pixel counts are the objects used in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each of these thresholds clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.

  11. Genetic parameters of body weight and ascites in broilers: effect of different incidence rates of ascites syndrome.

    PubMed

    Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A

    2017-02-01

    1. The objectives of the current study were to investigate the effect of incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods of restricted maximum likelihood and Bayesian approach via Gibbs sampling were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error for direct heritabilities was also calculated. 3. The results showed that when incidence rates of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations were decreased by increasing incidence rates of ascites in the population from 0.678 and 0.587 at 5% level of ascites to 0.393 and -0.260 at 50% occurrence for linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model because it resulted in higher heritability estimates compared with the linear model and that BW5 could be one of the main components of selection goals.

  12. Impaired hand size estimation in CRPS.

    PubMed

    Peltz, Elena; Seifert, Frank; Lanz, Stefan; Müller, Rüdiger; Maihöfner, Christian

    2011-10-01

    A triad of clinical symptoms, ie, autonomic, motor and sensory dysfunctions, characterizes complex regional pain syndromes (CRPS). Sensory dysfunction comprises sensory loss or spontaneous and stimulus-evoked pain. Furthermore, a disturbance in the body schema may occur. In the present study, patients with CRPS of the upper extremity and healthy controls estimated their hand sizes on the basis of expanded or compressed schematic drawings of hands. In patients with CRPS we found an impairment in accurate hand size estimation; patients estimated their own CRPS-affected hand to be larger than it actually was when measured objectively. Moreover, overestimation correlated significantly with disease duration, neglect score, and increase of two-point-discrimination-thresholds (TPDT) compared to the unaffected hand and to control subjects' estimations. In line with previous functional imaging studies in CRPS patients demonstrating changes in central somatotopic maps, we suggest an involvement of the central nervous system in this disruption of the body schema. Potential cortical areas may be the primary somatosensory and posterior parietal cortices, which have been proposed to play a critical role in integrating visuospatial information. CRPS patients perceive their affected hand to be bigger than it is. The magnitude of this overestimation correlates with disease duration, decreased tactile thresholds, and neglect-score. Suggesting a disrupted body schema as the source of this impairment, our findings corroborate the current assumption of a CNS involvement in CRPS. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  13. Physician and patient willingness to pay for electronic cardiovascular disease management.

    PubMed

    Deal, Ken; Keshavjee, Karim; Troyan, Sue; Kyba, Robert; Holbrook, Anne Marie

    2014-07-01

    Cardiovascular disease (CVD) is an important target for electronic decision support. We examined the potential sustainability of an electronic CVD management program using a discrete choice experiment (DCE). Our objective was to estimate physician and patient willingness-to-pay (WTP) for the current and enhanced programs. Focus groups, expert input and literature searches decided the attributes to be evaluated for the physician and patient DCEs, which were carried out using a Web-based program. Hierarchical Bayes analysis estimated preference coefficients for each respondent and latent class analysis segmented each sample. Simulations were used to estimate WTP for each of the attributes individually and for an enhanced vascular management system. 144 participants (70 physicians, 74 patients) completed the DCE. Overall, access speed to updated records and monthly payments for a nurse coordinator were the main determinants of physician choices. Two distinctly different segments of physicians were identified - one very sensitive to monthly subscription fee and speed of updating the tracker with new patient data and the other very sensitive to the monthly cost of the nurse coordinator and government billing incentives. Patient choices were most significantly influenced by the yearly subscription cost. The estimated physician WTP was slightly above the estimated threshold for sustainability while the patient WTP was below. Current willingness to pay for electronic cardiovascular disease management should encourage innovation to provide economies of scale in program development, delivery and maintenance to meet sustainability thresholds. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Model for Estimating the Threshold Mechanical Stability of Structural Cartilage Grafts Used in Rhinoplasty

    PubMed Central

    Zemek, Allison; Garg, Rohit; Wong, Brian J. F.

    2014-01-01

    Objectives/Hypothesis Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design Descriptive, focus group survey. Methods Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.36 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) was asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
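
    As the Results paragraph notes, the per-application thresholds were derived by logistic regression on the surgeons' acceptability judgments. The sketch below uses assumed illustrative ratings (not the study data, scikit-learn assumed) to show how such a threshold stiffness could be read off as the 50% point of a fitted logistic curve.

```python
# Hedged sketch: fit P(acceptable) versus phantom stiffness and take the
# stiffness at which the fitted probability crosses 0.5 as the threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

stiffness = np.arange(0.40, 0.90, 0.05)                  # phantom moduli in MPa
acceptable = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])    # one surgeon's judgments (illustrative)

model = LogisticRegression().fit(stiffness.reshape(-1, 1), acceptable)
b0, b1 = model.intercept_[0], model.coef_[0, 0]
threshold_mpa = -b0 / b1                                 # stiffness where P(acceptable) = 0.5
print(f"estimated threshold stiffness: {threshold_mpa:.2f} MPa")
```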

  15. The pathophysiology of glossal pain in patients with iron deficiency and anemia.

    PubMed

    Osaki, T; Ueta, E; Arisawa, K; Kitamura, Y; Matsugi, N

    1999-11-01

    It is well known that prolonged anemia causes atrophy of tongue papillae, glossal pain, and dysphagia, but it is uncertain whether iron (Fe) deficiency induces glossal pain without any objective manifestation. To resolve this matter, the relationship between Fe deficiency and glossal pain was examined. Eighteen patients with Fe deficiency and 7 anemic patients manifesting spontaneous irritation or pain of the tongue without any objective abnormalities participated in this study. To ascertain the cause of glossal pain and the oral pathophysiology in Fe deficiency and anemia, peripheral blood was examined and the glossal pain threshold and salivary flow rates (SFRs) were estimated along with Candida albicans cell culture tests. Compared with patients with Fe deficiency, those with anemia had a longer history of tongue pain. In patients with anemia, painful areas of the tongue were more numerous than in patients with Fe deficiency. Pain thresholds were decreased in the painful portions, and both nonstimulated and stimulated SFRs were suppressed. Each patient was treated with oral Fe; within 2 months, most patients exhibited increased serum ferritin level (P < 0.02, paired t-test), pain threshold (P < 0.05) and salivation (P < 0.05), and glossal pain subsided. Fe deficiency causes glossal pain, and the degree of glossal pain increases as Fe deficiency advances to anemia, manifesting hyposalivation and abnormalities of glossal papillae.

  16. The impact of cochlear fine structure on hearing thresholds and DPOAE levels

    NASA Astrophysics Data System (ADS)

    Lee, Jungmee; Long, Glenis; Talmadge, Carrick L.

    2004-05-01

    Although otoacoustic emissions (OAE) are used as clinical and research tools, the correlation between OAEs and behavioral estimates of hearing status is not large. In normal-hearing individuals, the level of OAEs can vary as much as 30 dB when the frequency is changed less than 5%. These pseudoperiodic variations of OAE level with frequency are known as fine structure. Hearing thresholds measured with high frequency resolution reveal a similar (up to 15 dB) fine structure. We examine the impact of OAE and threshold fine structures on the prediction of auditory thresholds from OAE levels. Distortion product otoacoustic emissions (DPOAEs) were measured with sweeping primary tones. Psychoacoustic detection thresholds were measured using pure tones, sweep tones, FM tones, and narrow-band noise. Sweep DPOAE measurements and narrow-band threshold estimates are less influenced by cochlear fine structure and should lead to a higher correlation between OAE levels and psychoacoustic thresholds. [Research supported by PSC CUNY, NIDCD, the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education, and The Ministry of Education in Korea.]

  17. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is weakened by a Contourlet denoising algorithm. Second, reliable object support region estimation is used to accelerate the algorithm convergence; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and the logarithm function are added to enhance algorithm stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.
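
    One step named above, estimating the object support region by optimal threshold segmentation, can be sketched as follows. Otsu's rule is used here only as a stand-in, since the abstract does not state which optimal-threshold method the authors adopt; scikit-image is assumed.

```python
# Minimal sketch of object support estimation for a NAS-RIF-style restoration:
# threshold the degraded frame and keep only sizeable components as the support.
import numpy as np
from skimage import filters, morphology

def estimate_support(blurred_image):
    """Binary support mask used to constrain the recursive inverse filtering."""
    t = filters.threshold_otsu(blurred_image)     # stand-in optimal threshold
    mask = blurred_image > t
    # Remove small spurious components so the support stays a compact region.
    return morphology.remove_small_objects(mask, min_size=32)

img = np.random.default_rng(0).random((128, 128))   # placeholder for a degraded frame
support = estimate_support(img)
print(support.sum(), "pixels inside the estimated support")
```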

  18. Utilizing Objective Drought Thresholds to Improve Drought Monitoring with the SPI

    NASA Astrophysics Data System (ADS)

    Leasor, Z. T.; Quiring, S. M.

    2017-12-01

    Drought is a prominent climatic hazard in the south-central United States. Droughts are frequently monitored using the severity categories determined by the U.S. Drought Monitor (USDM). This study uses the Standardized Precipitation Index (SPI) to conduct a drought frequency analysis across Texas, Oklahoma, and Kansas using PRISM precipitation data from 1900-2015. The SPI is shown to be spatiotemporally variant across the south-central United States. In particular, utilizing the default USDM severity thresholds may underestimate drought severity in arid regions. Objective drought thresholds were implemented by fitting a CDF to each location's SPI distribution. This approach results in a more homogeneous distribution of drought frequencies across each severity category. Results also indicate that it may be beneficial to develop objective drought thresholds for each season and SPI timescale. This research serves as a proof-of-concept and demonstrates how drought thresholds should be objectively developed so that they are appropriate for each climatic region.
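
    A minimal sketch of the objective-threshold idea follows: instead of the default USDM SPI cutoffs, location-specific thresholds are taken from the empirical distribution of each site's SPI so that every severity category occurs at its target frequency. The target percentiles and the synthetic SPI series are assumptions for illustration.

```python
# Hedged sketch: derive site-specific SPI thresholds from the empirical CDF so
# that each drought severity category (D0-D4) matches its target frequency.
import numpy as np

target_percentiles = {"D0": 30, "D1": 20, "D2": 10, "D3": 5, "D4": 2}

def objective_thresholds(spi_series):
    """Location-specific SPI thresholds matching the target category frequencies."""
    return {cat: np.percentile(spi_series, p) for cat, p in target_percentiles.items()}

rng = np.random.default_rng(1)
spi = rng.normal(size=1392)          # stand-in for a 116-year monthly SPI record
print(objective_thresholds(spi))
```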

  19. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    PubMed

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: Estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • Estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
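
    Because the published model reduces to a one-line formula, an individualized growth threshold can be computed directly, as sketched below with illustrative SVP and AP values.

```python
# Worked sketch of the published prediction model: the individualized growth
# threshold is the nodule's estimated 95th-percentile APE, computed from its
# surface voxel proportion (SVP) and attachment proportion (AP).
def estimated_p95_ape(svp, ap):
    """Estimated 95th-percentile absolute percentage error, in percent."""
    return 37.82 * svp + 48.60 * ap - 10.87

def nodule_grew(volume_change_percent, svp, ap):
    """Growth call using the individualized threshold instead of a fixed 25%."""
    return volume_change_percent > estimated_p95_ape(svp, ap)

# Illustrative values: a small nodule with little attachment.
print(estimated_p95_ape(svp=0.6, ap=0.1))                       # about 16.7%
print(nodule_grew(volume_change_percent=20.0, svp=0.6, ap=0.1)) # True
```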

  20. On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?

    PubMed

    Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro

    2016-01-01

    Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  1. Dental age estimation: the role of probability estimates at the 10 year threshold.

    PubMed

    Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham

    2014-08-01

    The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of probability thresholds for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that the subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age were compared. The dental age was compared with chronological age to determine whether it correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below the threshold on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to age estimation. Copyright © 2014. Published by Elsevier Ltd.
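
    The probability calculation described above (done in the study with Excel's NORMDIST) amounts to evaluating a normal CDF per tooth stage and averaging. A sketch under assumed, hypothetical stage statistics:

```python
# Hedged sketch: per-tooth reference mean and SD of age at the observed stage
# give P(age > 10 years); the per-tooth probabilities are then averaged.
from statistics import NormalDist

def prob_over_threshold(stage_stats, threshold=10.0):
    """Average probability that the subject's age exceeds the threshold."""
    probs = [1.0 - NormalDist(mu, sd).cdf(threshold) for mu, sd in stage_stats]
    return sum(probs) / len(probs)

observed_stages = [(9.4, 0.8), (10.3, 0.9), (9.9, 0.7)]  # hypothetical (mean age, SD) per tooth
print(f"P(age > 10 y) = {prob_over_threshold(observed_stages):.2f}")
```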

  2. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
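
    A minimal sketch of the RD-ITT estimate described above follows: within a bandwidth around the threshold, the outcome is regressed on a treatment indicator and the centered assignment variable on each side, and the indicator's coefficient is the effect at the threshold. The data, threshold rule and bandwidth are simulated for illustration only.

```python
# Hedged sketch of a local-linear regression discontinuity (RD-ITT) estimate.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(100, 180, 2000)                           # assignment variable, e.g. blood pressure
treated = (x >= 140).astype(float)                        # threshold rule assigns the exposure
y = 0.02 * x - 0.8 * treated + rng.normal(0, 1, x.size)   # simulated outcome; true ITT effect = -0.8

bandwidth = 10.0
keep = np.abs(x - 140) <= bandwidth
xc = x[keep] - 140                                        # center the running variable
X = np.column_stack([np.ones(keep.sum()), treated[keep], xc, treated[keep] * xc])
beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print(f"RD-ITT estimate at the threshold: {beta[1]:.2f}")
```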

  3. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    In order to overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; this function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of the chaotic signal well, and the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
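
    The level-by-level (hierarchical) thresholding idea can be sketched with a plain discrete wavelet transform and a soft threshold per detail level, assuming the PyWavelets package; this is a simplified stand-in, not the authors' synchrosqueezed transform or their SURE-based threshold function.

```python
# Simplified stand-in: per-level soft thresholding of DWT detail coefficients.
import numpy as np
import pywt

def hierarchical_soft_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    denoised = [coeffs[0]]                               # keep the approximation untouched
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745       # per-level robust noise estimate
        t = sigma * np.sqrt(2 * np.log(len(detail)))     # universal threshold for this level
        denoised.append(pywt.threshold(detail, t, mode="soft"))
    return pywt.waverec(denoised, wavelet)

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 12 * t ** 2) + 0.3 * np.random.default_rng(0).normal(size=t.size)
clean = hierarchical_soft_denoise(noisy)
```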

  4. Diagnostic accuracy of spot urinary protein and albumin to creatinine ratios for detection of significant proteinuria or adverse pregnancy outcome in patients with suspected pre-eclampsia: systematic review and meta-analysis

    PubMed Central

    Morris, R K; Riley, R D; Doug, M; Deeks, J J

    2012-01-01

    Objective To determine the diagnostic accuracy of two “spot urine” tests for significant proteinuria or adverse pregnancy outcome in pregnant women with suspected pre-eclampsia. Design Systematic review and meta-analysis. Data sources Searches of electronic databases 1980 to January 2011, reference list checking, hand searching of journals, and contact with experts. Inclusion criteria Diagnostic studies, in pregnant women with hypertension, that compared the urinary spot protein to creatinine ratio or albumin to creatinine ratio with urinary protein excretion over 24 hours or adverse pregnancy outcome. Study characteristics, design, and methodological and reporting quality were objectively assessed. Data extraction Study results relating to diagnostic accuracy were extracted and synthesised using multivariate random effects meta-analysis methods. Results Twenty studies, testing 2978 women (pregnancies), were included. Thirteen studies examining protein to creatinine ratio for the detection of significant proteinuria were included in the multivariate analysis. Threshold values for protein to creatinine ratio ranged between 0.13 and 0.5, with estimates of sensitivity ranging from 0.65 to 0.89 and estimates of specificity from 0.63 to 0.87; the area under the summary receiver operating characteristics curve was 0.69. On average, across all studies, the optimum threshold (that optimises sensitivity and specificity combined) seems to be between 0.30 and 0.35 inclusive. However, no threshold gave a summary estimate above 80% for both sensitivity and specificity, and considerable heterogeneity existed in diagnostic accuracy across studies at most thresholds. No studies looked at protein to creatinine ratio and adverse pregnancy outcome. For albumin to creatinine ratio, meta-analysis was not possible. Results from a single study suggested that the most predictive result, for significant proteinuria, was with the DCA 2000 quantitative analyser (>2 mg/mmol) with a summary sensitivity of 0.94 (95% confidence interval 0.86 to 0.98) and a specificity of 0.94 (0.87 to 0.98). In a single study of adverse pregnancy outcome, results for perinatal death were a sensitivity of 0.82 (0.48 to 0.98) and a specificity of 0.59 (0.51 to 0.67). Conclusion The maternal “spot urine” estimate of protein to creatinine ratio shows promising diagnostic value for significant proteinuria in suspected pre-eclampsia. The existing evidence is not, however, sufficient to determine how protein to creatinine ratio should be used in clinical practice, owing to the heterogeneity in test accuracy and prevalence across studies. Insufficient evidence is available on the use of albumin to creatinine ratio in this area. Insufficient evidence exists for either test to predict adverse pregnancy outcome. PMID:22777026

  5. Estimating phonation threshold pressure.

    PubMed

    Fisher, K V; Swank, P R

    1997-10-01

    Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.

  6. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
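
    The two ingredients named above, a per-pixel background estimate that is not biased by moving objects and a noise-adaptive double threshold, can be sketched roughly as follows. This is not the authors' algorithm: the running-mean update, the robust noise estimate and the hysteresis rule are all simplifying assumptions.

```python
# Rough sketch: running-mean background estimation plus a noise-adaptive
# double (hysteresis) threshold on the background-subtracted IR frame.
import numpy as np
from scipy import ndimage

class DimObjectDetector:
    def __init__(self, alpha=0.05, k_low=3.0, k_high=5.0):
        self.alpha, self.k_low, self.k_high = alpha, k_low, k_high
        self.background = None

    def step(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        residual = np.abs(frame - self.background)
        sigma = 1.4826 * np.median(residual) + 1e-6       # robust per-frame noise estimate
        strong = residual > self.k_high * sigma
        weak = residual > self.k_low * sigma
        # Hysteresis: keep weak pixels only if their blob touches a strong pixel.
        labels, _ = ndimage.label(weak)
        keep = np.unique(labels[strong])
        detection = np.isin(labels, keep[keep > 0])
        # Update the background only where nothing was detected, to avoid bias.
        quiet = ~detection
        self.background[quiet] += self.alpha * (frame[quiet] - self.background[quiet])
        return detection

detector = DimObjectDetector()
rng = np.random.default_rng(0)
for _ in range(5):                                        # synthetic stand-in frames
    mask = detector.step(rng.normal(100.0, 2.0, (64, 64)))
print(mask.sum(), "pixels flagged in the last frame")
```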

  7. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347

  8. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportions of agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.

  9. Discriminating the precipitation phase based on different temperature thresholds in the Songhua River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao

    2018-06-01

    Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change in high-latitude and mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal threshold ranged from 1.5 to 4.0 °C; 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C, respectively, together accounting for 90% of all stations. Compared with using a single suitable temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT or was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, and provide a reference for discriminating the precipitation phase in other regions.
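
    The AT method reduces to comparing daily temperature with a candidate threshold and scoring the resulting snowfall estimate. The sketch below uses simplified stand-ins for the paper's Ep and Re indices and synthetic data, so it illustrates the procedure rather than reproduces the study.

```python
# Hedged sketch of the air-temperature (AT) method: precipitation on days at or
# below a candidate threshold is counted as snowfall, and each candidate
# threshold is scored against observed snowfall.
import numpy as np

def score_threshold(temp_c, precip_mm, snow_obs_mm, threshold):
    predicted_snow = np.where(temp_c <= threshold, precip_mm, 0.0)
    observed_days = snow_obs_mm > 0
    predicted_days = predicted_snow > 0
    ep = np.mean(observed_days != predicted_days) * 100                      # % misclassified days
    re = (predicted_snow.sum() - snow_obs_mm.sum()) / snow_obs_mm.sum() * 100  # % volume error
    return ep, re

rng = np.random.default_rng(3)
temp = rng.normal(1.0, 4.0, 2000)                 # synthetic daily temperatures, °C
precip = rng.gamma(1.5, 2.0, 2000)                # synthetic daily precipitation, mm
snow = np.where(temp <= 2.5, precip, 0.0)         # synthetic "observed" snowfall for illustration
for t in np.arange(0.0, 5.5, 0.5):
    print(t, score_threshold(temp, precip, snow, t))
```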

  10. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
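
    The idea can be illustrated with a toy two-objective problem: one objective is minimized while a threshold-of-acceptability constraint bounds the other, and sweeping the threshold traces out Pareto-optimal points. The quadratic objectives and SciPy solver below are illustrative choices, not the report's formulation.

```python
# Illustrative sketch: sweep a threshold-of-acceptability constraint on f2
# while minimizing f1; each solve yields one Pareto-optimal point.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

pareto_points = []
for tau in np.linspace(0.05, 1.0, 8):             # acceptability threshold on f2
    res = minimize(f1, x0=[0.5, 0.5],
                   constraints=[{"type": "ineq", "fun": lambda x, tau=tau: tau - f2(x)}])
    if res.success:
        pareto_points.append((round(f1(res.x), 3), round(f2(res.x), 3)))
print(pareto_points)
```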

  11. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.

  12. Threshold selection for classification of MR brain images by clustering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldovanu, Simona (Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi); Obreja, Cristian

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of threshold selection. Our method does not use a well-known binarization method; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (the distance between classes) is established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (the area of white objects in the binary image) is determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each of these thresholds clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.

  13. Economic Evaluation of Dupilumab for the Treatment of Moderate-to-Severe Atopic Dermatitis in Adults.

    PubMed

    Kuznik, Andreas; Bégo-Le-Bagousse, Gaëlle; Eckert, Laurent; Gadkari, Abhijit; Simpson, Eric; Graham, Christopher N; Miles, LaStella; Mastey, Vera; Mahajan, Puneet; Sullivan, Sean D

    2017-12-01

    Dupilumab significantly improves signs and symptoms of atopic dermatitis (AD), including pruritus, symptoms of anxiety and depression, and health-related quality of life versus placebo in adults with moderate-to-severe AD. Since the cost-effectiveness of dupilumab has not been evaluated, the objective of this analysis was to estimate a value-based price range in which dupilumab would be considered cost-effective compared with supportive care (SC) for treatment of moderate-to-severe AD in an adult population. A health economic model was developed to evaluate from the US payer perspective the long-term costs and benefits of dupilumab treatment administered every other week (q2w). Dupilumab q2w was compared with SC; robustness of assumptions and results were tested using sensitivity and scenario analyses. Clinical data were derived from the dupilumab LIBERTY AD SOLO trials; healthcare use and cost data were from health insurance claims histories of adult patients with AD. The annual price of maintenance therapy with dupilumab to be considered cost-effective was estimated for decision thresholds of US$100,000 and $150,000 per quality-adjusted life-year (QALY) gained. In the base case, the annual maintenance price for dupilumab therapy to be considered cost-effective would be $28,770 at a $100,000 per QALY gained threshold, and $39,940 at a $150,000 threshold. Results were generally robust to parameter variations in one-way and probabilistic sensitivity analyses. Dupilumab q2w compared with SC is cost-effective for the treatment of moderate-to-severe AD in US adults at an annual price of maintenance therapy in the range of $29,000-$40,000 at the $100,000-$150,000 per QALY thresholds. Sanofi and Regeneron Pharmaceuticals, Inc.

  14. Marginally perceptible outcome feedback, motor learning and implicit processes.

    PubMed

    Masters, Rich S W; Maxwell, Jon P; Eves, Frank F

    2009-09-01

    Participants struck 500 golf balls to a concealed target. Outcome feedback was presented at the subjective or objective threshold of awareness of each participant or at a supraliminal threshold. Participants who received fully perceptible (supraliminal) feedback learned to strike the ball onto the target, as did participants who received feedback that was only marginally perceptible (subjective threshold). Participants who received feedback that was not perceptible (objective threshold) showed no learning. Upon transfer to a condition in which the target was unconcealed, performance increased in both the subjective and the objective threshold condition, but decreased in the supraliminal condition. In all three conditions, participants reported minimal declarative knowledge of their movements, suggesting that deliberate hypothesis testing about how best to move in order to perform the motor task successfully was disrupted by the impoverished disposition of the visual outcome feedback. It was concluded that sub-optimally perceptible visual feedback evokes implicit processes.

  15. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
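
    A minimal two-dimensional sketch of the idea follows: the image is reconstructed with attenuated detail coefficients to give a smooth, locally adaptive threshold surface, and pixels above that surface (plus a small margin) are labelled as objects. PyWavelets is assumed, and the wavelet, level, attenuation weight and margin are illustrative rather than the paper's tuned values.

```python
# Sketch: wavelet synthesis with attenuated detail coefficients yields a
# locally adaptive threshold surface for segmentation on an uneven background.
import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="haar", level=4, weight=0.1):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    attenuated = [coeffs[0]] + [tuple(weight * d for d in details) for details in coeffs[1:]]
    surface = pywt.waverec2(attenuated, wavelet)
    return surface[: image.shape[0], : image.shape[1]]   # trim possible padding

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
img = 100 + 0.2 * xx + rng.normal(0, 5, (128, 128))      # slowly varying background + noise
img[60:64, 60:64] += 40                                  # small bright inclusion
segmented = img > wavelet_threshold_surface(img) + 15    # margin above the local threshold
print(segmented.sum(), "pixels flagged as object")
```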

  16. Global gray-level thresholding based on object size.

    PubMed

    Ranefall, Petter; Wählby, Carolina

    2016-04-01

    In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
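
    The selection criterion can be illustrated with a plain threshold sweep (the published method achieves the same in quasi-linear time with a component tree): choose the global threshold that yields the most connected components whose areas fall inside the expected size interval. The data and size interval below are synthetic.

```python
# Simplified stand-in for size-based global thresholding: sweep thresholds and
# keep the one producing the most components within the expected size range.
import numpy as np
from scipy import ndimage

def size_based_threshold(image, size_range=(30, 300), candidates=None):
    if candidates is None:
        candidates = np.linspace(image.min(), image.max(), 64)[1:-1]
    best_t, best_count = candidates[0], -1
    for t in candidates:
        labels, n = ndimage.label(image > t)
        if n == 0:
            continue
        areas = np.bincount(labels.ravel())[1:]
        count = np.sum((areas >= size_range[0]) & (areas <= size_range[1]))
        if count > best_count:
            best_t, best_count = t, count
    return best_t

rng = np.random.default_rng(7)
img = rng.normal(0, 1, (256, 256))
for cy, cx in rng.integers(20, 236, size=(20, 2)):        # 20 synthetic "nuclei", 8 x 8 pixels
    img[cy - 4:cy + 4, cx - 4:cx + 4] += 5
print("chosen threshold:", round(size_based_threshold(img, (40, 100)), 2))
```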

  17. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431

  18. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.

  19. Cost-effectiveness of integrated collaborative care for comorbid major depression in patients with cancer☆

    PubMed Central

    Duarte, A.; Walker, J.; Walker, S.; Richardson, G.; Holm Hansen, C.; Martin, P.; Murray, G.; Sculpher, M.; Sharpe, M.

    2015-01-01

    Objectives Comorbid major depression is associated with reduced quality of life and greater use of healthcare resources. A recent randomised trial (SMaRT, Symptom Management Research Trials, Oncology-2) found that a collaborative care treatment programme (Depression Care for People with Cancer, DCPC) was highly effective in treating depression in patients with cancer. This study aims to estimate the cost-effectiveness of DCPC compared with usual care from a health service perspective. Methods Costs were estimated using UK national unit cost estimates and health outcomes measured using quality-adjusted life-years (QALYs). Incremental cost-effectiveness of DCPC compared with usual care was calculated and scenario analyses performed to test alternative assumptions on costs and missing data. Uncertainty was characterised using cost-effectiveness acceptability curves. The probability of DCPC being cost-effective was determined using the UK National Institute for Health and Care Excellence's (NICE) cost-effectiveness threshold range of £20,000 to £30,000 per QALY gained. Results DCPC cost on average £631 more than usual care per patient, and resulted in a mean gain of 0.066 QALYs, yielding an incremental cost-effectiveness ratio of £9549 per QALY. The probability of DCPC being cost-effective was 0.9 or greater at cost-effectiveness thresholds above £20,000 per QALY for the base case and scenario analyses. Conclusions Compared with usual care, DCPC is likely to be cost-effective at the current thresholds used by NICE. This study adds to the weight of evidence that collaborative care treatment models are cost-effective for depression, and provides new evidence regarding their use in specialist medical settings. PMID:26652589

  20. Current and Projected Burden of Disease From High Ambient Temperature in Korea.

    PubMed

    Chung, Soo Eun; Cheong, Hae-Kwan; Park, Jae-Hyun; Kim, Jong-Hun; Han, Hyunjin

    2017-10-01

    The objective of the present study was to estimate the current and projected burden of disease from high ambient temperature using population-based data sources of nationwide mortality and morbidity in Korea. Disability-adjusted life years (DALY) were estimated using noninjury-related deaths and cerebrovascular and cardiovascular diseases from recently released nationwide health and mortality databases. Years of life lost and years lost due to disability were measured based on the point prevalence and number of deaths during the study period. Future DALY attributable to heat waves were estimated from projected populations and from temperature predictions for the years 2030 and 2050 under Representative Concentration Pathways (RCP) 4.5 and 8.5, using summertime temperatures above the threshold. Relative risks (RR) of total mortality and of cardiovascular disease were 1.02 (95% CI, 1.01, 1.02) and 1.08 (95% CI, 1.06, 1.09) for each 1°C increase in temperature above the threshold, respectively. The morbidity of heat-related disease was RR 1.67 (95% CI, 1.64, 1.68) for each 1°C increase in temperature above the threshold. DALY for all-cause death were 0.49 DALY/1000 in 2011, 0.71 (0.71) DALY/1000 in 2030 and 0.77 (1.72) DALY/1000 in 2050 based on RCP 4.5 (RCP 8.5). DALY for cardio- and cerebrovascular diseases were 1.24 DALY/1000 in 2011, 1.63 (1.82) DALY/1000 in 2030, and 1.76 (3.66) DALY/1000 in 2050 based on RCP 4.5 (RCP 8.5). Future excess mortality due to high ambient temperature is expected to be profound in Korea. Efforts to mitigate climate change can provide substantial health benefits via reducing heat-related mortality.

  1. On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy

    USGS Publications Warehouse

    Li, Junran; Flagg, Cody B.; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne

    2015-01-01

    Current approaches to estimating the threshold friction velocity (TFV) of soil particle movement, including both experimental and empirical methods, suffer from various disadvantages, and they are particularly ineffective for estimating TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust); however, no studies have attempted to relate soil TFV directly to spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near infrared (VIS–NIR, 350–2500 nm) spectral region, and to identify the best range of wavelengths, or combinations of wavelengths, to predict TFV. The threshold friction velocities of 31 soils, along with their reflectance spectra and texture, were measured in the Mojave Desert, California, and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400–700 nm) and the short-wavelength infrared (SWIR) area (1100–2500 nm). A partial least squares regression (PLSR) analysis confirmed the significant bands identified by the correlation analysis. The PLSR further identified a strong relationship between the first-difference transformation and TFV in several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectral range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or for mapping TFV using airborne/satellite sensors.
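
    The modelling step, a partial least squares regression of TFV on first-difference reflectance spectra, can be sketched as below with simulated spectra (scikit-learn assumed); the study instead uses 350-2500 nm measurements from the 31 field soils.

```python
# Hedged sketch: PLSR of threshold friction velocity on first-difference spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_soils, n_bands = 31, 2151                       # 350-2500 nm at 1 nm resolution
spectra = np.cumsum(rng.normal(0, 0.001, (n_soils, n_bands)), axis=1) + 0.3
tfv = 0.25 + 50 * spectra[:, 1550] - 40 * spectra[:, 1050] + rng.normal(0, 0.02, n_soils)

X = np.diff(spectra, axis=1)                      # first-difference transformation
pls = PLSRegression(n_components=5).fit(X, tfv)
print("in-sample R^2:", round(pls.score(X, tfv), 2))
# Approximate wavelengths (nm) of the most influential difference bands.
print("influential bands:", np.argsort(np.abs(pls.coef_.ravel()))[-5:] + 350)
```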

  2. Using Low Levels of Stochastic Vestibular Stimulation to Improve Balance Function

    PubMed Central

    Goel, Rahul; Kofman, Igor; Jeevarajan, Jerome; De Dios, Yiri; Cohen, Helen S.; Bloomberg, Jacob J.; Mulavara, Ajitkumar P.

    2015-01-01

    Low-level stochastic vestibular stimulation (SVS) has been associated with improved postural responses in the medio-lateral (ML) direction, but its effect in improving balance function in both the ML and anterior-posterior (AP) directions has not been studied. In this series of studies, the efficacy of applying low amplitude SVS in 0–30 Hz range between the mastoids in the ML direction on improving cross-planar balance function was investigated. Forty-five (45) subjects stood on a compliant surface with their eyes closed and were instructed to maintain a stable upright stance. Measures of stability of the head, trunk, and whole body were quantified in ML, AP and combined APML directions. Results show that binaural bipolar SVS given in the ML direction significantly improved balance performance with the peak of optimal stimulus amplitude predominantly in the range of 100–500 μA for all the three directions, exhibiting stochastic resonance (SR) phenomenon. Objective perceptual and body motion thresholds as estimates of internal noise while subjects sat on a chair with their eyes closed and were given 1 Hz bipolar binaural sinusoidal electrical stimuli were also measured. In general, there was no significant difference between estimates of perceptual and body motion thresholds. The average optimal SVS amplitude that improved balance performance (peak SVS amplitude normalized to perceptual threshold) was estimated to be 46% in ML, 53% in AP, and 50% in APML directions. A miniature patch-type SVS device may be useful to improve balance function in people with disabilities due to aging, Parkinson’s disease or in astronauts returning from long-duration space flight. PMID:26295807

  3. Antihypertensive drugs: a perspective on pharmaceutical price erosion and its impact on cost-effectiveness.

    PubMed

    Refoios Camejo, Rodrigo; McGrath, Clare; Herings, Ron; Meerding, Willem-Jan; Rutten, Frans

    2012-01-01

    When comparators' prices decrease due to market competition and loss of exclusivity, the incremental clinical effectiveness required for a new technology to be cost-effective is expected to increase, and/or the minimum price at which it will be funded will tend to decrease. This may be, however, either unattainable physiologically or financially unviable for drug development. The objective of this study is to provide an empirical basis for this discussion by estimating the potential for price decreases to affect the cost-effectiveness of new therapies in hypertension. Cost-effectiveness at launch was estimated for all antihypertensive drugs launched between 1998 and 2008 in the United Kingdom using hypothetical degrees of incremental clinical effectiveness within the methodologic framework applied by the UK National Institute for Health and Clinical Excellence. Incremental cost-effectiveness ratios were computed and compared with funding thresholds. In addition, the levels of incremental clinical effectiveness required to achieve specific cost-effectiveness thresholds at given prices were estimated. Significant price decreases were observed for existing drugs. This was shown to markedly affect the cost-effectiveness of technologies entering the market. The required incremental clinical effectiveness was in many cases greater than physiologically possible so, as a consequence, a number of products might not be available today if current methods of economic appraisal had been applied. We conclude that the definition of cost-effectiveness thresholds is fundamental in promoting efficient innovation. Our findings demonstrate that comparator price attrition has the potential to put pressure on the pharmaceutical research model and presents a challenge to new therapies being accepted for funding. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  4. Influence of Spatial and Chromatic Noise on Luminance Discrimination.

    PubMed

    Miquilini, Leticia; Walker, Natalie A; Odigie, Erika A; Guimarães, Diego Leite; Salomão, Railson Cruz; Lacerda, Eliza Maria Costa Brito; Cortes, Maria Izabel Tentes; de Lima Silveira, Luiz Carlos; Fitzgerald, Malinda E C; Ventura, Dora Fix; Souza, Givago Silva

    2017-12-05

    Pseudoisochromatic figures are designed to base discrimination of a chromatic target from a background solely on chromatic differences. This is accomplished by the introduction of luminance and spatial noise, thereby eliminating these two dimensions as cues. The inverse rationale could also be applied to luminance discrimination, if spatial and chromatic noise are used to mask those cues. In the current study, luminance contrast thresholds were estimated using a novel stimulus that uses chromatic and spatial noise to mask these cues in a luminance discrimination task. This was accomplished by presenting stimuli composed of a mosaic of randomly coloured circles. A Landolt-C target differed from the background only in luminance. The luminance contrast thresholds were estimated for different chromatic noise saturation conditions and compared to luminance contrast thresholds estimated using the same target in a non-mosaic stimulus. Moreover, the influence of the chromatic content of the noise on the luminance contrast threshold was also investigated. The luminance contrast threshold depended on the strength of the chromatic noise. It was 10-fold higher than thresholds estimated from the non-mosaic stimulus, but it was independent of the colour-space location in which the noise was modulated. The present study introduces a new method to investigate luminance vision intended for both basic science and clinical applications.

  5. Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.

    PubMed

    Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew

    2018-01-01

    Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  6. An objective rationale for the choice of regularisation parameter with application to global multiple-frequency S-wave tomography

    NASA Astrophysics Data System (ADS)

    Zaroli, C.; Sambridge, M.; Lévêque, J.-J.; Debayle, E.; Nolet, G.

    2013-06-01

    In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global Multiple-Frequency Tomography (MFT), using a data set of 287 078 S-wave delay-times measured in five frequency bands (10, 15, 22, 34, 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model ℓ∞-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models (correlation superior to 98% up to spherical harmonic degree 80). The obtained tomographic model is displayed in mid lower-mantle (660-1910 km depth), and is shown to be compatible with three other recent global shear-velocity models. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of the Earth's mantle.
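
    The damping trade-off described above can be illustrated with a generic damped least-squares problem: as the damping is lowered, the data misfit shrinks while the model norm, and especially the model ℓ∞-norm, grows sharply once noise starts to dominate a model component. The sketch below is a synthetic toy, not the authors' multiple-frequency tomography code; the matrix, noise level, and damping values are arbitrary assumptions.

      # Synthetic sketch of the damping trade-off in a linear ill-posed inverse
      # problem: track data misfit, model norm and model L-infinity norm as the
      # damping is varied. Not the authors' tomographic inversion.
      import numpy as np

      rng = np.random.default_rng(0)
      n_data, n_model = 200, 100
      G = rng.normal(size=(n_data, n_model)) @ np.diag(1.0 / (1 + np.arange(n_model)))
      m_true = np.zeros(n_model); m_true[:10] = 1.0          # simple "true" model
      d = G @ m_true + 0.05 * rng.normal(size=n_data)        # noisy data

      for damping in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):
          # Damped least squares: minimise ||G m - d||^2 + damping^2 ||m||^2
          A = G.T @ G + damping**2 * np.eye(n_model)
          m = np.linalg.solve(A, G.T @ d)
          misfit = np.linalg.norm(G @ m - d)
          # A sharp jump in ||m||_inf at low damping mimics noise starting to
          # pollute at least one model component.
          print(f"damping={damping:7.4f}  misfit={misfit:6.3f}  "
                f"||m||_2={np.linalg.norm(m):8.2f}  ||m||_inf={np.abs(m).max():8.2f}")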

  7. An objective rationale for the choice of regularisation parameter with application to global multiple-frequency S-wave tomography

    NASA Astrophysics Data System (ADS)

    Zaroli, C.; Sambridge, M.; Lévêque, J.-J.; Debayle, E.; Nolet, G.

    2013-10-01

    In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global multiple-frequency tomography (MFT), using a data set of 287 078 S-wave delay times measured in five frequency bands (10, 15, 22, 34, and 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model ℓ∞-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models (correlation superior to 98% up to spherical harmonic degree 80). The obtained tomographic model is displayed in the mid lower-mantle (660-1910 km depth), and is shown to be compatible with three other recent global shear-velocity models. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of Earth's mantle.

  8. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    PubMed

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.

  9. Performance breakdown in optimal stimulus decoding

    NASA Astrophysics Data System (ADS)

    Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan

    2015-06-01

    Objective. One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. Approach. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. Main results. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. Significance. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
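
    The comparison at the heart of this record, empirical decoding error versus the Cramér-Rao lower bound as population size grows, can be sketched with a toy population of Poisson neurons and a grid-search maximum-likelihood decoder. The tuning-curve model and all parameter values below are assumptions chosen for illustration, not the paper's stimulus-response models.

      # Compare empirical maximum-likelihood decoding error with the Cramer-Rao
      # lower bound (CRLB) as the population size grows. Toy model: Poisson
      # neurons with Gaussian tuning curves.
      import numpy as np

      rng = np.random.default_rng(1)
      theta_true = 0.3                      # stimulus to decode
      grid = np.linspace(-1, 1, 801)        # decoding grid over stimulus space

      def tuning(theta, centers, fmax=20.0, width=0.3, fbase=1.0):
          return fbase + fmax * np.exp(-0.5 * ((theta - centers) / width) ** 2)

      for n_neurons in (2, 4, 8, 16, 32, 64):
          centers = np.linspace(-1, 1, n_neurons)
          # Fisher information for Poisson rates: I(theta) = sum_i f_i'(theta)^2 / f_i(theta)
          eps = 1e-4
          fp = (tuning(theta_true + eps, centers) - tuning(theta_true - eps, centers)) / (2 * eps)
          crlb = 1.0 / np.sum(fp ** 2 / tuning(theta_true, centers))

          # Empirical MSE of the ML decoder (grid search over the log-likelihood)
          errs = []
          for _ in range(500):
              counts = rng.poisson(tuning(theta_true, centers))
              loglik = [np.sum(counts * np.log(tuning(t, centers)) - tuning(t, centers)) for t in grid]
              errs.append(grid[int(np.argmax(loglik))] - theta_true)
          print(f"N={n_neurons:3d}  empirical MSE={np.mean(np.square(errs)):.5f}  CRLB={crlb:.5f}")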

  10. Threshold analysis of reimbursing physicians for the application of fluoride varnish in young children.

    PubMed

    Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L

    2013-01-01

    Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration to better inform decisions - specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. Threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold to attain cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, which was below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
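
    A worked version of the comparison reported above, using the utilization rates and cost-saving thresholds quoted in the abstract (the thresholds were generated in the source model as 1.5 times the rate of restorative services given cavitation); the code is only a restatement of that arithmetic.

      # Compare observed restorative-service utilization with the cost-saving
      # thresholds quoted in the abstract.
      records = [                      # (age group, observed utilization, cost-saving threshold)
          ("1-2 year olds", 0.026, 0.058),
          ("3-5 year olds", 0.233, 0.145),
      ]

      for age_group, utilization, threshold in records:
          verdict = "exceeds" if utilization > threshold else "is below"
          print(f"{age_group}: restorative utilization {utilization:.1%} {verdict} "
                f"the {threshold:.1%} threshold for cost-savings")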

  11. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preferences-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology which was implemented in the R package ROCt ( www.divat.fr ). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude to the uselessness of the Kidney Transplant Failure Score to adapt the frequency of clinical visits. Applying such a patient-centered methodology may improve future transfer of novel prognostic scoring systems or markers in clinical practice.

  12. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    2017-03-01

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
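
    The epoch-by-epoch accumulation of evidence can be sketched for the simplest case of a constant-flux object with Gaussian measurement noise and a Gaussian prior on the flux: the marginal likelihoods of the object-present and no-object hypotheses then have closed forms. This toy omits the paper's direction-matching factor, and the noise and prior scales are assumptions.

      # Toy catalog-fusion sketch: marginal-likelihood comparison of "one
      # constant-flux object" vs "noise only" for fluxes measured at several epochs.
      import numpy as np
      from scipy.stats import multivariate_normal

      def log_bayes_factor(fluxes, sigma_noise, sigma_prior):
          n = len(fluxes)
          cov0 = sigma_noise**2 * np.eye(n)                  # H0: noise peaks only
          cov1 = cov0 + sigma_prior**2 * np.ones((n, n))     # H1: shared true flux (marginalised)
          logz0 = multivariate_normal(mean=np.zeros(n), cov=cov0).logpdf(fluxes)
          logz1 = multivariate_normal(mean=np.zeros(n), cov=cov1).logpdf(fluxes)
          return logz1 - logz0

      rng = np.random.default_rng(2)
      faint = 0.8 + rng.normal(0, 1.0, size=10)   # faint source, below single-epoch threshold
      noise = rng.normal(0, 1.0, size=10)         # spurious candidate
      print("log BF (real object):", log_bayes_factor(faint, 1.0, 2.0))
      print("log BF (noise only): ", log_bayes_factor(noise, 1.0, 2.0))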

  13. Comparison of in-air evoked potential and underwater behavioral hearing thresholds in four bottlenose dolphins (Tursiops truncatus).

    PubMed

    Finneran, James J; Houser, Dorian S

    2006-05-01

    Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to + 18 dB.

  14. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
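
    The non-compartmental exposure measures named above (AUC, Cmax, and time above a threshold, TAT) can be computed directly from a sampled concentration-time profile; the profile and the toxicity threshold below are invented for illustration, and the TAT rule used is deliberately crude.

      # Non-compartmental exposure summaries from a sampled concentration-time profile.
      import numpy as np

      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])   # h
      c = np.array([0.0, 4.2, 6.1, 5.0, 3.2, 1.5, 0.7, 0.1])     # mg/L

      auc = np.trapz(c, t)                      # linear trapezoidal AUC(0-24 h)
      cmax = c.max()
      tmax = t[np.argmax(c)]

      threshold = 1.0                           # mg/L, predefined threshold
      above = c >= threshold
      # crude TAT: sum of interval widths whose two endpoints are both above the threshold
      tat = np.sum(np.diff(t)[above[:-1] & above[1:]])

      print(f"AUC = {auc:.1f} mg*h/L, Cmax = {cmax:.1f} mg/L at {tmax:.1f} h, TAT ~ {tat:.1f} h")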

  15. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
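
    The trade-off described above can be mimicked in a small simulation comparing the mean-squared error of the raw estimates (MLE), a hard-threshold estimator that keeps only "significant" effects, and a simple linear shrinkage estimator. These are generic stand-ins, not the paper's local-FDR estimator, and the simulation settings are arbitrary.

      # Toy comparison of MLE, hard-threshold and shrinkage estimators of per-gene effects.
      import numpy as np

      rng = np.random.default_rng(3)
      n_genes, sigma = 5000, 1.0
      true = np.where(rng.random(n_genes) < 0.05, rng.normal(0, 2, n_genes), 0.0)  # few genes DE
      obs = true + rng.normal(0, sigma, n_genes)                                   # observed log-ratios

      mle = obs
      hard = np.where(np.abs(obs) > 1.96 * sigma, obs, 0.0)     # keep only "significant" genes
      # James-Stein-type linear shrinkage toward zero
      shrink_factor = max(0.0, 1 - (n_genes - 2) * sigma**2 / np.sum(obs**2))
      james_stein = shrink_factor * obs

      for name, est in [("MLE", mle), ("hard threshold", hard), ("shrinkage", james_stein)]:
          print(f"{name:15s} MSE = {np.mean((est - true) ** 2):.4f}")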

  16. Statistical approaches for the definition of landslide rainfall thresholds and their uncertainty using rain gauge and satellite data

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-05-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, we analyse the reliability of rainfall thresholds based on remotely sensed and rain gauge rainfall data for the prediction of landslide occurrence. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
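
    Two of the named approaches, least squares in log space and nonlinear least squares, can be sketched for a power-law threshold curve of the form E = αD^γ; the synthetic (D, E) pairs below are not the Umbria catalogue, and the final shift of the fitted curve down to a chosen non-exceedance level is only indicated in a comment.

      # Fit E = alpha * D^gamma to (duration, cumulated rainfall) pairs by LS in
      # log space and by nonlinear least squares. Synthetic data only.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)
      D = rng.uniform(1, 200, 300)                            # rainfall duration (h)
      E = 7.0 * D**0.4 * np.exp(rng.normal(0, 0.3, D.size))   # cumulated event rainfall (mm)

      # Least squares on log E = log alpha + gamma * log D
      gamma_ls, logalpha_ls = np.polyfit(np.log(D), np.log(E), 1)

      # Nonlinear least squares directly on E = alpha * D^gamma
      (alpha_nls, gamma_nls), _ = curve_fit(lambda d, a, g: a * d**g, D, E, p0=(1.0, 0.5))

      print(f"LS  (log space): alpha={np.exp(logalpha_ls):.2f}, gamma={gamma_ls:.2f}")
      print(f"NLS             : alpha={alpha_nls:.2f}, gamma={gamma_nls:.2f}")
      # A threshold at a chosen non-exceedance level can then be obtained by
      # shifting the fitted curve down (e.g. to the 5th percentile of residuals).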

  17. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.

  18. Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold

    NASA Technical Reports Server (NTRS)

    Srivastava, M. K.; Temkin, A.

    1991-01-01

    The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered to a form that contrasts with that of the previous angle-independent model. A second energy regime is also identified, wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper, the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) of about 0.0003 and E(b) of about 0.25 eV.

  19. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be modelled as a mixture of Gaussians. The separation of ground points from non-ground points can then be recast as the separation of the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to perform the separation: EM is used to compute maximum-likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed, and after several iterations each point is labelled with the component of larger likelihood. Furthermore, intensity information is also utilized to refine the filtering results obtained with the EM method. The proposed algorithm was tested on two different real-world datasets. Experimental results showed that the proposed method can filter non-ground points effectively. For quantitative evaluation, the dataset provided by the ISPRS was adopted; the proposed algorithm achieves a 4.48% total error, which is lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
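
    A minimal sketch of the underlying idea, fitting a two-component Gaussian mixture by EM and labelling each point with the more probable component, using scikit-learn's EM implementation on synthetic height residuals; this is not the authors' algorithm, which also exploits intensity information.

      # Two-component Gaussian mixture fitted by EM; each point is labelled with
      # the component of higher posterior probability. Synthetic "height residuals".
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)
      ground = rng.normal(0.0, 0.15, 800)        # residual heights of ground points (m)
      objects = rng.normal(2.5, 1.0, 200)        # buildings / vegetation
      residuals = np.concatenate([ground, objects]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
      labels = gmm.predict(residuals)
      ground_comp = int(np.argmin(gmm.means_.ravel()))    # lower-mean component = ground
      n_ground = int(np.sum(labels == ground_comp))
      print(f"points labelled ground: {n_ground} / {residuals.size}")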

  20. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
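
    A compact sketch of the POET construction: remove the leading principal components from the sample covariance and soft-threshold the off-diagonal entries of the residual covariance. The number of factors and the threshold value below are arbitrary, and the code is an illustration rather than the authors' estimator.

      # POET-style covariance estimate: low-rank part from principal components
      # plus a soft-thresholded residual covariance. Illustrative only.
      import numpy as np

      def poet(X, n_factors, tau):
          """X: n_obs x p data matrix; n_factors: principal components kept;
          tau: soft-threshold applied to off-diagonal residual covariances."""
          X = X - X.mean(axis=0)
          S = np.cov(X, rowvar=False)
          eigval, eigvec = np.linalg.eigh(S)                 # ascending order
          idx = np.argsort(eigval)[::-1][:n_factors]
          low_rank = (eigvec[:, idx] * eigval[idx]) @ eigvec[:, idx].T
          R = S - low_rank                                   # residual covariance
          off = R - np.diag(np.diag(R))
          R_thr = np.sign(off) * np.maximum(np.abs(off) - tau, 0.0) + np.diag(np.diag(R))
          return low_rank + R_thr

      rng = np.random.default_rng(6)
      n, p, k = 500, 50, 3
      B = rng.normal(size=(p, k))                            # factor loadings
      X = rng.normal(size=(n, k)) @ B.T + rng.normal(scale=0.5, size=(n, p))
      Sigma_hat = poet(X, n_factors=3, tau=0.05)
      print("estimated covariance shape:", Sigma_hat.shape)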

  1. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.

  2. Missing the Mark? A Two Time Point Cohort Study Estimating Intestinal Parasite Prevalence in Informal Settlements in Lima, Peru.

    PubMed

    Cooper, Michael Townsend; Searing, Rapha A; Thompson, David M; Bard, David; Carabin, Hélène; Gonzales, Carlos; Zavala, Carmen; Woodson, Kyle; Naifeh, Monique

    2017-01-01

    Objectives: The World Health Organization's (WHO) recommendations list Peru as potentially needing prevention of soil-transmitted helminthiasis (STH). Prevalence of STH varies regionally and remains understudied in the newest informal settlements of the capital city, Lima. The purpose of this study was to evaluate the need for Mass Drug Administration (MDA) of antiparasitic drugs in the newest informal settlements of Lima. The aim of this study was to estimate the season-specific prevalence of STH to determine if these prevalence estimates met the WHO threshold for MDA in 3 informal settlements. Methods : A 2 time point cohort study was conducted among a sample of 140 children aged 1 to 10 years living in 3 purposively sampled informal settlements of Lima, Peru. Children were asked to provide 2 stool samples that were analyzed with the spontaneous sedimentation in tube technique. The season-specific prevalence proportions of MDA-targeted STH were estimated using a hidden (latent) Markov modeling approach to adjust for repeated measurements over the 2 seasons and the imperfect validity of the screening tests. Results : The prevalence of MDA targeted STH was low at 2.2% (95% confidence interval = 0.3% to 6%) and 3.8% (95% confidence interval = 0.7% to 9.3%) among children sampled in the summer and winter months, respectively, when using the most conservative estimate of test sensitivity. These estimates were below the WHO threshold for MDA (20%). Conclusions : Empiric treatment for STH by organizations active in the newest informal settlements is not supported by the data and could contribute to unnecessary medication exposures and poor allocation of resources.

  3. Missing the Mark? A Two Time Point Cohort Study Estimating Intestinal Parasite Prevalence in Informal Settlements in Lima, Peru

    PubMed Central

    Cooper, Michael Townsend; Searing, Rapha A.; Thompson, David M.; Bard, David; Carabin, Hélène; Gonzales, Carlos; Zavala, Carmen; Woodson, Kyle; Naifeh, Monique

    2017-01-01

    Objectives: The World Health Organization’s (WHO) recommendations list Peru as potentially needing prevention of soil-transmitted helminthiasis (STH). Prevalence of STH varies regionally and remains understudied in the newest informal settlements of the capital city, Lima. The purpose of this study was to evaluate the need for Mass Drug Administration (MDA) of antiparasitic drugs in the newest informal settlements of Lima. The aim of this study was to estimate the season-specific prevalence of STH to determine if these prevalence estimates met the WHO threshold for MDA in 3 informal settlements. Methods: A 2 time point cohort study was conducted among a sample of 140 children aged 1 to 10 years living in 3 purposively sampled informal settlements of Lima, Peru. Children were asked to provide 2 stool samples that were analyzed with the spontaneous sedimentation in tube technique. The season-specific prevalence proportions of MDA-targeted STH were estimated using a hidden (latent) Markov modeling approach to adjust for repeated measurements over the 2 seasons and the imperfect validity of the screening tests. Results: The prevalence of MDA targeted STH was low at 2.2% (95% confidence interval = 0.3% to 6%) and 3.8% (95% confidence interval = 0.7% to 9.3%) among children sampled in the summer and winter months, respectively, when using the most conservative estimate of test sensitivity. These estimates were below the WHO threshold for MDA (20%). Conclusions: Empiric treatment for STH by organizations active in the newest informal settlements is not supported by the data and could contribute to unnecessary medication exposures and poor allocation of resources. PMID:29152541

  4. Visual evoked potentials through night vision goggles.

    PubMed

    Rabin, J

    1994-04-01

    Night vision goggles (NVG's) have widespread use in military and civilian environments. NVG's amplify ambient illumination making performance possible when there is insufficient illumination for normal vision. While visual performance through NVG's is commonly assessed by measuring threshold functions such as visual acuity, few attempts have been made to assess vision through NVG's at suprathreshold levels of stimulation. Such information would be useful to better understand vision through NVG's across a range of stimulus conditions. In this study visual evoked potentials (VEP's) were used to evaluate vision through NVG's across a range of stimulus contrasts. The amplitude and latency of the VEP varied linearly with log contrast. A comparison of VEP's recorded with and without NVG's was used to estimate contrast attenuation through the device. VEP's offer an objective, electrophysiological tool to assess visual performance through NVG's at both threshold and suprathreshold levels of visual stimulation.

  5. Prevalence of Workers with Shifts in Hearing by Industry: A Comparison of OSHA and NIOSH Hearing Shift Criteria

    PubMed Central

    Masterson, Elizabeth A.; Sweeney, Marie Haring; Deddens, James A.; Themann, Christa L.; Wall, David K.

    2015-01-01

    Objective The purpose of this study was to compare the prevalence of workers with National Institute for Occupational Safety and Health significant threshold shifts (NSTS), Occupational Safety and Health Administration standard threshold shifts (OSTS), and with OSTS with age correction (OSTS-A), by industry using North American Industry Classification System codes. Methods 2001-2010 worker audiograms were examined. Prevalence and adjusted prevalence ratios for NSTS were estimated by industry. NSTS, OSTS and OSTS-A prevalences were compared by industry. Results 20% of workers had an NSTS, 14% had an OSTS and 6% had an OSTS-A. For most industries, the OSTS and OSTS-A criteria identified 28-36% and 66-74% fewer workers than the NSTS criteria, respectively. Conclusions Use of NSTS criteria allowing for earlier detection of shifts in hearing is recommended for improved prevention of occupational hearing loss. PMID:24662953

  6. Evidence-based Diagnostics: Adult Septic Arthritis

    PubMed Central

    Carpenter, Christopher R.; Schuur, Jeremiah D.; Everett, Worth W.; Pines, Jesse M.

    2011-01-01

    Background Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. Objectives The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Methods Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. Results The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10⁹–25 × 10⁹/L was 0.33; for 25 × 10⁹–50 × 10⁹/L, 1.06; for 50 × 10⁹–100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (−LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Conclusions Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis. Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. PMID:21843213
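
    The decision logic implied by the quoted thresholds can be restated as a post-test probability update: pre-test odds times the likelihood ratio gives post-test odds, which are then compared with the 5% test threshold and the 39% treatment threshold. The pre-test probability and likelihood ratios below are taken from the abstract; the three-way rule is a simplified illustration, not the authors' interactive spreadsheet.

      # Post-test probability from a likelihood ratio, compared with the quoted
      # test and treatment thresholds.
      def posttest_probability(pretest_p, likelihood_ratio):
          pretest_odds = pretest_p / (1 - pretest_p)
          posttest_odds = pretest_odds * likelihood_ratio
          return posttest_odds / (1 + posttest_odds)

      pretest = 0.27                      # ED prevalence of nongonococcal septic arthritis
      test_threshold, treat_threshold = 0.05, 0.39

      for label, lr in [("sWBC 25-50 x 10^9/L", 1.06),
                        ("sWBC 50-100 x 10^9/L", 3.59),
                        ("recent joint surgery", 6.9)]:
          p = posttest_probability(pretest, lr)
          if p < test_threshold:
              action = "no further testing or treatment"
          elif p < treat_threshold:
              action = "arthrocentesis / further testing"
          else:
              action = "treat empirically"
          print(f"{label:24s} posttest p = {p:.2f} -> {action}")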

  7. The coolest DA white dwarfs detected at soft X-ray wavelengths

    NASA Technical Reports Server (NTRS)

    Kidder, K. M.; Holberg, J. B.; Barstow, M. A.; Tweedy, R. W.; Wesemael, F.

    1992-01-01

    New soft X-ray/EUV photometric observations of the DA white dwarfs KPD 0631 + 1043 = WD 0631 + 107 and PG 1113 + 413 = WD 1113 + 413 are analyzed. Previously reported soft X-ray detections of three other DAs and the failure to detect a fourth DA in deep Exosat observations are investigated. New ground-based spectra are presented for all of the objects, with IUE Ly-alpha spectra for some. These data are used to constrain the effective temperatures and surface gravities. The improved estimates of these parameters are employed to infer a photospheric He abundance for the hotter objects and to elucidate an effective observational low-temperature threshold for the detection of pure hydrogen DA white dwarfs at soft X-ray wavelengths.

  8. A novel approach to estimation of the time to biomarker threshold: applications to HIV.

    PubMed

    Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc

    2016-11-01

    In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
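
    In the simplest reading of such a model, a subject-specific linear trajectory crosses a threshold at a time that follows directly from the subject's intercept and slope. The sketch below uses that simplification on a square-root CD4 scale; it ignores the two-consecutive-measurements criterion and the measurement-error handling that the paper's estimator provides, and all numbers are invented.

      # Crossing time of a subject-specific linear trajectory; simplified illustration only.
      import numpy as np

      def expected_time_to_threshold(intercept_i, slope_i, threshold):
          """Time at which intercept_i + slope_i * t crosses the threshold
          (only meaningful for a declining trajectory, slope_i < 0)."""
          if slope_i >= 0:
              return np.inf
          return (threshold - intercept_i) / slope_i

      # Subject-specific estimates = fixed effects + predicted random effects
      beta0, beta1 = 25.0, -0.9            # population intercept and annual slope (sqrt CD4)
      b0_i, b1_i = 1.5, -0.3               # subject's random effects
      threshold = np.sqrt(350.0)           # CD4 = 350 cells/uL on the square-root scale

      t_i = expected_time_to_threshold(beta0 + b0_i, beta1 + b1_i, threshold)
      print(f"expected time to first CD4 < 350: {t_i:.1f} years")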

  9. Estimation of Effect Thresholds for the Development of Water Quality Criteria

    EPA Science Inventory

    Biological and ecological effect thresholds can be used for determining safe levels of nontraditional stressors. The U.S. EPA Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (WQC) [36] uses a risk assessment approach to estimate effect thre...

  10. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System... Purchases at or under the simplified acquisition threshold. Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold, but not the...

  11. Association of daily asthma emergency department visits and hospital admissions with ambient air pollutants among the pediatric Medicaid population in Detroit: time-series and time-stratified case-crossover analyses with threshold effects.

    PubMed

    Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar

    2011-11-01

    Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes have been relatively unexplored, which are of potential interest for exploring concentration-response relationships. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan, and concentrations of the pollutants fine particulate matter (PM2.5), CO, NO2, and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg m⁻³ using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared with the standard linear relationship; e.g., in the time-series analysis, an interquartile-range increase (9.2 μg m⁻³) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
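
    The profile-likelihood idea can be sketched by refitting a Poisson regression with a hinge term max(x - τ, 0) over a grid of candidate thresholds τ and keeping the value with the highest log-likelihood. The data below are synthetic and the model is deliberately bare (no confounders, lags, or case-crossover matching); it is not the authors' analysis code.

      # Profile-likelihood threshold estimation in a bare Poisson regression.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n_days = 1000
      pm25 = rng.gamma(shape=3.0, scale=5.0, size=n_days)                 # ug/m3
      true_threshold, beta = 12.0, 0.03
      mu = np.exp(1.0 + beta * np.maximum(pm25 - true_threshold, 0.0))
      visits = rng.poisson(mu)                                            # daily ED visits

      def profile_loglik(threshold):
          X = sm.add_constant(np.maximum(pm25 - threshold, 0.0))
          return sm.GLM(visits, X, family=sm.families.Poisson()).fit().llf

      candidates = np.arange(5.0, 25.0, 0.5)
      best = candidates[int(np.argmax([profile_loglik(t) for t in candidates]))]
      print(f"profile-likelihood threshold estimate: {best:.1f} ug/m3")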

  12. Demand for Colonoscopy in Colorectal Cancer Screening Using a Quantitative Fecal Immunochemical Test and Age/Sex-Specific Thresholds for Test Positivity.

    PubMed

    Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi

    2018-06-01

    Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 AACR. ©2018 American Association for Cancer Research.

  13. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared with the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  14. Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    2016-09-01

    This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
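
    The catalog-based reward described above reduces to a simple rule: a penalty at every step while any object's total position error exceeds the 30 km threshold, and a positive reward once the whole catalog meets the requirement. The sketch below states that rule; the penalty and bonus magnitudes are arbitrary assumptions.

      # Catalog-uncertainty reward: penalize until every tracked object is below
      # the position-error threshold, then reward.
      import numpy as np

      POSITION_ERROR_THRESHOLD_KM = 30.0

      def catalog_reward(position_errors_km, penalty=-1.0, bonus=10.0):
          """position_errors_km: current total position uncertainty of each space object."""
          if np.all(np.asarray(position_errors_km) <= POSITION_ERROR_THRESHOLD_KM):
              return bonus          # whole catalog meets the accuracy requirement
          return penalty            # keep penalizing slow policies at every step

      print(catalog_reward([12.0, 45.0, 8.0]))   # -1.0: one object still above threshold
      print(catalog_reward([12.0, 22.0, 8.0]))   # 10.0: catalog goal achieved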

  15. Evaluation of bone formation in calcium phosphate scaffolds with μCT-method validation using SEM.

    PubMed

    Lewin, S; Barba, A; Persson, C; Franch, J; Ginebra, M-P; Öhman-Mägi, C

    2017-10-05

    There is a plethora of calcium phosphate (CaP) scaffolds used as synthetic substitutes to bone grafts. The scaffold performance is often evaluated from the quantity of bone formed within or in direct contact with the scaffold. Micro-computed tomography (μCT) allows three-dimensional evaluation of bone formation inside scaffolds. However, the almost identical x-ray attenuation of CaP and bone hinders the separation of these phases in μCT images. Commonly, segmentation of bone in μCT images is based on gray scale intensity, with manually determined global thresholds. However, image analysis methods, and methods for manual thresholding in particular, lack standardization and may consequently suffer from subjectivity. The aim of the present study was to provide a methodological framework for addressing these issues. Bone formation in two types of CaP scaffold architectures (foamed and robocast), obtained from a larger animal study (a 12-week canine animal model), was evaluated by μCT. In addition, cross-sectional scanning electron microscopy (SEM) images were acquired as references to determine thresholds and to validate the result. μCT datasets were registered to the corresponding SEM reference. Global thresholds were then determined by quantitatively correlating the area fractions in the μCT image with the area fractions in the corresponding SEM image. For comparison, area fractions were also quantified using global thresholds determined manually by two different approaches. In the validation, the manually determined thresholds resulted in large average errors in area fraction (up to 17%), whereas for the evaluation using SEM references, the errors were estimated to be less than 3%. Furthermore, it was found that basing the thresholds on one single SEM reference gave lower errors than determining them manually. This study provides an objective, robust and less error prone method to determine global thresholds for the evaluation of bone formation in CaP scaffolds.
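
    The threshold-selection step can be sketched as choosing the global gray-value threshold whose resulting area fraction best matches the bone area fraction measured in the registered SEM reference. The image values and target fraction below are synthetic, and the snippet ignores the registration and phase separation handled in the actual pipeline.

      # Pick the global threshold whose area fraction best matches the SEM reference.
      import numpy as np

      rng = np.random.default_rng(9)
      uct_slice = np.clip(rng.normal(0.45, 0.15, size=(256, 256)), 0.0, 1.0)  # normalized grays
      sem_area_fraction = 0.30                    # bone area fraction from the SEM reference

      candidates = np.linspace(0.0, 1.0, 201)
      fractions = np.array([(uct_slice >= t).mean() for t in candidates])
      best = candidates[int(np.argmin(np.abs(fractions - sem_area_fraction)))]
      print(f"selected global threshold: {best:.3f} "
            f"(area fraction {np.mean(uct_slice >= best):.3f} vs SEM {sem_area_fraction:.3f})")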

  16. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
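    A brief illustration of the core idea, with simulated data standing in for radiometer observations: fit a logistic regression for the probability that pixel rain rate exceeds a fixed threshold given covariates, then average the fitted probabilities to estimate the fractional rainy area. Covariates, coefficients, and the 5 mm/h threshold are assumptions for the example.

    ```python
    # Hedged sketch: logistic regression for P(rain rate > threshold | covariates).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    covariates = rng.normal(size=(n, 2))                 # e.g., channel brightness temperatures
    rain_rate = np.exp(0.8 * covariates[:, 0] - 0.5 * covariates[:, 1] + rng.normal(0.0, 0.5, n))
    exceeds = (rain_rate > 5.0).astype(int)              # indicator: rain rate above the threshold

    model = LogisticRegression().fit(covariates, exceeds)
    p_exceed = model.predict_proba(covariates)[:, 1]
    print(f"estimated fractional rainy area: {p_exceed.mean():.3f} "
          f"(observed: {exceeds.mean():.3f})")
    ```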

  17. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.

  18. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given a fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.

  19. Evaluation of different radon guideline values based on characterization of ecological risk and visualization of lung cancer mortality trends in British Columbia, Canada.

    PubMed

    Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B

    2015-11-19

    There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between different regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m(-3) value, while Health Canada recommends 200 Bq m(-3). Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m(-3) were identified, and classes of radon vulnerability were defined based on whether the observed 95(th) percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m(-3) were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m(-3) for the province.

  20. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  1. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition

  2. Auditory steady-state responses to MM and exponential envelope AM(2)/FM stimuli in normal-hearing adults.

    PubMed

    D'haenens, Wendy; Dhooge, Ingeborg; De Vel, Eddy; Maes, Leen; Bockstael, Annelies; Vinck, Bart M

    2007-08-01

    The present study utilized a commercially available multiple auditory steady-state response (ASSR) system to test normal-hearing adults (n=55). The primary objective was to evaluate the impact of the mixed modulation (MM) and the newly proposed exponential AM(2)/FM stimuli on the signal-to-noise ratio (SNR) and threshold estimation accuracy, through a within-subject comparison. The second aim was to establish a normative database for both stimulus types. The results demonstrated that the AM(2)/FM and MM stimuli had a similar effect on the SNR, whereas the ASSR threshold results revealed that the AM(2)/FM produced better thresholds than the MM stimulus for the 500, 1000, and 4000 Hz carrier frequencies. The mean difference scores to tones of 500, 1000, 2000, and 4000 Hz were, for the MM stimulus, 20+/-12, 14+/-9, 10+/-8, and 12+/-8 dB; and for the AM(2)/FM stimulus, 18+/-13, 12+/-8, 11+/-8, and 10+/-8 dB, respectively. The current research confirms that the AM(2)/FM stimulus can be used efficiently to test normal-hearing adults.

  3. Empirical estimation of genome-wide significance thresholds based on the 1000 Genomes Project data set.

    PubMed

    Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori

    2016-10-01

    To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P = 5.0 × 10(-8), the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were P(sig) = 3.24 × 10(-8) (AFR), 9.26 × 10(-8) (EUR), 1.83 × 10(-7) (AMR), 1.61 × 10(-7) (EAS) and 9.46 × 10(-8) (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded P(sig) = 3.25 × 10(-8) (ALL) and 4.20 × 10(-8) (ΔAFR). Our results indicate that the current threshold (P = 5.0 × 10(-8)) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples.
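    A toy version of the empirical approach, with independent tests instead of the linkage-disequilibrium structure of the 1000 Genomes haplotypes: simulate many null GWAS, record the minimum p-value in each, and take the 5th percentile so that the family-wise error rate is 5%. The numbers of tests and replicates are arbitrary choices for the sketch.

    ```python
    # Hedged sketch: empirical genome-wide significance threshold from null simulations.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_replicates, n_tests = 1000, 100_000
    p_min = np.empty(n_replicates)
    for i in range(n_replicates):
        z = rng.standard_normal(n_tests)                  # null association z-scores
        p_min[i] = (2 * stats.norm.sf(np.abs(z))).min()   # smallest p-value in this "GWAS"
    empirical_threshold = np.quantile(p_min, 0.05)        # 5% family-wise error rate
    print(f"empirical threshold: {empirical_threshold:.2e} "
          f"(Bonferroni: {0.05 / n_tests:.2e})")
    # With independent tests this approaches the Bonferroni value; correlated variants
    # in real data make the empirical threshold less stringent.
    ```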

  4. Determination of Cost-Effectiveness Threshold for Health Care Interventions in Malaysia.

    PubMed

    Lim, Yen Wei; Shafie, Asrul Akmal; Chua, Gin Nie; Ahmad Hassali, Mohammed Azmi

    2017-09-01

    One major challenge in prioritizing health care using cost-effectiveness (CE) information is when alternatives are more expensive but more effective than existing technology. In such a situation, an external criterion in the form of a CE threshold that reflects the willingness to pay (WTP) per quality-adjusted life-year is necessary. To determine a CE threshold for health care interventions in Malaysia. A cross-sectional, contingent valuation study was conducted using a stratified multistage cluster random sampling technique in four states in Malaysia. One thousand thirteen respondents were interviewed in person for their socioeconomic background, quality of life, and WTP for a hypothetical scenario. The CE thresholds established using the nonparametric Turnbull method ranged from MYR12,810 to MYR22,840 (~US $4,000-US $7,000), whereas those estimated with the parametric interval regression model were between MYR19,929 and MYR28,470 (~US $6,200-US $8,900). Key factors that affected the CE thresholds were education level, estimated monthly household income, and the description of health state scenarios. These findings suggest that there is no single WTP value for a quality-adjusted life-year. The CE threshold estimated for Malaysia was found to be lower than the threshold value recommended by the World Health Organization. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST.

    PubMed

    Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger

    2018-05-10

    Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data, which results in poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., fewer than 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with less than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent collection of biased data. This could have important implications in assessment practice by shortening experiments and improving estimates, especially for clinical settings.

  6. Visuomotor sensitivity to visual information about surface orientation.

    PubMed

    Knill, David C; Kersten, Daniel

    2004-03-01

    We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.

  7. Estimation of the geochemical threshold and its statistical significance

    USGS Publications Warehouse

    Miesch, A.T.

    1981-01-01

    A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
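    A loose sketch of the flavor of the statistic, under one plausible reading: standardize the log-transformed values, weight each gap between adjacent ordered values by the normal density expected at its midpoint, and take the largest adjusted gap as the candidate threshold. This is illustrative only; it is not Miesch's exact adjustment or its significance test, and the data are synthetic.

    ```python
    # Hedged sketch of an adjusted-gap style threshold estimate (illustrative reading only).
    import numpy as np
    from scipy import stats

    def adjusted_gap_threshold(values):
        x = np.log(values)
        z = np.sort((x - x.mean()) / x.std(ddof=1))      # standardized, ordered array
        gaps = np.diff(z)
        midpoints = (z[:-1] + z[1:]) / 2.0
        adjusted = gaps * stats.norm.pdf(midpoints)      # adjust gaps for expected frequency
        i = int(np.argmax(adjusted))
        return midpoints[i], adjusted[i]                 # candidate threshold (standardized units)

    rng = np.random.default_rng(4)
    background = rng.lognormal(mean=1.0, sigma=0.3, size=300)
    anomalous = rng.lognormal(mean=2.5, sigma=0.2, size=10)
    thr_z, gap = adjusted_gap_threshold(np.concatenate([background, anomalous]))
    print(f"candidate threshold at z = {thr_z:.2f} (adjusted gap = {gap:.3f})")
    ```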

  8. Note: A manifold ranking based saliency detection method for camera.

    PubMed

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research focused on salient object regions in natural scenes has attracted a lot of attention in computer vision and has been widely used in many applications like object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. In order to deal with the problem, this paper presents a novel approach based on the human visual system, which works better through the use of both a background prior and a compactness prior. In the proposed method, we eliminate the unsuitable boundary with a fixed threshold to optimize the image boundary selection, which can provide more precise estimations. Then, the object detection, which is optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. The experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.

  9. Bioclimatic Thresholds, Thermal Constants and Survival of Mealybug, Phenacoccus solenopsis (Hemiptera: Pseudococcidae) in Response to Constant Temperatures on Hibiscus

    PubMed Central

    Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi

    2013-01-01

    Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae) were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai’s linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explained the field abundance of P. solenopsis on its host plants. PMID:24086597
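    For context, the sketch below shows the standard linear degree-day calculation that underlies lower-threshold and thermal-constant estimates of this kind: regress developmental rate (1/days) on rearing temperature, take the x-intercept as the lower threshold and the reciprocal of the slope as the thermal constant. The temperatures and development times are invented, not the study's data.

    ```python
    # Hedged sketch: linear degree-day model (rate = a + b*T), illustrative values only.
    import numpy as np

    temps = np.array([15.0, 20.0, 25.0, 27.0, 30.0, 32.0])       # rearing temperatures (deg C)
    dev_days = np.array([60.0, 38.0, 27.0, 24.0, 21.0, 19.5])    # days to complete development
    rate = 1.0 / dev_days                                        # developmental rate

    slope, intercept = np.polyfit(temps, rate, 1)
    t_min = -intercept / slope               # lower developmental threshold (x-intercept)
    thermal_constant = 1.0 / slope           # degree-days above t_min to complete development
    print(f"Tmin = {t_min:.1f} C, thermal constant = {thermal_constant:.0f} degree-days")
    ```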

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher

    Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods with over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The result indicates that the urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban area in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.

  11. Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.

    PubMed

    Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John

    2018-03-01

    Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.

  12. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

    Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates if the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
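    A minimal sketch of the feed-forward open-set logic described above: a base classifier proposes the most likely known class, and the example is rejected as unknown when the posterior for that class falls below a per-class threshold. Here the base classifier is a Platt-calibrated SVM on synthetic blobs, and the thresholds are fixed by hand rather than selected iteratively as in the cited algorithm.

    ```python
    # Hedged sketch: posterior-thresholded rejection for open set recognition.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_blobs

    X, y = make_blobs(n_samples=300, centers=3, random_state=0)
    base = SVC(probability=True, random_state=0).fit(X, y)   # Platt-calibrated SVM base classifier
    thresholds = np.array([0.7, 0.7, 0.7])                   # hand-picked per-class rejection thresholds

    def open_set_predict(x):
        probs = base.predict_proba(x.reshape(1, -1))[0]
        candidate = int(np.argmax(probs))                    # step 1: most likely known class
        if probs[candidate] < thresholds[candidate]:         # step 2: open-set membership check
            return -1                                        # -1 denotes "unknown class"
        return candidate

    print(open_set_predict(X[0]))                            # a training point: likely a known class
    print(open_set_predict(np.array([20.0, 20.0])))          # a far-away point: likely rejected
    ```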

  13. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  14. Geostatistical applications in environmental remediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, R.N.; Purucker, S.T.; Lyon, B.F.

    1995-02-01

    Geostatistical analysis refers to a collection of statistical methods for addressing data that vary in space. By incorporating spatial information into the analysis, geostatistics has advantages over traditional statistical analysis for problems with a spatial context. Geostatistics has a history of success in earth science applications, and its popularity is increasing in other areas, including environmental remediation. Due to recent advances in computer technology, geostatistical algorithms can be executed at a speed comparable to many standard statistical software packages. When used responsibly, geostatistics is a systematic and defensible tool that can be used in various decision frameworks, such as the Data Quality Objectives (DQO) process. At every point in the site, geostatistics can estimate both the concentration level and the probability or risk of exceeding a given value. Using these probability maps can assist in identifying clean-up zones. Given any decision threshold and an acceptable level of risk, the probability maps identify those areas that are estimated to be above or below the acceptable risk. Those areas that are above the threshold are of the most concern with regard to remediation. In addition to estimating clean-up zones, geostatistics can assist in designing cost-effective secondary sampling schemes. Those areas of the probability map with high levels of estimated uncertainty are areas where more secondary sampling should occur. In addition, geostatistics has the ability to incorporate soft data directly into the analysis. These data include historical records, a highly correlated secondary contaminant, or expert judgment. In environmental remediation, geostatistics is a tool that, in conjunction with other methods, can provide a common forum for building consensus.
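    A hedged illustration of the probability-map idea on a synthetic grid, assuming Gaussian estimation errors for simplicity: convert each node's estimate and standard error into a probability of exceeding the decision threshold, then flag nodes whose exceedance probability is above the acceptable risk as clean-up candidates. A real workflow would obtain the estimates and errors from kriging.

    ```python
    # Hedged sketch: probability-of-exceedance map for clean-up zone delineation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    estimate = rng.uniform(0.0, 100.0, size=(50, 50))   # estimated concentration at each node
    std_err = rng.uniform(5.0, 20.0, size=(50, 50))     # estimation (kriging) standard error

    action_level = 60.0                                  # decision threshold (e.g., mg/kg)
    acceptable_risk = 0.2                                # tolerated probability of exceedance

    p_exceed = stats.norm.sf(action_level, loc=estimate, scale=std_err)
    cleanup_zone = p_exceed > acceptable_risk            # cells flagged for remediation
    print(f"{cleanup_zone.mean():.1%} of the site flagged for clean-up")
    ```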

  15. Application of a Threshold Method to the TRMM Radar for the Estimation of Space-Time Rain Rate Statistics

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    1997-01-01

    One of the TRMM radar products of interest is the monthly-averaged rain rates over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so that the negative bias of the high resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
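    A small numerical illustration of the relationship the threshold method exploits, using synthetic lognormal rain fields rather than radar data: for each scene, compute the fraction of pixels above roughly 5 mm/h and the area-average rain rate, then examine how tightly the two are related.

    ```python
    # Hedged sketch: fractional area above a threshold vs. area-average rain rate.
    import numpy as np

    rng = np.random.default_rng(6)
    threshold = 5.0                                       # mm/h
    frac_area, mean_rain = [], []
    for _ in range(200):                                  # 200 synthetic "scenes"
        rain = rng.lognormal(mean=rng.uniform(-1.0, 1.0), sigma=1.0, size=10_000)
        frac_area.append((rain > threshold).mean())
        mean_rain.append(rain.mean())

    slope, intercept = np.polyfit(frac_area, mean_rain, 1)
    r = np.corrcoef(frac_area, mean_rain)[0, 1]
    print(f"area-average rain ~ {slope:.1f} * fractional area + {intercept:.2f} (r = {r:.2f})")
    ```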

  16. Comparison between ABR with click and narrow band chirp stimuli in children.

    PubMed

    Zirn, Stefan; Louza, Julia; Reiman, Viktor; Wittlinger, Natalie; Hempel, John-Martin; Schuster, Maria

    2014-08-01

    Click and chirp-evoked auditory brainstem responses (ABR) are applied for the estimation of hearing thresholds in children. The present study analyzes ABR thresholds across a large sample of children's ears obtained with both methods. The aim was to demonstrate the correlation between both methods using narrow band chirp and click stimuli. Click and chirp evoked ABRs were measured in 253 children aged from 0 to 18 years to determine their individual auditory threshold. The delay-compensated stimuli were narrow band CE chirps with either 2000 Hz or 4000 Hz center frequencies. Measurements were performed consecutively during natural sleep, and under sedation or general anesthesia. Threshold estimation was performed for each measurement by two experienced audiologists. Pearson-correlation analysis revealed highly significant correlations (r=0.94) between click and chirp derived thresholds for both 2 kHz and 4 kHz chirps. No considerable differences were observed either between different age ranges or gender. Comparing the thresholds estimated using ABR with click stimuli and chirp stimuli, only 0.8-2% of the 2000 Hz NB-chirp and 0.4-1.2% of the 4000 Hz NB-chirp measurements differed by more than 15 dB for different degrees of hearing loss or normal hearing. The results suggest that either NB-chirp or click ABR is sufficient for threshold estimation. This holds for the chirp frequencies of 2000 Hz and 4000 Hz. The use of either click- or chirp-evoked ABR allows a reduction of recording time in young infants. Nevertheless, to cross-check the results of one of the methods, we recommend measurements with the other method as well. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Acoustic Reflexes in Normal-Hearing Adults, Typically Developing Children, and Children with Suspected Auditory Processing Disorder: Thresholds, Real-Ear Corrections, and the Role of Static Compliance on Estimates.

    PubMed

    Saxena, Udit; Allan, Chris; Allen, Prudence

    2017-06-01

    Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear such as ear canal volume and static compliance of the middle ear could possibly affect the measurements of reflex thresholds and thus impact its interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies were conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to that of typically developing children and adults to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured in 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend for higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The lack of correlation between threshold and static compliance estimates in children as was observed in the adults may suggest a nonmechanical explanation for age and clinically related effects. American Academy of Audiology

  18. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
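    As an example of one of the listed algorithms, the sketch below implements the Ridler-Calvard (isodata) clustering threshold: iterate the threshold toward the midpoint of the mean intensity below and above it until it stabilizes. It is applied here to a synthetic "hot region on background" image, not to PET data.

    ```python
    # Hedged sketch: Ridler-Calvard (isodata) automatic threshold on a synthetic image.
    import numpy as np

    def ridler_threshold(img, tol=1e-3, max_iter=100):
        t = img.mean()                                    # initial guess
        for _ in range(max_iter):
            low, high = img[img <= t], img[img > t]
            t_new = 0.5 * (low.mean() + high.mean())      # midpoint of the two class means
            if abs(t_new - t) < tol:
                break
            t = t_new
        return t

    rng = np.random.default_rng(7)
    img = rng.normal(1.0, 0.2, size=(64, 64))             # background "uptake"
    img[20:40, 20:40] += 4.0                              # hot region standing in for a sphere
    print(f"isodata threshold: {ridler_threshold(img):.2f}")
    ```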

  19. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  20. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
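    A minimal sketch of the triggering rule, under assumed numbers and an assumed (linear) form for how the threshold relaxes with distance to the reference location: compute the DRMS from the position block of the covariance matrix and request a measurement when it exceeds the current threshold.

    ```python
    # Hedged sketch: DRMS-based event triggering for measurement requests.
    import numpy as np

    def drms(P, pos_idx=(0, 1)):
        """Distance root mean squared error from the position sub-block of covariance P."""
        return float(np.sqrt(sum(P[i, i] for i in pos_idx)))

    def measurement_requested(P, dist_to_reference, base_threshold=2.0, gain=0.05):
        threshold = base_threshold + gain * dist_to_reference   # assumed adaptive threshold form
        return drms(P) > threshold

    P = np.diag([2.5, 2.0, 0.1, 0.1])       # toy covariance for state [x, y, vx, vy]; DRMS ~ 2.12
    print(measurement_requested(P, dist_to_reference=10.0))   # False: threshold relaxed to 2.5
    print(measurement_requested(P, dist_to_reference=0.0))    # True: threshold tightened to 2.0
    ```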

  1. Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images

    USGS Publications Warehouse

    Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.

    2011-01-01

    The objective of this study was to provide a moderate resolution 30-m fractional water map of the Chenier Plain for 2003, 2006 and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data were used to train a decision-tree classifier to estimate percent water in a lower resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent water maps involved over 5,000 unambiguously classified random points using corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient, percent correctly classified (PCC) water, land and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed. Using the validation points, the PCC values decreased to around 90%. The time series change analysis indicated that due to Hurricane Rita, the study area lost 6.5% of marsh area, and transient changes were less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.
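    A sketch of the threshold-optimization step on synthetic stand-ins for the classified percent-water map and the photo-interpreted reference points: sweep candidate thresholds and keep the one that maximizes the Kappa coefficient.

    ```python
    # Hedged sketch: selecting the land/water threshold that maximizes Cohen's kappa.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(8)
    truth = rng.integers(0, 2, size=5000)                        # reference: 0 = land, 1 = water
    percent_water = np.clip(truth * 0.6 + rng.normal(0.2, 0.2, 5000), 0.0, 1.0)

    thresholds = np.linspace(0.05, 0.95, 19)
    kappas = [cohen_kappa_score(truth, (percent_water >= t).astype(int)) for t in thresholds]
    best = thresholds[int(np.argmax(kappas))]
    print(f"optimal threshold {best:.2f} with Kappa {max(kappas):.3f}")
    ```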

  2. Income Eligibility Thresholds, Premium Contributions, and Children's Coverage Outcomes: A Study of CHIP Expansions

    PubMed Central

    Gresenz, Carole Roan; Edgington, Sarah E; Laugesen, Miriam J; Escarce, José J

    2013-01-01

    Objective To understand the effects of Children's Health Insurance Program (CHIP) income eligibility thresholds and premium contribution requirements on health insurance coverage outcomes among children. Data Sources 2002–2009 Annual Social and Economic Supplements of the Current Population Survey linked to data from multiple secondary data sources. Study Design We use a selection correction model to simultaneously estimate program eligibility and coverage outcomes conditional upon eligibility. We simulate the effects of three premium schedules representing a range of generosity levels and the effects of income eligibility thresholds ranging from 200 to 400 percent of the federal poverty line. Principal Findings Premium contribution requirements decrease enrollment in public coverage and increase enrollment in private coverage, with larger effects for greater contribution levels. Our simulation results suggest minimal changes in coverage outcomes from eligibility expansions to higher income families under premium schedules that require more than a modest contribution (medium or high schedules). Conclusions Our simulation results are useful counterpoints to previous research that has estimated the average effect of program expansions as they were implemented without disentangling the effects of premiums or other program features. The sensitivity to premiums observed suggests that although contribution requirements may be effective in reducing crowd-out, they also have the potential, depending on the level of contribution required, to nullify the effects of CHIP expansions entirely. The persistence of uninsurance among children under the range of simulated scenarios points to the importance of Affordable Care Act provisions designed to make the process of obtaining coverage transparent and navigable. PMID:23398477

  3. Using CART to Identify Thresholds and Hierarchies in the Determinants of Funding Decisions.

    PubMed

    Schilling, Chris; Mortimer, Duncan; Dalziel, Kim

    2017-02-01

    There is much interest in understanding decision-making processes that determine funding outcomes for health interventions. We use classification and regression trees (CART) to identify cost-effectiveness thresholds and hierarchies in the determinants of funding decisions. The hierarchical structure of CART is suited to analyzing complex conditional and nonlinear relationships. Our analysis uncovered hierarchies where interventions were grouped according to their type and objective. Cost-effectiveness thresholds varied markedly depending on which group the intervention belonged to: lifestyle-type interventions with a prevention objective had an incremental cost-effectiveness threshold of $2356, suggesting that such interventions need to be close to cost saving or dominant to be funded. For lifestyle-type interventions with a treatment objective, the threshold was much higher at $37,024. Lower down the tree, intervention attributes such as the level of patient contribution and the eligibility for government reimbursement influenced the likelihood of funding within groups of similar interventions. Comparison between our CART models and previously published results demonstrated concurrence with standard regression techniques while providing additional insights regarding the role of the funding environment and the structure of decision-maker preferences.
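    An illustrative sketch of the general technique: fit a shallow classification tree to funding decisions so that its splits expose cost-effectiveness cut points that differ by intervention group. The data are simulated around thresholds of the same order as those reported, and the printed splits are not the study's results.

    ```python
    # Hedged sketch: CART exposing group-specific cost-effectiveness thresholds (simulated data).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(9)
    n = 500
    icer = rng.uniform(0, 60_000, n)                   # incremental cost-effectiveness ratio ($/QALY)
    is_prevention = rng.integers(0, 2, n)              # 1 = prevention objective, 0 = treatment
    funded = ((is_prevention == 1) & (icer < 2_500)) | ((is_prevention == 0) & (icer < 37_000))

    X = np.column_stack([icer, is_prevention])
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, funded)
    print(export_text(tree, feature_names=["icer", "is_prevention"]))
    ```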

  4. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.

  5. Compositional threshold for Nuclear Waste Glass Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-04-24

    Within the composition space of glasses, a distinct threshold appears to exist that separates "good" glasses, i.e., those which are sufficiently durable, from "bad" glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region.

  6. High-spatial resolution multispectral and panchromatic satellite imagery for mapping perennial desert plants

    NASA Astrophysics Data System (ADS)

    Alsharrah, Saad A.; Bruce, David A.; Bouabid, Rachid; Somenahalli, Sekhar; Corcoran, Paul A.

    2015-10-01

    The use of remote sensing techniques to extract vegetation cover information for the assessment and monitoring of land degradation in arid environments has gained increased interest in recent years. However, such a task can be challenging, especially for medium-spatial resolution satellite sensors, due to soil background effects and the distribution and structure of perennial desert vegetation. In this study, we utilised Pleiades high-spatial resolution, multispectral (2m) and panchromatic (0.5m) imagery and focused on mapping small shrubs and low-lying trees using three classification techniques: 1) vegetation indices (VI) threshold analysis, 2) pre-built object-oriented image analysis (OBIA), and 3) a developed vegetation shadow model (VSM). We evaluated the success of each approach using a root of the sum of the squares (RSS) metric, which incorporated field data as control and three error metrics relating to commission, omission, and percent cover. Results showed that optimum VI performers returned good vegetation cover estimates at certain thresholds, but failed to accurately map the distribution of the desert plants. Using the pre-built IMAGINE Objective OBIA approach, we improved the vegetation distribution mapping accuracy, but this came at the cost of over classification, similar to results of lowering VI thresholds. We further introduced the VSM which takes into account shadow for further refining vegetation cover classification derived from VI. The results showed significant improvements in vegetation cover and distribution accuracy compared to the other techniques. We argue that the VSM approach using high-spatial resolution imagery provides a more accurate representation of desert landscape vegetation and should be considered in assessments of desertification.

  7. Regional rainfall thresholds for landslide occurrence using a centenary database

    NASA Astrophysics Data System (ADS)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia

    2017-04-01

    Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower limit and upper limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess the thresholds performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides that occurred in Portugal from 1865 to 2010 and caused fatalities, injuries, missing people, and evacuated and homeless people. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the DISASTER database was recently updated to include also the landslides that did not cause any human damage, which were also considered for this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of the rainfall data, with records that started in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity-duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity-duration) with the highest return period was considered as the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included. This criterion reduces the likelihood of including events whose triggering factor was something other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Because this threshold allows the existence of false negatives (i.e., events below the threshold), lower limit and upper limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positive occurrence (upper limit). The zone between the lower limit and upper limit rainfall thresholds was analysed using a probabilistic approach, defining the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
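    A hedged sketch of the return-period step, assuming one common implementation: fit a Gumbel distribution to annual maxima of the D-day cumulative rainfall and convert the cumulative rainfall preceding an event into a return period. The rainfall values are synthetic; the study works with the Lisboa-Geofísico daily series for durations of 1 to 90 days.

    ```python
    # Hedged sketch: Gumbel return period for a D-day cumulative rainfall amount.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    duration = 30                                                # days of cumulative rainfall
    annual_maxima = rng.gumbel(loc=250.0, scale=60.0, size=100)  # 100 years of D-day annual maxima (mm)

    loc, scale = stats.gumbel_r.fit(annual_maxima)
    event_rainfall = 450.0                                       # cumulative rainfall before a landslide (mm)
    return_period = 1.0 / stats.gumbel_r.sf(event_rainfall, loc, scale)
    print(f"{duration}-day total of {event_rainfall:.0f} mm ~ {return_period:.0f}-year return period")
    ```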

  8. Cost-effectiveness of different strategies for diagnosis of uncomplicated urinary tract infections in women presenting in primary care

    PubMed Central

    Coupé, Veerle M. H.; Knottnerus, Bart J.; Geerlings, Suzanne E.; Moll van Charante, Eric P.; ter Riet, Gerben

    2017-01-01

    Background Uncomplicated Urinary Tract Infections (UTIs) are common in primary care resulting in substantial costs. Since antimicrobial resistance against antibiotics for UTIs is rising, accurate diagnosis is needed in settings with low rates of multidrug-resistant bacteria. Objective To compare the cost-effectiveness of different strategies to diagnose UTIs in women who contacted their general practitioner (GP) with painful and/or frequent micturition between 2006 and 2008 in and around Amsterdam, The Netherlands. Methods This is a model-based cost-effectiveness analysis using data from 196 women who underwent four tests: history, urine stick, sediment, dipslide, and the gold standard, a urine culture. Decision trees were constructed reflecting 15 diagnostic strategies comprising different parallel and sequential combinations of the four tests. Using the decision trees, for each strategy the costs and the proportion of women with a correct positive or negative diagnosis were estimated. Probabilistic sensitivity analysis was used to estimate uncertainty surrounding costs and effects. Uncertainty was presented using cost-effectiveness planes and acceptability curves. Results Most sequential testing strategies resulted in higher proportions of correctly classified women and lower costs than parallel testing strategies. For different willingness to pay thresholds, the most cost-effective strategies were: 1) performing a dipstick after a positive history for thresholds below €10 per additional correctly classified patient, 2) performing both a history and dipstick for thresholds between €10 and €17 per additional correctly classified patient, 3) performing a dipstick if history was negative, followed by a sediment if the dipstick was negative for thresholds between €17 and €118 per additional correctly classified patient, 4) performing a dipstick if history was negative, followed by a dipslide if the dipstick was negative for thresholds above €118 per additional correctly classified patient. Conclusion Depending on decision makers’ willingness to pay for one additional correctly classified woman, the strategy consisting of performing a history and dipstick simultaneously (ceiling ratios between €10 and €17) or performing a sediment if history and subsequent dipstick are negative (ceiling ratios between €17 and €118) are the most cost-effective strategies to diagnose a UTI. PMID:29186185
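
    The willingness-to-pay reasoning above can be made concrete with a small net-benefit calculation; the strategy names, costs and accuracy figures below are hypothetical placeholders rather than values from the study.

    ```python
    # Pick the strategy maximizing net benefit = WTP * proportion_correct - cost,
    # for a given willingness-to-pay (WTP) per additional correctly classified woman.
    strategies = {
        "dipstick after positive history": {"cost": 2.0, "correct": 0.70},
        "history and dipstick in parallel": {"cost": 3.0, "correct": 0.78},
        "dipstick then sediment if negative": {"cost": 7.0, "correct": 0.86},
    }

    def preferred(strategies, wtp):
        return max(strategies, key=lambda s: wtp * strategies[s]["correct"] - strategies[s]["cost"])

    for wtp in (5, 20, 100):
        print(wtp, preferred(strategies, wtp))
    ```

    As the willingness-to-pay rises, the preferred strategy shifts toward the more expensive but more accurate options, with the crossover points playing the role of the €10, €17 and €118 ceiling ratios reported above.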

  9. Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats

    PubMed Central

    Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen

    2009-01-01

    Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives Compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on average percent drug lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage significantly differed under the three criterion rules. The squad that was trained 2nd may have benefited by partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed, and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimations closer to absolute threshold can be obtained when criteria are employed that are permissive, and that allow rats to maintain lever preferences. PMID:19082992

  10. Aging deteriorated perception of urge-to-cough without changing cough reflex threshold to citric acid in female never-smokers.

    PubMed

    Ebihara, Satoru; Ebihara, Takae; Kanezaki, Masashi; Gui, Peijun; Yamasaki, Miyako; Arai, Hiroyuki; Kohzuki, Masahiro

    2011-06-28

    The effect of aging on the cognitive aspect of cough has not yet been studied. The purpose of this study was to investigate the effect of aging on the perception of urge-to-cough in healthy individuals. Fourteen young, female, healthy never-smokers were recruited via public postings. Twelve elderly, female, healthy never-smokers were recruited from a nursing home residence. The cough reflex threshold and the urge-to-cough were evaluated by inhalation of citric acid. The cough reflex sensitivities were defined as the lowest concentration of citric acid that elicited two or more coughs (C2) and five or more coughs (C5). The urge-to-cough was evaluated using a modified Borg scale. There was no significant difference in the cough reflex threshold to citric acid between young and elderly subjects. The urge-to-cough scores at the C2 and C5 concentrations were significantly smaller in the elderly than in the young subjects. The urge-to-cough log-log slope in elderly subjects (0.73 ± 0.71 point · L/g) was significantly shallower than that of young subjects (1.35 ± 0.53 point · L/g, p < 0.01). There were no significant differences in the estimated urge-to-cough threshold between young and elderly subjects. The cough reflex threshold did not differ between young and elderly subjects, whereas the perception of urge-to-cough was significantly decreased in elderly female never-smokers. Objective monitoring of cough might be important in elderly people.

  11. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
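
    A crude one-pass approximation of the idea (not the authors' ADMM algorithm) is to soft-threshold the off-diagonal entries of the sample covariance and then floor the eigenvalues, which already yields a sparse, positive-definite estimate; parameter values are illustrative.

    ```python
    import numpy as np

    def soft_threshold(A, lam):
        """Entrywise soft-thresholding operator."""
        return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

    def sparse_pd_covariance(S, lam=0.1, eps=1e-2):
        """Threshold the sample covariance S, then enforce eigenvalues >= eps."""
        Sigma = soft_threshold(S, lam)
        np.fill_diagonal(Sigma, np.diag(S))            # leave the variances unpenalized
        w, V = np.linalg.eigh((Sigma + Sigma.T) / 2)   # symmetrize before the eigendecomposition
        return (V * np.maximum(w, eps)) @ V.T          # eigenvalue floor gives positive definiteness
    ```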

  12. Big brown bats (Eptesicus fuscus) maintain hearing sensitivity after exposure to intense band-limited noise.

    PubMed

    Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A

    2017-03-01

    Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.

  13. A decision model to estimate a risk threshold for venous thromboembolism prophylaxis in hospitalized medical patients.

    PubMed

    Le, P; Martinez, K A; Pappas, M A; Rothberg, M B

    2017-06-01

    Essentials Low risk patients don't require venous thromboembolism (VTE) prophylaxis; low risk is unquantified. We used a Markov model to estimate the risk threshold for VTE prophylaxis in medical inpatients. Prophylaxis was cost-effective for an average medical patient with a VTE risk of ≥ 1.0%. VTE prophylaxis can be personalized based on patient risk and age/life expectancy. Background Venous thromboembolism (VTE) is a common preventable condition in medical inpatients. Thromboprophylaxis is recommended for inpatients who are not at low risk of VTE, but no specific risk threshold for prophylaxis has been defined. Objective To determine a threshold for prophylaxis based on risk of VTE. Patients/Methods We constructed a decision model with a decision-tree following patients for 3 months after hospitalization, and a lifetime Markov model with 3-month cycles. The model tracked symptomatic deep vein thromboses and pulmonary emboli, bleeding events and heparin-induced thrombocytopenia. Long-term complications included recurrent VTE, post-thrombotic syndrome and pulmonary hypertension. For the base case, we considered medical inpatients aged 66 years, having a life expectancy of 13.5 years, VTE risk of 1.4% and bleeding risk of 2.7%. Patients received enoxaparin 40 mg day⁻¹ for prophylaxis. Results Assuming a willingness-to-pay (WTP) threshold of $100 000/quality-adjusted life year (QALY), prophylaxis was indicated for an average medical inpatient with a VTE risk of ≥ 1.0% up to 3 months after hospitalization. For the average patient, prophylaxis was not indicated when the bleeding risk was > 8.1%, the patient's age was > 73.4 years or the cost of enoxaparin exceeded $60/dose. If VTE risk was < 0.26% or bleeding risk was > 19%, the risks of prophylaxis outweighed benefits. The prophylaxis threshold was relatively insensitive to low-molecular-weight heparin cost and bleeding risk, but very sensitive to patient age and life expectancy. Conclusions The decision to offer prophylaxis should be personalized based on patient VTE risk, age and life expectancy. At a WTP of $100 000/QALY, prophylaxis is not warranted for most patients with a 3-month VTE risk below 1.0%. © 2017 International Society on Thrombosis and Haemostasis.

  14. Accelerometer thresholds: Accounting for body mass reduces discrepancies between measures of physical activity for individuals with overweight and obesity.

    PubMed

    Raiber, Lilian; Christensen, Rebecca A G; Jamnik, Veronica K; Kuk, Jennifer L

    2017-01-01

    The objective of this study was to explore whether accelerometer thresholds that are adjusted to account for differences in body mass influence discrepancies between self-report and accelerometer-measured physical activity (PA) volume for individuals with overweight and obesity. We analyzed 6164 adults from the National Health and Nutrition Examination Survey between 2003-2006. Established accelerometer thresholds were adjusted to account for differences in body mass to produce a similar energy expenditure (EE) rate as individuals with normal weight. Moderate-, vigorous-, and moderate- to vigorous-intensity PA (MVPA) durations were measured using established and adjusted accelerometer thresholds and compared with self-report. Durations of self-report were longer than accelerometer-measured MVPA using established thresholds (normal weight: 57.8 ± 2.4 vs 9.0 ± 0.5 min/day, overweight: 56.1 ± 2.7 vs 7.4 ± 0.5 min/day, and obesity: 46.5 ± 2.2 vs 3.7 ± 0.3 min/day). Durations of subjective and objective PA were negatively associated with body mass index (BMI) (P < 0.05). Using adjusted thresholds increased MVPA durations, and reduced discrepancies between accelerometer and self-report measures for overweight and obese groups by 6.0 ± 0.3 min/day and 17.7 ± 0.8 min/day, respectively (P < 0.05). Using accelerometer thresholds that represent equal EE rates across BMI categories reduced the discrepancies between durations of subjective and objective PA for overweight and obese groups. However, accelerometer-measured PA generally remained shorter than durations of self-report within all BMI categories. Further research may be necessary to improve analytical approaches when using objective measures of PA for individuals with overweight or obesity.
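
    One plausible way to implement such an adjustment, sketched below, is to scale an established count cut-point inversely with body mass so that the implied absolute energy-expenditure rate matches a normal-weight reference; both the scaling rule and the 2020 counts/min cut-point are assumptions made for illustration, not necessarily the adjustment used in the study.

    ```python
    REFERENCE_MASS_KG = 70.0            # assumed normal-weight reference mass
    ESTABLISHED_MVPA_CUTPOINT = 2020    # counts/min; a commonly cited value, used here as a placeholder

    def adjusted_cutpoint(body_mass_kg,
                          established=ESTABLISHED_MVPA_CUTPOINT,
                          reference=REFERENCE_MASS_KG):
        """Scale the cut-point so the implied kcal/min rate matches the reference mass."""
        return established * reference / body_mass_kg

    print(round(adjusted_cutpoint(95.0)))   # lower MVPA cut-point for a heavier individual
    ```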

  15. Effects of self-generated noise on estimates of detection threshold in quiet for school-age children and adults

    PubMed Central

    Buss, Emily; Porter, Heather L.; Leibold, Lori J.; Grose, John H.; Hall, Joseph W.

    2016-01-01

    Objectives Detection thresholds in quiet become adult-like earlier in childhood for high than low frequencies. When adults listen for sounds near threshold, they tend to engage in behaviors that reduce physiologic noise (e.g., quiet breathing), which is predominantly low frequency. Children may not suppress self-generated noise to the same extent as adults, such that low-frequency self-generated noise elevates thresholds in the associated frequency regions. This possibility was evaluated by measuring noise levels in the ear canal simultaneous with adaptive threshold estimation. Design Listeners were normal-hearing children (4.3-16.0 yrs) and adults. Detection thresholds were measured adaptively for 250-, 1000- and 4000-Hz pure tones using a three-alternative forced-choice procedure. Recordings of noise in the ear canal were made while the listeners performed this task, with the earphone and microphone routed through a single foam insert. Levels of self-generated noise were computed in octave-wide bands. Age effects were evaluated for four groups: 4- to 6-year-olds, 7- to 10-year-olds, 11- to 16-year-olds, and adults. Results Consistent with previous data, the effect of child age on thresholds was robust at 250 Hz and fell off at higher frequencies; thresholds of even the youngest listeners were similar to adults’ at 4000 Hz. Self-generated noise had a similar low-pass spectral shape for all age groups, although the magnitude of self-generated noise was higher in younger listeners. If self-generated noise impairs detection, then noise levels should be higher for trials associated with the wrong answer than the right answer. This association was observed for all listener groups at the 250-Hz signal frequency. For adults and older children, this association was limited to the noise band centered on the 250-Hz signal. For the two younger groups of children, this association was strongest at the signal frequency, but extended to bands spectrally remote from the 250-Hz signal. For the 1000-Hz signal frequency, there was a broadly tuned association between noise and response only for the two younger groups of children. For the 4000-Hz signal frequency, only the youngest group of children demonstrated an association between responses and noise levels, and this association was particularly pronounced for bands below the signal frequency. Conclusions These results provide evidence that self-generated noise plays a role in the prolonged development of low-frequency detection thresholds in quiet. Some aspects of the results are consistent with the possibility that self-generated noise elevates thresholds via energetic masking, particularly at 250 Hz. The association between behavioral responses and noise spectrally remote from the signal frequency is also consistent with the idea that self-generated noise may also reflect contributions of more central factors (e.g., inattention to the task). Evaluation of self-generated noise could improve diagnosis of minimal or mild hearing loss. PMID:27438873

  16. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in radio observation systems that use SAR, infra-red, lidar and television cameras. An intense non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images (blobs, blurred boundaries); in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used, open-ended at one side, so that the ends of line segments of unknown length can be extracted. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results; it includes small-scale objects of different shape, size and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the effectiveness of extraction for these fragments. The new method for adaptive threshold setting and control maximises this effectiveness of extraction. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
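
    A minimal sketch of that adaptive rule is given below, taking "isolated fragment" to mean a connected component of at most `max_size` pixels (an assumption for illustration): each candidate threshold is scored by the fraction of above-threshold pixels falling into such fragments, and the highest-scoring threshold is kept.

    ```python
    import numpy as np
    from scipy import ndimage

    def extraction_effectiveness(image, threshold, max_size=50):
        """Fraction of above-threshold pixels that belong to small isolated fragments."""
        binary = image > threshold
        total = binary.sum()
        if total == 0:
            return 0.0
        labels, n = ndimage.label(binary)
        sizes = np.asarray(ndimage.sum(binary, labels, index=np.arange(1, n + 1)))
        return sizes[sizes <= max_size].sum() / total

    def adaptive_threshold(image, candidates):
        """Pick the candidate threshold that maximises the extraction effectiveness."""
        return max(candidates, key=lambda t: extraction_effectiveness(image, t))
    ```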

  17. Comparison of algorithms of testing for use in automated evaluation of sensation.

    PubMed

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic studies and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second and with null stimuli interspersed (Békésy testing with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.

  18. The Impact of Clinical History on the Threshold Estimation of Auditory Brainstem Response Results for Infants

    ERIC Educational Resources Information Center

    Zaitoun, Maha; Cumming, Steven; Purcell, Alison; O'Brien, Katie

    2017-01-01

    Purpose: This study assesses the impact of patient clinical history on audiologists' performance when interpreting auditory brainstem response (ABR) results. Method: Fourteen audiologists' accuracy in estimating hearing threshold for 16 infants through interpretation of ABR traces was compared on 2 occasions at least 5 months apart. On the 1st…

  19. 75 FR 42835 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... estimated cost of the case exceeds the adjusted outlier threshold. We calculate the adjusted outlier... to 80 percent of the difference between the estimated cost of the case and the outlier threshold. In... Federal Prospective Payment Rates VI. Update to Payments for High-Cost Outliers under the IRF PPS A...

  20. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    USDA-ARS?s Scientific Manuscript database

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  1. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
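
    For reference, the hard and soft thresholding rules that the hybrid rule generalizes are shown below, together with a "firm"-style interpolation included purely as an illustrative stand-in; the paper's actual hybrid rule follows from the LAZE prior and is not reproduced here.

    ```python
    import numpy as np

    def hard_threshold(x, t):
        """Keep coefficients above t unchanged, zero the rest."""
        return np.where(np.abs(x) > t, x, 0.0)

    def soft_threshold(x, t):
        """Shrink all coefficients toward zero by t."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def firm_threshold(x, t1, t2):
        """Acts like soft thresholding near t1 and like hard thresholding beyond t2 (t2 > t1)."""
        a = np.abs(x)
        shrunk = np.sign(x) * t2 * (a - t1) / (t2 - t1)
        return np.where(a <= t1, 0.0, np.where(a >= t2, x, shrunk))
    ```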

  2. Determination of Foraging Thresholds and Effects of Application on Energetic Carrying Capacity for Waterfowl

    PubMed Central

    2015-01-01

    Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. We describe implications on habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and suggest scientists reevaluate assumptions of these models used to guide habitat conservation. PMID:25790255

  3. Intensity level for exercise training in fibromyalgia by using mathematical models.

    PubMed

    Lemos, Maria Carolina D; Valim, Valéria; Zandonade, Eliana; Natour, Jamil

    2010-03-22

    It has not been assessed before whether mathematical models described in the literature for prescriptions of exercise can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate to control exercise intensity. A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas to maximum heart rate ("220 minus age" and "208 minus 0.7 x age") were correlated with achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for correlations and residual analysis for the adequacy of the models. Age-predicted HRMax and HRAT formulas had a good correlation with achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription in the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercises in the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome and these values are higher and must be corrected for trained patients.
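
    The age-predicted formulas and training-zone percentages above can be applied as in the short sketch below; the resting heart rate is an example input, not a value from the study.

    ```python
    def hr_max_220(age):
        return 220 - age

    def hr_max_208(age):
        return 208 - 0.7 * age

    def karvonen(hr_max, hr_rest, fraction):
        """Karvonen formula: target HR = HRrest + fraction * (HRmax - HRrest)."""
        return hr_rest + fraction * (hr_max - hr_rest)

    age, hr_rest = 45, 72
    hr_max = hr_max_208(age)
    low, high = karvonen(hr_max, hr_rest, 0.52), karvonen(hr_max, hr_rest, 0.60)
    print(f"Anaerobic-threshold training zone (52-60% HRR): {low:.0f}-{high:.0f} bpm")
    ```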

  4. Intensity level for exercise training in fibromyalgia by using mathematical models

    PubMed Central

    2010-01-01

    Background It has not been assessed before whether mathematical models described in the literature for prescriptions of exercise can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate to control exercise intensity. Methods A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas to maximum heart rate ("220 minus age" and "208 minus 0.7 × age") were correlated with achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for correlations and residual analysis for the adequacy of the models. Results Age-predicted HRMax and HRAT formulas had a good correlation with achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription in the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Conclusion Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercises in the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome and these values are higher and must be corrected for trained patients. PMID:20307323

  5. Identification of a Hemolysis Threshold That Increases Plasma and Serum Zinc Concentration.

    PubMed

    Killilea, David W; Rohner, Fabian; Ghosh, Shibani; Otoo, Gloria E; Smith, Lauren; Siekmann, Jonathan H; King, Janet C

    2017-06-01

    Background: Plasma or serum zinc concentration (PZC or SZC) is the primary measure of zinc status, but accurate sampling requires controlling for hemolysis to prevent leakage of zinc from erythrocytes. It is not established how much hemolysis can occur without changing PZC/SZC concentrations. Objective: This study determines a guideline for the level of hemolysis that can significantly elevate PZC/SZC. Methods: The effect of hemolysis on PZC/SZC was estimated by using standard hematologic variables and mineral content. The calculated hemolysis threshold was then compared with results from an in vitro study and a population survey. Hemolysis was assessed by hemoglobin and iron concentrations, direct spectrophotometry, and visual assessment of the plasma or serum. Zinc and iron concentrations were determined by inductively coupled plasma spectrometry. Results: A 5% increase in PZC/SZC was calculated to result from the lysis of 1.15% of the erythrocytes in whole blood, corresponding to ∼1 g hemoglobin/L added into the plasma or serum. Similarly, the addition of simulated hemolysate to control plasma in vitro caused a 5% increase in PZC when hemoglobin concentrations reached 1.18 ± 0.10 g/L. In addition, serum samples from a population nutritional survey were scored for hemolysis and analyzed for changes in SZC; samples with hemolysis in the range of 1-2.5 g hemoglobin/L showed an estimated increase in SZC of 6% compared with nonhemolyzed samples. Each approach indicated that a 5% increase in PZC/SZC occurs at ∼1 g hemoglobin/L in plasma or serum. This concentration of hemoglobin can be readily identified directly by chemical hemoglobin assays or indirectly by direct spectrophotometry or matching to a color scale. Conclusions: A threshold of 1 g hemoglobin/L is recommended for PZC/SZC measurements to avoid increases in zinc caused by hemolysis. The use of this threshold may improve zinc assessment for monitoring zinc status and nutritional interventions.

  6. Identification of a Hemolysis Threshold That Increases Plasma and Serum Zinc Concentration123

    PubMed Central

    Otoo, Gloria E; Smith, Lauren; Siekmann, Jonathan H

    2017-01-01

    Background: Plasma or serum zinc concentration (PZC or SZC) is the primary measure of zinc status, but accurate sampling requires controlling for hemolysis to prevent leakage of zinc from erythrocytes. It is not established how much hemolysis can occur without changing PZC/SZC concentrations. Objective: This study determines a guideline for the level of hemolysis that can significantly elevate PZC/SZC. Methods: The effect of hemolysis on PZC/SZC was estimated by using standard hematologic variables and mineral content. The calculated hemolysis threshold was then compared with results from an in vitro study and a population survey. Hemolysis was assessed by hemoglobin and iron concentrations, direct spectrophotometry, and visual assessment of the plasma or serum. Zinc and iron concentrations were determined by inductively coupled plasma spectrometry. Results: A 5% increase in PZC/SZC was calculated to result from the lysis of 1.15% of the erythrocytes in whole blood, corresponding to ∼1 g hemoglobin/L added into the plasma or serum. Similarly, the addition of simulated hemolysate to control plasma in vitro caused a 5% increase in PZC when hemoglobin concentrations reached 1.18 ± 0.10 g/L. In addition, serum samples from a population nutritional survey were scored for hemolysis and analyzed for changes in SZC; samples with hemolysis in the range of 1–2.5 g hemoglobin/L showed an estimated increase in SZC of 6% compared with nonhemolyzed samples. Each approach indicated that a 5% increase in PZC/SZC occurs at ∼1 g hemoglobin/L in plasma or serum. This concentration of hemoglobin can be readily identified directly by chemical hemoglobin assays or indirectly by direct spectrophotometry or matching to a color scale. Conclusions: A threshold of 1 g hemoglobin/L is recommended for PZC/SZC measurements to avoid increases in zinc caused by hemolysis. The use of this threshold may improve zinc assessment for monitoring zinc status and nutritional interventions. PMID:28490675

  7. Assessing the nutrient intake of a low-carbohydrate, high-fat (LCHF) diet: a hypothetical case study design

    PubMed Central

    Zinn, Caryn; Rush, Amy; Johnson, Rebecca

    2018-01-01

    Objective The low-carbohydrate, high-fat (LCHF) diet is becoming increasingly employed in clinical dietetic practice as a means to manage many health-related conditions. Yet, it continues to remain contentious in nutrition circles due to a belief that the diet is devoid of nutrients and concern around its saturated fat content. This work aimed to assess the micronutrient intake of the LCHF diet under two conditions of saturated fat thresholds. Design In this descriptive study, two LCHF meal plans were designed for two hypothetical cases representing the average Australian male and female weight-stable adult. National documented heights, a body mass index of 22.5 to establish weight and a 1.6 activity factor were used to estimate total energy intake using the Schofield equation. Carbohydrate was limited to <130 g, protein was set at 15%–25% of total energy and fat supplied the remaining calories. One version of the diet aligned with the national saturated fat guideline threshold of <10% of total energy and the other included saturated fat ad libitum. Primary outcomes The primary outcomes included all micronutrients, which were assessed using FoodWorks dietary analysis software against national Australian/New Zealand nutrient reference value (NRV) thresholds. Results All of the meal plans exceeded the minimum NRV thresholds, apart from iron in the female meal plans, which achieved 86%–98% of the threshold. Saturated fat intake was logistically unable to be reduced below the 10% threshold for the male plan but exceeded the threshold by 2 g (0.6%). Conclusion Despite macronutrient proportions not aligning with current national dietary guidelines, a well-planned LCHF meal plan can be considered micronutrient replete. This is an important finding for health professionals, consumers and critics of LCHF nutrition, as it dispels the myth that these diets are suboptimal in their micronutrient supply. As with any diet, for optimal nutrient achievement, meals need to be well formulated. PMID:29439004

  8. Gap Detection and Temporal Modulation Transfer Function as Behavioral Estimates of Auditory Temporal Acuity Using Band-Limited Stimuli in Young and Older Adults

    PubMed Central

    Shen, Yi

    2015-01-01

    Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722

  9. Gas composition sensing using carbon nanotube arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor)

    2008-01-01

    A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, ..., N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.

  10. Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses

    PubMed Central

    Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali

    2017-01-01

    Introduction: Mild Cognitive Impairment (MCI), a disorder of the elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). Temporal region is one of the initial areas, which gets impaired in the early stage of AD. Therefore, auditory cortical evoked potential could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of Auditory Steady-State Response (ASSR) to 40 Hz and 80 Hz were compared between Alzheimer Disease (AD), MCI, and control groups. A total of 42 patients (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: Significant differences in normal subjects were observed in estimated ASSR thresholds with 2 modulation rates in 3 frequencies in both ears. However, the difference was significant only in 500 Hz in the MCI group, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients with regard to the estimated ASSR thresholds with 2 modulation rates and 3 frequencies in both ears. A significant difference was observed between the normal and MCI groups at 2000 Hz, too. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in auditory cortex compared to that in normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates by ASSR test could be a potentially helpful test for detecting cognitive impairment. PMID:29158880

  11. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.

  12. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  13. Robust w-Estimators for Cryo-EM Class Means.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2016-02-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised because of the outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
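
    The flavour of such a threshold-free estimator can be conveyed with a generic iteratively re-weighted class mean; the Tukey-biweight weight function and the median-based scale below are illustrative choices, not the specific w-estimator of the paper.

    ```python
    import numpy as np

    def robust_class_mean(images, c=4.685, n_iter=10):
        """images: array of shape (n_images, height, width); outlier images get small weights."""
        mean = images.mean(axis=0)
        for _ in range(n_iter):
            resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
            scale = np.median(resid) + 1e-12
            u = resid / (c * scale)
            w = np.where(u < 1, (1 - u**2) ** 2, 0.0)   # Tukey biweight: zero weight for gross outliers
            mean = (w[:, None, None] * images).sum(axis=0) / (w.sum() + 1e-12)
        return mean
    ```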

  14. Auditory steady state response in sound field.

    PubMed

    Hernández-Pérez, H; Torres-Fortuny, A

    2013-02-01

    Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.

  15. Cadence (steps/min) and intensity during ambulation in 6-20 year olds: the CADENCE-kids study.

    PubMed

    Tudor-Locke, Catrine; Schuna, John M; Han, Ho; Aguiar, Elroy J; Larrivee, Sandra; Hsia, Daniel S; Ducharme, Scott W; Barreira, Tiago V; Johnson, William D

    2018-02-26

    Steps/day is widely utilized to estimate the total volume of ambulatory activity, but it does not directly reflect intensity, a central tenet of public health guidelines. Cadence (steps/min) represents an overlooked opportunity to describe the intensity of ambulatory activity. We sought to establish thresholds linking directly observed cadence with objectively measured intensity in 6-20 year olds. One hundred twenty participants completed multiple 5-min bouts on a treadmill, from 13.4 m/min (0.80 km/h) to 134.0 m/min (8.04 km/h). The protocol was terminated when participants naturally transitioned to running, or if they chose to not continue. Steps were visually counted and intensity was objectively measured using a portable metabolic system. Youth metabolic equivalents (METy) were calculated for 6-17 year olds, with moderate intensity defined as ≥4 and < 6 METy, and vigorous intensity as ≥6 METy. Traditional METs were calculated for 18-20 year olds, with moderate intensity defined as ≥3 and < 6 METs, and vigorous intensity defined as ≥6 METs. Optimal cadence thresholds for moderate and vigorous intensity were identified using segmented random coefficients models and receiver operating characteristic (ROC) curves. Participants were on average (± SD) aged 13.1 ± 4.3 years, weighed 55.8 ± 22.3 kg, and had a BMI z-score of 0.58 ± 1.21. Moderate intensity thresholds (from regression and ROC analyses) ranged from 128.4 steps/min among 6-8 year olds to 87.3 steps/min among 18-20 year olds. Comparable values for vigorous intensity ranged from 157.7 steps/min among 6-8 year olds to 119.3 steps/min among 18-20 year olds. Considering both regression and ROC approaches, heuristic cadence thresholds (i.e., evidence-based, practical, rounded) ranged from 125 to 90 steps/min for moderate intensity, and 155 to 125 steps/min for vigorous intensity, with higher cadences for younger age groups. Sensitivities and specificities for these heuristic thresholds ranged from 77.8 to 99.0%, indicating fair to excellent classification accuracy. These heuristic cadence thresholds may be used to prescribe physical activity intensity in public health recommendations. In the research and clinical context, these heuristic cadence thresholds have apparent value for accelerometer-based analytical approaches to determine the intensity of ambulatory activity.

  16. Psychophysical measurements in children: challenges, pitfalls, and considerations.

    PubMed

    Witton, Caroline; Talcott, Joel B; Henning, G Bruce

    2017-01-01

    Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
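
    The effect of terminating an adaptive track after too few reversals can be reproduced with a small simulation; the 2-down/1-up staircase, the simulated observer and the parameter values below are illustrative choices, not those of the cited study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def p_correct(level, threshold=0.0, slope=1.0, lapse=0.02, chance=0.5):
        """Simulated observer: logistic psychometric function with a small lapse rate."""
        p = chance + (1 - chance) / (1 + np.exp(-slope * (level - threshold)))
        return (1 - lapse) * p + lapse * chance

    def run_staircase(n_trials=80, start=8.0, step=1.0):
        """2-down/1-up staircase; returns the stimulus levels at which reversals occurred."""
        level, streak, reversals, last_dir = start, 0, [], None
        for _ in range(n_trials):
            correct = rng.random() < p_correct(level)
            streak = streak + 1 if correct else 0
            direction = -1 if streak == 2 else (+1 if not correct else 0)
            if direction:
                if last_dir is not None and direction != last_dir:
                    reversals.append(level)
                last_dir = direction
                level += direction * step
                if direction == -1:
                    streak = 0
        return reversals

    estimates = {4: [], 12: []}
    for _ in range(500):
        rev = run_staircase()
        for n in estimates:
            if len(rev) >= n:
                estimates[n].append(np.mean(rev[:n]))
    for n, est in estimates.items():
        print(f"first {n} reversals: mean threshold estimate {np.mean(est):.2f}")
    ```

    With these settings the short tracks stay close to the suprathreshold starting level, over-estimating threshold relative to the longer tracks, which is the bias described above.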

  17. An Objective Rationale for the Choice of Regularisation Parameter with Application to Global Multiple-Frequency S-Wave Tomography

    NASA Astrophysics Data System (ADS)

    Zaroli, C.; Sambridge, M.; Leveque, J. J.; Debayle, E.; Nolet, G.

    2014-12-01

    In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global Multiple-Frequency Tomography, using a data set of 287078 S-wave delay-times measured in five frequency bands (10, 15, 22, 34, 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model infinity-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models. The obtained tomographic model is displayed in the mid lower-mantle (660-1910 km depth), and is shown to be mostly compatible with three other recent global shear-velocity models, while significant differences can be noticed. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of the Earth's mantle, using as much as possible of the relevant structural information in the data. This work was recently published: Zaroli, C., Sambridge, M., Lévêque, J.-J., Debayle, E., and Nolet, G. (2013) - Solid Earth, 4, 357-371, doi:10.5194/se-4-357-2013

  18. Milder form of heat-related symptoms and thermal sensation: a study in a Mediterranean climate

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina G.; Lykoudis, Spyridon P.; Nikolopoulos, Georgios K.

    2016-06-01

    Mild heat-related health effects and their potential association with meteorological and personal parameters in relation to subjective and objective thermal sensation were investigated. Micrometeorological measurements and questionnaire surveys were conducted in an urban Mediterranean environment during a warm, cool, and a transitional season. The participants were asked to indicate their thermal sensation based on a seven-point scale and report whether they were experiencing any of the following symptoms: headache, dizziness, breathing difficulties, and exhaustion. Two thermal indices, Actual Sensation Vote (ASV) and Universal Thermal Climate Index (UTCI), were estimated in order to obtain an objective measure of individuals' thermal sensation. Binary logistic regression was applied to identify risk parameters while cluster analysis was used to determine thresholds of air temperature, ASV and UTCI related to health effects. Exhaustion was the most frequent symptom reported by the interviewees. Females and smokers were more likely to report heat-related symptoms than males and nonsmokers. Based on cluster analysis, 35 °C could be a cutoff point for the manifestation of heat-related symptoms during summer. The threshold for ASV was 0.85 corresponding to "warm" thermal sensation and for UTCI was about 30.85 °C corresponding to "moderate heat stress" according to the Mediterranean assessment scale.

  19. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytically standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
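
    In the normal-distribution setting described above, the cost-minimising cutoff can be found numerically as sketched below; the prevalence, decision costs and marker distributions are illustrative values, and sampling uncertainty (the second ingredient of the paper's cost function) is not modelled here.

    ```python
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def expected_cost(c, mu0, sd0, mu1, sd1, prev, cost_fp=1.0, cost_fn=4.0):
        """Expected decision cost for cutoff c with Gaussian marker distributions."""
        fp = 1 - norm.cdf(c, mu0, sd0)   # non-diseased classified as diseased
        fn = norm.cdf(c, mu1, sd1)       # diseased classified as non-diseased
        return (1 - prev) * cost_fp * fp + prev * cost_fn * fn

    res = minimize_scalar(expected_cost, bounds=(0, 10), method="bounded",
                          args=(3.0, 1.0, 6.0, 1.5, 0.2))
    print(f"Cost-minimising threshold: {res.x:.2f}")
    ```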

  20. Development of a Novel, Objective Measure of Health Care–Related Financial Burden for U.S. Families with Children

    PubMed Central

    Wisk, Lauren E; Gangnon, Ronald; Vanness, David J; Galbraith, Alison A; Mullahy, John; Witt, Whitney P

    2014-01-01

    Objective To develop and validate a theoretically based and empirically driven objective measure of financial burden for U.S. families with children. Data Sources The measure was developed using 149,021 families with children from the National Health Interview Survey, and it was validated using 18,488 families with children from the Medical Expenditure Panel Survey. Study Design We estimated the marginal probability of unmet health care need due to cost using a bivariate tensor product spline for family income and out-of-pocket health care costs (OOPC; e.g., deductibles, copayments), while adjusting for confounders. Recursive partitioning was performed on these probabilities, as a function of income and OOPC, to establish thresholds demarcating levels of predicted risk. Principal Findings We successfully generated a novel measure of financial burden with four categories that were associated with unmet need (vs. low burden: midlow OR: 1.93, 95 percent CI: 1.78–2.09; midhigh OR: 2.78, 95 percent CI: 2.49–3.10; high OR: 4.38, 95 percent CI: 3.99–4.80). The novel burden measure demonstrated significantly better model fit and less underestimation of financial burden compared to an existing measure (OOPC/income ≥10 percent). Conclusion The newly developed measure of financial burden establishes thresholds based on different combinations of family income and OOPC that can be applied in future studies of health care utilization and expenditures and in policy development and evaluation. PMID:25328073

  1. Comparison of epicardial adipose tissue radiodensity threshold between contrast and non-contrast enhanced computed tomography scans: A cohort study of derivation and validation.

    PubMed

    Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig

    2018-05-11

    Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield unit (HU) cutpoints that equalize the two EAT volume estimates. The gold standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190 HU, -30 HU), (-190 HU, -15 HU), (-175 HU, -15 HU), and were analyzed by a semi-automated 3D Fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on the mean difference in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on the EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from the NC dataset standard by 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of (-190 HU, -3 HU) provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of (-190 HU, -30 HU). Copyright © 2018. Published by Elsevier B.V.
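
    The threshold correction can be illustrated in a few lines; the mean radiodensities below are invented, and the resulting window only mirrors the published (-190 HU, -3 HU) value because the assumed contrast-induced shift happens to be 27 HU.

```python
import numpy as np

# Hypothetical mean adipose radiodensities (HU) measured on paired scans.
nc_mean_hu = -96.0   # non-contrast
ce_mean_hu = -69.0   # contrast-enhanced

# Contrast shifts fat voxels toward higher HU; shift the upper limit of the
# standard NC window by the same amount so the CE window captures the same tissue.
delta = ce_mean_hu - nc_mean_hu          # ΔEATrd = CE radiodensity - NC radiodensity
nc_window = (-190.0, -30.0)
ce_window = (nc_window[0], nc_window[1] + delta)
print("corrected CE window:", ce_window)  # (-190.0, -3.0) for an assumed 27 HU shift
```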

  2. Estimation of frequency offset in mobile satellite modems

    NASA Technical Reports Server (NTRS)

    Cowley, W. G.; Rice, M.; Mclean, A. N.

    1993-01-01

    In mobilesat applications, frequency offset on the received signal must be estimated and removed prior to further modem processing. A straightforward method of estimating the carrier frequency offset is to raise the received MPSK signal to the M-th power and then estimate the location of the peak spectral component. An analysis of the lower signal-to-noise threshold of this method is carried out for BPSK signals. Predicted thresholds are compared to simulation results. It is shown how the method can be extended to pi/M MPSK signals. A real-time implementation of frequency offset estimation for the Australian mobile satellite system is described.
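
    A minimal sketch of the M-th power spectral method for a BPSK signal (M = 2); the sampling rate, offset, and noise level are arbitrary assumptions, and a real modem would of course operate on properly filtered baseband samples.

```python
import numpy as np

def estimate_freq_offset(rx, fs, M=2):
    """Estimate carrier frequency offset of an MPSK signal by raising it to
    the M-th power and locating the dominant spectral line (at M * offset)."""
    n = len(rx)
    spec = np.abs(np.fft.fftshift(np.fft.fft(rx**M, 4 * n)))   # zero-padded FFT
    freqs = np.fft.fftshift(np.fft.fftfreq(4 * n, d=1 / fs))
    return freqs[np.argmax(spec)] / M

# Illustrative BPSK example with a 500 Hz offset and additive complex noise.
fs, n, f_off = 48_000, 4096, 500.0
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], n)
t = np.arange(n) / fs
rx = symbols * np.exp(2j * np.pi * f_off * t) \
     + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print("estimated offset (Hz):", round(estimate_freq_offset(rx, fs, M=2), 1))
```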

  3. Probability modeling of high flow extremes in Yingluoxia watershed, the upper reaches of Heihe River basin

    NASA Astrophysics Data System (ADS)

    Li, Zhanling; Li, Zhanjie; Li, Chengcheng

    2014-05-01

    Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies in China concern basins in the humid and semi-humid south and east of the country, whereas for inland river basins, which occupy about 35% of the country's area, such work is limited, partly because of restricted data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reaches of the Heihe River basin, the second largest inland river basin in China, using the peaks over threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions of the POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the mean excess plot, the stability of model parameters, the return level plot and the inherent independence assumption of the POT series, an optimum threshold of 340 m³/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008, while the intensity of such extremes is comparatively increasing, especially for the higher return levels.
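
    The core POT/GPD step can be sketched with SciPy; the synthetic flow series, the 31-year record length, and the 340 m³/s threshold are used here only to illustrate fitting exceedances by maximum likelihood and converting them to a return level.

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical daily flows (m^3/s); in practice these would be the observed series.
rng = np.random.default_rng(2)
flow = rng.gamma(shape=2.0, scale=80.0, size=365 * 31)

threshold = 340.0                        # threshold chosen from diagnostics
exceedances = flow[flow > threshold] - threshold

# Fit the GPD to the exceedances by maximum likelihood (location fixed at 0).
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Return level for a T-year event: GPD quantile combined with the average
# number of exceedances per year (lambda).
lam = len(exceedances) / 31.0
T = 100.0
ret_level = threshold + genpareto.ppf(1 - 1 / (lam * T), shape, loc=0, scale=scale)
print("estimated 100-year flow:", round(ret_level, 1), "m^3/s")
```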

  4. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
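
    As an illustration of the simplest of the three procedures, the sketch below runs a basic up-down staircase against a simulated gap-detection listener; the psychometric-function parameters, step-size rule, and stopping criterion are assumptions, not the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def psychometric(gap_ms, threshold=6.0, slope=1.5, lapse=0.02):
    """Hypothetical probability of detecting a gap of a given duration (ms)."""
    p = 1 / (1 + np.exp(-(gap_ms - threshold) / slope))
    return (1 - lapse) * p + lapse * 0.5

# Simple 1-up/1-down staircase on gap duration (converges near the 50% point).
gap, step, direction, reversals = 12.0, 2.0, -1, []
while len(reversals) < 8:
    detected = rng.random() < psychometric(gap)
    new_direction = -1 if detected else 1     # go down after a hit, up after a miss
    if new_direction != direction:
        reversals.append(gap)
        step = max(step / 2, 0.5)             # shrink the step at each reversal
    direction = new_direction
    gap = max(gap + direction * step, 0.5)

print("threshold estimate (mean of last 6 reversals):",
      round(np.mean(reversals[-6:]), 2), "ms")
```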

  5. Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene G (Inventor)

    2011-01-01

    An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.

  6. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.

    PubMed

    Green, Adam W; Bailey, Larissa L

    2015-01-01

    Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvatica) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies.

  7. Secondary antiproton production in relativistic plasmas

    NASA Technical Reports Server (NTRS)

    Dermer, C. D.; Ramaty, R.

    1985-01-01

    The possibility is investigated that the reported excess low energy antiproton component of the cosmic radiation results from proton-proton (p-p) interactions in relativistic plasmas. Because of both target and projectile motion in such plasmas, the antiproton production threshold in the frame of the plasma is much lower than the threshold of antiproton production in cosmic ray interactions with ambient matter. The spectrum of the resultant antiprotons therefore extends to much lower energy than in the cosmic ray case. The antiproton spectrum is calculated for relativistic thermal plasmas and the spectrum is estimated for relativistic nonthermal plasmas. As possible production sites, matter accreting onto compact objects located in the galaxy is considered. Possible overproduction of gamma rays from associated neutral pion production can be avoided if the site is optically thick to the photons but not to the antiprotons. A possible scenario involves a sufficiently large photon density that the neutral pion gamma rays are absorbed by photon-photon pair production. Escape of the antiprotons to the interstellar medium can be mediated by antineutron production.

  8. Using instrumental (CIE and reflectance) measures to predict consumers' acceptance of beef colour.

    PubMed

    Holman, Benjamin W B; van de Ven, Remy J; Mao, Yanwei; Coombs, Cassius E O; Hopkins, David L

    2017-05-01

    We aimed to establish colorimetric thresholds based upon the capacity for instrumental measures to predict consumer satisfaction with beef colour. A web-based survey was used to distribute standardised photographs of beef M. longissimus lumborum with known colorimetrics (L*, a*, b*, hue, chroma, ratio of reflectance at 630 nm and 580 nm, and estimated deoxymyoglobin, oxymyoglobin and metmyoglobin concentrations) for scrutiny. Consumer demographics and perceived importance of colour to beef value were also evaluated. It was found that a* provided the most simple and robust prediction of beef colour acceptability. Beef colour was considered acceptable (with 95% acceptance) when a* values were equal to or above 14.5. Demographic effects on this threshold were negligible, but consumer nationality and gender did contribute to variation in the relative importance of colour to beef value. These results provide future beef colour studies with context to interpret objective colour measures in terms of consumer acceptance and market appeal. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  9. User guide for HCR Estimator 2.0: software to calculate cost and revenue thresholds for harvesting small-diameter ponderosa pine.

    Treesearch

    Dennis R. Becker; Debra Larson; Eini C. Lowell; Robert B. Rummer

    2008-01-01

    The HCR (Harvest Cost-Revenue) Estimator is engineering and financial analysis software used to evaluate stand-level financial thresholds for harvesting small-diameter ponderosa pine (Pinus ponderosa Dougl. ex Laws.) in the Southwest United States. The Windows-based program helps contractors and planners to identify costs associated with tree...

  10. Environmental Suitability of Vibrio Infections in a Warming Climate: An Early Warning System

    PubMed Central

    Trinanes, Joaquin; Lohr, Wolfgang; Sudre, Bertrand; Löfdahl, Margareta; Martinez-Urtaza, Jaime; Nichols, Gordon L.; Rocklöv, Joacim

    2017-01-01

    Background: Some Vibrio spp. are pathogenic and ubiquitous in marine waters with low to moderate salinity and thrive with elevated sea surface temperature (SST). Objectives: Our objective was to monitor and project the suitability of marine conditions for Vibrio infections under climate change scenarios. Methods: The European Centre for Disease Prevention and Control (ECDC) developed a platform (the ECDC Vibrio Map Viewer) to monitor the environmental suitability of coastal waters for Vibrio spp. using remotely sensed SST and salinity. A case-crossover study of Swedish cases was conducted to ascertain the relationship between SST and Vibrio infection through a conditional logistic regression. Climate change projections for Vibrio infections were developed for Representative Concentration Pathway (RCP) 4.5 and RCP 8.5. Results: The ECDC Vibrio Map Viewer detected environmentally suitable areas for Vibrio spp. in the Baltic Sea in July 2014 that were accompanied by a spike in cases and one death in Sweden. The estimated exposure–response relationship for Vibrio infections at a threshold of 16°C revealed a relative risk (RR)=1.14 (95% CI: 1.02, 1.27; p=0.024) for a lag of 2 wk; the estimated risk increased successively beyond this SST threshold. Climate change projections for SST under the RCP 4.5 and RCP 8.5 scenarios indicate a marked upward trend during the summer months and an increase in the relative risk of these infections in the coming decades. Conclusions: This platform can serve as an early warning system as the risk of further Vibrio infections increases in the 21st century due to climate change. https://doi.org/10.1289/EHP2198 PMID:29017986

  11. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    PubMed

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.

  12. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.

  13. Rainfall Threshold for Flash Flood Early Warning Based on Rational Equation: A Case Study of Zuojiao Watershed in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.

    2017-12-01

    Rainfall threshold plays an important role in flash flood warning. A simple and easy method, using the Rational Equation to calculate the rainfall threshold, was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of rainfall loss, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall was the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for watershed soil moisture. To demonstrate the method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method approximated the rainfall thresholds obtained from CFFSE and were in accordance with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provides a quick and convenient way for grassroots staff to calculate rainfall thresholds for flash flood warning and offers technical support for estimating rainfall thresholds.
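
    A compact sketch of the back-calculation, assuming the common form Q = 0.278·C·i·A of the Rational Equation (Q in m³/s, i in mm/h, A in km²); the runoff coefficient, critical flow, and loss terms below are invented for illustration.

```python
# Back-calculate a critical rainfall from the Rational Equation Q = 0.278*C*i*A.
C = 0.55          # runoff coefficient (assumed)
A = 42.0          # watershed area, km^2 (assumed)
Q_crit = 65.0     # critical flow at the outlet from channel geometry (Manning), m^3/s

i_crit = Q_crit / (0.278 * C * A)     # critical rainfall intensity, mm/h
net_rainfall = i_crit * 1.0           # net rainfall for a 1-hour design duration, mm

# Add back the losses so the warning threshold refers to observed rainfall.
losses = 3.0 + 2.0 + 8.0              # depression storage + interception + infiltration, mm
rainfall_threshold = net_rainfall + losses
print(f"1-hour rainfall threshold: {rainfall_threshold:.1f} mm")
```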

  14. Effects of acute hypoxia on the determination of anaerobic threshold using the heart rate-work rate relationships during incremental exercise tests.

    PubMed

    Ozcelik, O; Kelestimur, H

    2004-01-01

    Anaerobic threshold, which describes the onset of a systematic increase in blood lactate concentration, is a widely used concept in clinical and sports medicine. A deflection point in the heart rate-work rate relationship has been introduced to determine the anaerobic threshold non-invasively. However, some researchers have consistently reported a heart rate deflection at higher work rates, while others have not. The present study was designed to investigate whether the heart rate deflection point accurately predicts the anaerobic threshold under the condition of acute hypoxia. Eight untrained males performed two incremental exercise tests using an electromagnetically braked cycle ergometer: one breathing room air and one breathing 12% O2. The anaerobic threshold was estimated using the V-slope method and determined from the increase in blood lactate and the decrease in standard bicarbonate concentration. This threshold was also estimated from the heart rate-work rate relationship. Not all subjects exhibited a heart rate deflection: only two subjects in the control group and four subjects in the hypoxia group showed one. Additionally, the heart rate deflection point overestimated the anaerobic threshold. In conclusion, the heart rate deflection point was not an accurate predictor of the anaerobic threshold, and acute hypoxia did not systematically affect the heart rate-work rate relationship.

  15. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.

  16. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851

  17. Subjective versus objective evening chronotypes in bipolar disorder.

    PubMed

    Gershon, Anda; Kaufmann, Christopher N; Depp, Colin A; Miller, Shefali; Do, Dennis; Zeitzer, Jamie M; Ketter, Terence A

    2018-01-01

    Disturbed sleep timing is common in bipolar disorder (BD). However, most research is based upon self-reports. We examined relationships between subjective versus objective assessments of sleep timing in BD patients versus controls. We studied 61 individuals with bipolar I or II disorder and 61 healthy controls. Structured clinical interviews assessed psychiatric diagnoses, and clinician-administered scales assessed current mood symptom severity. For subjective chronotype, we used the Composite Scale of Morningness (CSM) questionnaire, using original and modified (1, ¾, ⅔, and ½ SD below mean CSM score) thresholds to define evening chronotype. Objective chronotype was calculated as the percentage of nights (50%, 66.7%, 75%, or 90% of all nights) with sleep interval midpoints at or before (non-evening chronotype) vs. after (evening chronotype) 04:15:00 (4:15:00a.m.), based on 25-50 days of continuous actigraph data. BD participants and controls differed significantly with respect to CSM mean scores and CSM evening chronotypes using modified, but not original, thresholds. Groups also differed significantly with respect to chronotype based on sleep interval midpoint means, and based on the threshold of 75% of sleep intervals with midpoints after 04:15:00. Subjective and objective chronotypes correlated significantly with one another. Twenty-one consecutive intervals were needed to yield an evening chronotype classification match of ≥ 95% with that made using the 75% of sleep intervals threshold. Limited sample size/generalizability. Subjective and objective chronotype measurements were correlated with one another in participants with BD. Using population-specific thresholds, participants with BD had a later chronotype than controls. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Influence of Injury Risk Thresholds on the Performance of an Algorithm to Predict Crashes with Serious Injuries

    PubMed Central

    Bahouth, George; Digges, Kennerly; Schulman, Carl

    2012-01-01

    This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
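
    How sensitivity and positive predictive value respond to the choice of risk threshold can be illustrated with simulated data; the risk scores below are synthetic and only stand in for the NASS/CDS-based estimates.

```python
import numpy as np

# Hypothetical predicted injury-risk probabilities and true outcomes
# (1 = crash with serious injury).
rng = np.random.default_rng(4)
risk = rng.beta(1.2, 8.0, 5000)
serious = (rng.random(5000) < risk).astype(int)

def screen(threshold):
    """Classify crashes as 'serious' when predicted risk meets the threshold."""
    flagged = risk >= threshold
    tp = np.sum(flagged & (serious == 1))
    sensitivity = tp / serious.sum()
    ppv = tp / flagged.sum() if flagged.any() else float("nan")
    return sensitivity, ppv

for t in (0.05, 0.10, 0.20):
    sens, ppv = screen(t)
    print(f"threshold {t:.2f}: sensitivity = {sens:.2f}, PPV = {ppv:.2f}")
```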

  19. Estimation of Second Primary Cancer Risk After Treatment with Radioactive Iodine for Differentiated Thyroid Carcinoma.

    PubMed

    Corrêa, Nilton Lavatori; de Sá, Lidia Vasconcellos; de Mello, Rossana Corbo Ramalho

    2017-02-01

    An increase in the incidence of second primary cancers is the late effect of greatest concern that could occur in differentiated thyroid carcinoma (DTC) patients treated with radioactive iodine (RAI). The decision to treat a patient with RAI should therefore incorporate a careful risk-benefit analysis. The objective of this work was to adapt the risk-estimation models developed by the Biological Effects of Ionizing Radiation Committee to local epidemiological characteristics in order to assess the carcinogenesis risk from radiation in a population of Brazilian DTC patients treated with RAI. Absorbed radiation doses in critical organs were also estimated to determine whether they exceeded the thresholds for deterministic effects. A total of 416 DTC patients treated with RAI were retrospectively studied. Four organs were selected for absorbed dose estimation and subsequent calculation of carcinogenic risk: the kidney, stomach, salivary glands, and bone marrow. Absorbed doses were calculated by dose factors (absorbed dose per unit activity administered) previously established and based on standard human models. The lifetime attributable risk (LAR) of incidence of cancer as a function of age, sex, and organ-specific dose was estimated, relating it to the activity of RAI administered in the initial treatment. The salivary glands received the greatest absorbed doses of radiation, followed by the stomach, kidney, and bone marrow. None of these, however, surpassed the threshold for deterministic effects for a single administration of RAI. Younger patients received the same level of absorbed dose in the critical organs as older patients did. The lifetime attributable risk for stomach cancer incidence was by far the highest, followed in descending order by salivary-gland cancer, leukemia, and kidney cancer. RAI in a single administration is safe in terms of deterministic effects because even high-administered activities do not result in absorbed doses that exceed the thresholds for significant tissue reactions. The Biological Effects of Ionizing Radiation Committee mathematical models are a practical method of quantifying the risks of a second primary cancer, demonstrating a marked decrease in risk for younger patients with the administration of lower RAI activities and suggesting that only the smallest activities necessary to promote an effective ablation should be administered in low-risk DTC patients.

  20. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    PubMed

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, generating a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group without hearing loss across all studied frequencies. The cortical electrophysiological thresholds obtained with the use of an automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (-1.96 dB) when compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.

  2. Comparison of a field-based test to estimate functional threshold power and power output at lactate threshold.

    PubMed

    Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C

    2012-02-01

    It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in power output (PO) equivalent to PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using 2 methods: LT(Δ1), a 1 mmol·L(-1) or greater rise in blood lactate in response to an increase in workload, and LT(4.0), a blood lactate of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used but is not equivalent for all methods of LT determination including LT(Δ1).

  3. Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers

    PubMed Central

    Situ, Ping; Simpson, Trefford; Begley, Carolyn

    2016-01-01

    Purpose To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (Baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus were estimated using a 1-100 scale (1 very mild, 100 very strong). Results In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001, post hoc asymptomatic vs. symptomatic group, all p’s < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p=0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p>0.05) at detection threshold. Conclusions Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090

  4. Digital camera auto white balance based on color temperature estimation clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong

    2010-11-01

    Auto white balance (AWB) is an important technique for digital cameras. The human vision system has the ability to recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard sunlight. Recorded images or video clips, however, can only record the information incident on the sensor, so they may appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that decide whether a light source belongs to that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption method is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, the remaining blocks take part in a majority vote, and the color temperature with the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: the color casts are removed and the final images look natural.
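
    The block-based voting scheme can be sketched as below; the red/blue ratio used as a stand-in for colour temperature, the illumination list, and the tolerances are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Illustrative lighting list: nominal colour temperatures with a tolerance on a
# simple red/blue proxy; the real method maps per-block colour casts to Kelvin.
LIGHTS = {"incandescent_2800K": (1.6, 0.35),
          "fluorescent_4000K": (1.1, 0.25),
          "daylight_6500K":    (0.8, 0.20)}

def classify_blocks(img, n=8):
    """Split the image into n*n blocks, apply gray-world per block, and vote."""
    h, w, _ = img.shape
    votes = []
    for by in np.array_split(np.arange(h), n):
        for bx in np.array_split(np.arange(w), n):
            block = img[np.ix_(by, bx)]
            r, g, b = block.reshape(-1, 3).mean(axis=0)
            ratio = r / max(b, 1e-6)             # proxy for the block's colour cast
            for name, (centre, tol) in LIGHTS.items():
                if abs(ratio - centre) <= tol:   # within threshold for this light
                    votes.append(name)
                    break                        # blocks matching nothing are discarded
    if not votes:
        return None
    names, counts = np.unique(votes, return_counts=True)
    return names[np.argmax(counts)]              # majority vote = estimated light source

img = np.clip(np.random.default_rng(5).normal(0.5, 0.1, (240, 320, 3)), 0, 1)
print("estimated illumination:", classify_blocks(img))
```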

  5. INTEGRATED AND FIBER OPTICS: Threshold of photoinduced conversion of the polarization of radiation in lithium niobate optical waveguides

    NASA Astrophysics Data System (ADS)

    Kazanskiĭ, P. G.

    1989-02-01

    A threshold of photoinduced conversion of an ordinary wave into an extraordinary one was discovered for lithium niobate optical waveguides. The threshold intensity of the radiation was determined for waveguides prepared under different conditions. The experimental results were compared with theoretical estimates.

  6. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    PubMed

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

    The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  7. Seeing and Hearing a Word: Combining Eye and Ear Is More Efficient than Combining the Parts of a Word

    PubMed Central

    Dubois, Matthieu; Poeppel, David; Pelli, Denis G.

    2013-01-01

    To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word’s image and sound, and combine them efficiently. The brain’s machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation. PMID:23734220

  8. Seeing and hearing a word: combining eye and ear is more efficient than combining the parts of a word.

    PubMed

    Dubois, Matthieu; Poeppel, David; Pelli, Denis G

    2013-01-01

    To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word's image and sound, and combine them efficiently. The brain's machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation.

  9. [The hearing function and vegetative reactions in airport technicians using individual hearing protectors].

    PubMed

    Chistov, S D; Soldatov, S K; Zinkin, V N; Poliakov, N M

    2013-01-01

    The objective of the present study was to evaluate the hearing function of airport technical personnel and to estimate the effectiveness of multicomponent anti-noise hearing protectors used by specialists engaged in aircraft maintenance. Tonal threshold audiometry was carried out before and after a work shift. The extra-aural effect of noise was assessed from the characteristics of cardiac rhythm variability. The study included two groups of subjects: in one (n=8), ordinary flight headsets were used (control); in the other (n=16), protection was ensured with multi-insert hearing protectors. The initial hearing thresholds were found to be increased by up to 70 and 60 dB at the frequencies of 4 and 8 kHz, respectively. Regression analysis revealed a relationship between these parameters and the duration of aerodrome work experience. Temporary threshold shifts were observed only in the control group. An increase in the tone of the sympathetic nervous system was observed in the control subjects but was absent in the study group. It is concluded that the multicomponent hearing protectors employed in the present study are highly efficacious anti-noise devices. The mechanisms of noise-induced hearing loss are discussed.

  10. Saltmarsh creek bank stability: Biostabilisation and consolidation with depth

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Thompson, C. E. L.; Collins, M. B.

    2012-03-01

    The stability of cohesive sediments of a saltmarsh in Southern England was measured in the field and the laboratory using a Cohesive Strength Meter (CSM) and a shear vane apparatus. Cores and sediment samples were collected from two tidal creek banks, covered by Atriplex portulacoides (Sea Purslane) and Juncus maritimus (Sea Rush), respectively. The objectives of the study were to examine the variation of sediment stability throughout banks with cantilevers present and investigate the influence of roots and downcore consolidation on bank stability. Data on erosion threshold and shear strength were interpreted with reference to bank depth, sediment properties and biological influences. The higher average erosion threshold was from the Sea Purslane bank whilst the Sea Rush bank showed higher average vane shear strength. The vertical variation in core sediment stability was mainly affected by roots and downcore consolidation with depth. The data obtained from the bank faces revealed that vertical variations in both erosion threshold and vane shear strength were affected primarily by roots and algae. A quantitative estimate of the relative contributions of roots and downcore consolidation to bank sediment stability was undertaken using the bank stability data and sediment density data. This showed that roots contributed more to the Sea Purslane bank stability than downcore consolidation, whilst downcore consolidation has more pronounced effects on the Sea Rush bank stability.

  11. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method

    PubMed Central

    Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.

    2016-01-01

    Objective/Background Circadian rhythm sleep-wake disorders often manifest during the adolescent years. Measurement of circadian phase such as the Dim Light Melatonin Onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. Patients/Methods A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 mins (13 samples) and the other from samples taken every 60 mins (7 samples). Three standard thresholds (first 3 melatonin values mean + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling rate DLMOs was defined as ± 1 h. Results and Conclusions Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ± 1 h of DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227
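
    Computing a DLMO from an absolute threshold reduces to finding the first threshold crossing in the sampled profile; the melatonin values below are invented, and linear interpolation between the bracketing samples is one common convention.

```python
import numpy as np

# Hypothetical hourly melatonin profile (pg/mL) over the 6-h sampling window,
# with times expressed relative to bedtime (-5 h ... +1 h).
hours = np.array([-5, -4, -3, -2, -1, 0, 1], dtype=float)
melatonin = np.array([1.1, 1.4, 2.0, 3.6, 7.9, 14.2, 18.5])

def dlmo(hours, melatonin, threshold=4.0):
    """Time at which melatonin first reaches the absolute threshold,
    linearly interpolated between the bracketing samples."""
    above = np.where(melatonin >= threshold)[0]
    if len(above) == 0 or above[0] == 0:
        return None   # threshold never crossed, or already exceeded at the first sample
    i = above[0]
    frac = (threshold - melatonin[i - 1]) / (melatonin[i] - melatonin[i - 1])
    return hours[i - 1] + frac * (hours[i] - hours[i - 1])

print("DLMO (hours relative to bedtime):", round(dlmo(hours, melatonin), 2))
```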

  12. The behavioral economics of drug self-administration: A review and new analytical approach for within-session procedures

    PubMed Central

    Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary

    2012-01-01

    Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
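
    A sketch of fitting the exponential demand equation (Hursh-Silberberg form, log10 Q = log10 Q0 + k(e^(-α·Q0·C) - 1)) to single-session data; the price/consumption points, the fixed k, and the numerical Pmax search are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative price (responses per mg cocaine) and consumption (mg) points from
# a single threshold-procedure session, after removing unstable-concentration points.
price = np.array([0.3, 0.6, 1.2, 2.4, 4.8, 9.6, 19.2])
consumption = np.array([1.05, 1.02, 0.98, 0.90, 0.70, 0.42, 0.15])

K = 2.0  # range constant (log10 units), held fixed for this illustration

def exponential_demand(c, q0, alpha):
    """Exponential demand model expressed in log10 consumption."""
    return np.log10(q0) + K * (np.exp(-alpha * q0 * c) - 1)

(q0, alpha), _ = curve_fit(exponential_demand, price, np.log10(consumption), p0=(1.0, 0.01))

# Pmax found numerically as the price maximizing response output (price * consumption).
grid = np.logspace(-1, 2, 500)
output = grid * 10 ** exponential_demand(grid, q0, alpha)
pmax = grid[np.argmax(output)]
print(f"Q0 = {q0:.2f} mg, alpha = {alpha:.4f}, Pmax ≈ {pmax:.1f} responses/mg")
```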

  13. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
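
    The censoring rule is simple to state in code: every threshold estimate below the criterion is set to the criterion before the regression is rerun. The series below is invented and serves only to show the mechanics of the comparison.

```python
import numpy as np

# Hypothetical threshold estimates (dB) at one test location over 8 semi-annual visits.
visits = np.arange(8) * 0.5                      # years since baseline
thresholds = np.array([24., 22., 19., 15., 12., 8., 5., 2.])

def censor(values, floor=20.0):
    """Set every estimate below the criterion value equal to that value."""
    return np.maximum(values, floor)

# Pointwise linear regression slope with and without censoring at 20 dB.
slope_raw = np.polyfit(visits, thresholds, 1)[0]
slope_censored = np.polyfit(visits, censor(thresholds), 1)[0]
print(f"slope raw: {slope_raw:.1f} dB/yr, censored at 20 dB: {slope_censored:.1f} dB/yr")
```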

  14. Accuracy of Mobile-Based Audiometry in the Evaluation of Hearing Loss in Quiet and Noisy Environments.

    PubMed

    Saliba, Joe; Al-Reefi, Mahmoud; Carriere, Junie S; Verma, Neil; Provencal, Christiane; Rappaport, Jamie M

    2017-04-01

    Objectives (1) To compare the accuracy of 2 previously validated mobile-based hearing tests in determining pure tone thresholds and screening for hearing loss. (2) To determine the accuracy of mobile audiometry in noisy environments through noise reduction strategies. Study Design Prospective clinical study. Setting Tertiary hospital. Subjects and Methods Thirty-three adults with or without hearing loss were tested (mean age, 49.7 years; women, 42.4%). Air conduction thresholds measured as pure tone average and at individual frequencies were assessed by conventional audiogram and by 2 audiometric applications (consumer and professional) on a tablet device. Mobile audiometry was performed in a quiet sound booth and in a noisy sound booth (50 dB of background noise) through active and passive noise reduction strategies. Results On average, 91.1% (95% confidence interval [95% CI], 89.1%-93.2%) and 95.8% (95% CI, 93.5%-97.1%) of the threshold values obtained in a quiet sound booth with the consumer and professional applications, respectively, were within 10 dB of the corresponding audiogram thresholds, as compared with 86.5% (95% CI, 82.6%-88.5%) and 91.3% (95% CI, 88.5%-92.8%) in a noisy sound booth through noise cancellation. When screening for at least moderate hearing loss (pure tone average >40 dB HL), the consumer application showed a sensitivity and specificity of 87.5% and 95.9%, respectively, and the professional application, 100% and 95.9%. Overall, patients preferred mobile audiometry over conventional audiograms. Conclusion Mobile audiometry can correctly estimate pure tone thresholds and screen for moderate hearing loss. Noise reduction strategies in mobile audiometry provide a portable effective solution for hearing assessments outside clinical settings.

  15. Comparison of automatic procedures in the selection of peaks over threshold in flood frequency analysis: A Canadian case study in the context of climate change

    NASA Astrophysics Data System (ADS)

    Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.

    2017-12-01

    Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maximums, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for the selection of the threshold were suggested to guide that important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions, as well as to investigate the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
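
    The abstract does not name the specific automatic procedures that were compared; as a generic sketch of the idea, the code below fits a Generalized Pareto distribution to the excesses over each candidate threshold and keeps the lowest threshold whose fit is not rejected by a goodness-of-fit test. The candidate grid, acceptance rule, and synthetic discharge series are assumptions, not the study's procedures.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      flows = rng.gamma(shape=2.0, scale=150.0, size=5000)   # synthetic daily discharge series

      def select_threshold(x, candidates, alpha=0.05):
          """Return the lowest candidate threshold whose GPD fit to the excesses is not
          rejected by a KS test (p-values are only approximate because the GPD
          parameters are estimated from the same data)."""
          for u in sorted(candidates):
              excess = x[x > u] - u
              if excess.size < 30:
                  continue
              c, loc, scale = stats.genpareto.fit(excess, floc=0.0)
              p = stats.kstest(excess, stats.genpareto(c, loc=loc, scale=scale).cdf).pvalue
              if p > alpha:
                  return u, p
          return None, None

      cands = np.percentile(flows, np.arange(80, 99, 2))
      print(select_threshold(flows, cands))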

  16. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    PubMed

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Impact of heat stress on conception rate of dairy cows in the moderate climate considering different temperature-humidity index thresholds, periods relative to breeding, and heat load indices.

    PubMed

    Schüller, L K; Burfeind, O; Heuwieser, W

    2014-05-01

    The objectives of this retrospective study were to investigate the relationship between temperature-humidity index (THI) and conception rate (CR) of lactating dairy cows, to estimate a threshold for this relationship, and to identify periods of exposure to heat stress relative to breeding in an area of moderate climate. In addition, we compared three different heat load indices related to CR: mean THI, maximum THI, and number of hours above the mean THI threshold. The THI threshold for the influence of heat stress on CR was 73. It was statistically chosen based on the observed relationship between the mean THI at the day of breeding and the resulting CR. Negative effects of heat stress, however, were already apparent at lower levels of THI, and 1 hour of mean THI of 73 or more decreased the CR significantly. The CR of lactating dairy cows was negatively affected by heat stress both before and after the day of breeding. The greatest negative impact of heat stress on CR was observed 21 to 1 day before breeding. When the mean THI was 73 or more in this period, CR decreased from 31% to 12%. Compared with the average maximum THI and the total number of hours above the threshold (9 or more hours), the mean THI was the most sensitive heat load index relating to CR. These results indicate that the CR of dairy cows raised in moderate climates is highly affected by heat stress. Copyright © 2014 Elsevier Inc. All rights reserved.
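
    The abstract does not state which THI formula was used; the sketch below uses one common NRC-type formula as an assumption and computes the three heat load indices named above (mean THI, maximum THI, and hours at or above the threshold of 73) for a hypothetical day of hourly weather records.

      import numpy as np

      def thi(temp_c, rh_percent):
          """NRC-type temperature-humidity index (an assumed formula; the study's exact form is not given)."""
          return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26)

      rng = np.random.default_rng(2)
      temp = 22 + 6 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 1, 24)  # hourly air temperature, deg C
      rh = np.clip(70 + rng.normal(0, 10, 24), 20, 100)                              # hourly relative humidity, %

      hourly_thi = thi(temp, rh)
      threshold = 73
      print("mean THI:", hourly_thi.mean())
      print("max THI:", hourly_thi.max())
      print("hours at or above threshold:", int((hourly_thi >= threshold).sum()))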

  18. The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing

    NASA Astrophysics Data System (ADS)

    Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.

    2015-12-01

    One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements from the cochlear apex of basilar membrane or other organ-of-Corti transverse motion have been made from ears where the health, or sensitivity, in the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW) that originates from auditory nerve output in the apex. When responses from the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that this Auditory Nerve Overlapped Waveform - called ANOW - originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations to the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that ANOW can be used to monitor cochlear sensitivity of low-frequency regions during experiments that make apical basilar membrane motion measurements.
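
    A toy signal model of the cancellation described above: averaging responses to tones of opposite polarity removes the polarity-following cochlear microphonic, while the rectified, phase-locked neural component survives and interleaves to form a waveform at twice the probe frequency. The model is deliberately simplified and is not the authors' recording or analysis pipeline.

      import numpy as np

      fs, f_probe = 10000, 100                      # Hz; illustrative values only
      t = np.arange(0, 0.05, 1 / fs)

      def response(polarity):
          cm = polarity * np.sin(2 * np.pi * f_probe * t)                        # microphonic follows polarity
          neural = np.clip(polarity * np.sin(2 * np.pi * f_probe * t), 0, None)  # half-wave rectified, phase-locked firing
          return cm + 0.3 * neural

      anow = 0.5 * (response(+1) + response(-1))    # CM cancels; neural halves interleave
      # anow now oscillates at roughly 2 * f_probe; its amplitude tracks the apical neural output
      print("peak-to-peak ANOW amplitude:", anow.max() - anow.min())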

  19. A Systematic Review of Studies Eliciting Willingness-to-Pay per Quality-Adjusted Life Year: Does It Justify CE Threshold?

    PubMed Central

    Nimdet, Khachapon; Chaiyakunapruk, Nathorn; Vichansavakul, Kittaya; Ngorsuraches, Surachat

    2015-01-01

    Background A number of studies have been conducted to estimate willingness to pay (WTP) per quality-adjusted life years (QALY) in patients or the general population for various diseases. However, there has not been any systematic review summarizing the relationship between WTP per QALY and cost-effectiveness (CE) threshold based on World Health Organization (WHO) recommendation. Objective To systematically review willingness-to-pay per quality-adjusted-life-year (WTP per QALY) literature, to compare WTP per QALY with the cost-effectiveness (CE) threshold recommended by WHO, and to determine potential influencing factors. Methods We searched MEDLINE, EMBASE, Psyinfo, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Center of Research Dissemination (CRD), and EconLit from inception through 15 July 2014. To be included, studies had to estimate WTP per QALY in health-related issues using a stated preference method. Two investigators independently reviewed each abstract, completed full-text reviews, and extracted information for included studies. We compared WTP per QALY to GDP per capita, analyzed, and summarized potential influencing factors. Results Out of 3,914 articles found, 14 studies were included. Most studies (92.85%) used the contingent valuation method, while only one study used discrete choice experiments. Sample size varied from 104 to 21,896 persons. The ratio between WTP per QALY and GDP per capita varied widely from 0.05 to 5.40, depending on scenario outcomes (e.g., whether it extended/saved life or improved quality of life), severity of hypothetical scenarios, duration of scenario, and source of funding. The average ratio of WTP per QALY and GDP per capita for extending life or saving life (2.03) was significantly higher than the average for improving quality of life (0.59) with the mean difference of 1.43 (95% CI, 1.81 to 1.06). Conclusion This systematic review provides an overview of all studies estimating WTP per QALY. The variation in the ratio of WTP per QALY to GDP per capita, which depended on several factors, may prompt discussions on the CE threshold policy. Our research work provides a foundation for defining the future direction of decision criteria for an evidence-informed decision making system. PMID:25855971

  20. Application of threshold concepts to ecological management problems: occupancy of Golden Eagles in Denali National Park, Alaska: Chapter 5

    USGS Publications Warehouse

    Eaton, Mitchell J.; Martin, Julien; Nichols, James D.; McIntyre, Carol; McCluskie, Maggie C.; Schmutz, Joel A.; Lubow, Bruce L.; Runge, Michael C.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    In this chapter, we demonstrate the application of the various classes of thresholds, detailed in earlier chapters and elsewhere, via an actual but simplified natural resource management case study. We intend our example to provide the reader with the ability to recognize and apply the theoretical concepts of utility, ecological and decision thresholds to management problems through a formalized decision-analytic process. Our case study concerns the management of human recreational activities in Alaska’s Denali National Park, USA, and the possible impacts of such activities on nesting Golden Eagles, Aquila chrysaetos. Managers desire to allow visitors the greatest amount of access to park lands, provided that eagle nesting-site occupancy is maintained at a level determined to be acceptable by the managers themselves. As these two management objectives are potentially at odds, we treat minimum desired occupancy level as a utility threshold which, then, serves to guide the selection of annual management alternatives in the decision process. As human disturbance is not the only factor influencing eagle occupancy, we model nesting-site dynamics as a function of both disturbance and prey availability. We incorporate uncertainty in these dynamics by considering several hypotheses, including a hypothesis that site occupancy is affected only at a threshold level of prey abundance (i.e., an ecological threshold effect). By considering competing management objectives and accounting for two forms of thresholds in the decision process, we are able to determine the optimal number of annual nesting-site restrictions that will produce the greatest long-term benefits for both eagles and humans. Setting a utility threshold of 75 occupied sites, out of a total of 90 potential nesting sites, the optimization specified a decision threshold at approximately 80 occupied sites. At the point that current occupancy falls below 80 sites, the recommended decision is to begin restricting access to humans; above this level, it is recommended that all eagle territories be opened to human recreation. We evaluated the sensitivity of the decision threshold to uncertainty in system dynamics and to management objectives (i.e., to the utility threshold).

  1. Comparison of the diagnostic accuracy, sensitivity and specificity of four odontological methods for age evaluation in Italian children at the age threshold of 14 years using ROC curves.

    PubMed

    Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide

    2016-01-01

    The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
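
    A sketch of how a Youden-optimal decision criterion is read off a ROC curve for a binary age threshold; the synthetic dental-age estimates and error model below are illustrative only, not the study's sample.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(3)
      true_age = rng.uniform(11, 16, 500)
      dental_age = true_age + rng.normal(0, 0.8, 500)    # hypothetical estimation error
      is_over_14 = (true_age >= 14).astype(int)

      fpr, tpr, thr = roc_curve(is_over_14, dental_age)
      youden = tpr - fpr                                  # Youden index J = sensitivity + specificity - 1
      best = np.argmax(youden)
      print("AUC:", roc_auc_score(is_over_14, dental_age))
      print("Youden-optimal cut-off (years):", thr[best],
            "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])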

  2. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    PubMed

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  3. Reliability of the method of levels for determining cutaneous temperature sensitivity

    NASA Astrophysics Data System (ADS)

    Jakovljević, Miroljub; Mekjavić, Igor B.

    2012-09-01

    Determination of the thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate reliability of the method of levels performed with a new, low cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
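
    For reference, the reliability statistics reported above are related by SEM = SD x sqrt(1 - ICC) and MDC = 1.96 x sqrt(2) x SEM, and the coefficient of repeatability is commonly taken as 1.96 times the SD of the test-retest differences; a small sketch with made-up warm-threshold data follows (the Pearson correlation stands in for a proper ANOVA-based ICC).

      import numpy as np

      rng = np.random.default_rng(4)
      session1 = rng.normal(1.0, 0.4, 19)                 # hypothetical warm thresholds (deg C), session 1
      session2 = session1 + rng.normal(0, 0.3, 19)        # session 2 retest

      diff = session1 - session2
      icc_proxy = np.corrcoef(session1, session2)[0, 1]   # simple proxy; a true ICC uses an ANOVA model
      sd_pooled = np.std(np.concatenate([session1, session2]), ddof=1)

      sem = sd_pooled * np.sqrt(1 - icc_proxy)            # standard error of measurement
      mdc = 1.96 * np.sqrt(2) * sem                       # minimally detectable change
      cr = 1.96 * np.std(diff, ddof=1)                    # coefficient of repeatability (Bland-Altman style)
      print(f"SEM={sem:.2f}  MDC={mdc:.2f}  CR={cr:.2f}  mean difference={diff.mean():.2f} (deg C)")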

  4. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss causes an increase in the level of the sound that is just audible for the person, which affects the cochlea compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmic sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds using the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.

  5. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient which is an equivalence to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm based on two sets of physically realisable system input-output data details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when local minimum is present as the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and a gas turbine engine real system for model identification, respectively. Finally, the graphical validation of threshold on a two-dimensional plot is discussed.

  6. Implications of lower risk thresholds for statin treatment in primary prevention: analysis of CPRD and simulation modelling of annual cholesterol monitoring.

    PubMed

    McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael

    2015-01-01

    To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and whether there are harms. Copyright © 2014. Published by Elsevier Inc.

  7. Predicting Geriatric Falls Following an Episode of Emergency Department Care: A Systematic Review

    PubMed Central

    Carpenter, Christopher R.; Avidan, Michael S.; Wildes, Tanya; Stark, Susan; Fowler, Susan A.; Lo, Alexander X.

    2015-01-01

    Background Falls are the leading cause of traumatic mortality in geriatric adults. Despite recent multispecialty guideline recommendations that advocate for proactive fall prevention protocols in the emergency department (ED), the ability of risk factors or risk stratification instruments to identify subsets of geriatric patients at increased risk for short-term falls is largely unexplored. Objectives This was a systematic review and meta-analysis of ED-based history, physical examination, and fall risk stratification instruments with the primary objective of providing a quantitative estimate for each risk factor's accuracy to predict future falls. A secondary objective was to quantify ED fall risk assessment test and treatment thresholds using derived estimates of sensitivity and specificity. Methods A medical librarian and two emergency physicians (EPs) conducted a medical literature search of PUBMED, EMBASE, CINAHL, CENTRAL, DARE, the Cochrane Registry, and Clinical Trials. Unpublished research was located by a hand search of emergency medicine (EM) research abstracts from national meetings. Inclusion criteria for original studies included ED-based assessment of pre-ED or post-ED fall risk in patients 65 years and older with sufficient detail to reproduce contingency tables for meta-analysis. Original study authors were contacted for additional details when necessary. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS-2) was used to assess individual study quality for those studies that met inclusion criteria. When more than one qualitatively similar study assessed the same risk factor for falls at the same interval following an ED evaluation, then meta-analysis was performed using Meta-DiSc software. The primary outcomes were sensitivity, specificity, and likelihood ratios for fall risk factors or risk stratification instruments. Secondary outcomes included estimates of test and treatment thresholds using the Pauker method based on accuracy, screening risk, and the projected benefits or harms of fall prevention interventions in the ED. Results A total of 608 unique and potentially relevant studies were identified, but only three met our inclusion criteria. Two studies that included 660 patients assessed 29 risk factors and two risk stratification instruments for falls in geriatric patients in the 6 months following an ED evaluation, while one study of 107 patients assessed the risk of falls in the preceding 12 months. A self-report of depression was associated with the highest positive likelihood ratio (LR) of 6.55 (95% confidence interval [CI] = 1.41 to 30.48). Six fall predictors were identified in more than one study (past falls, living alone, use of walking aid, depression, cognitive deficit, and more than six medications) and meta-analysis was performed for these risk factors. One screening instrument was sufficiently accurate to identify a subset of geriatric ED patients at low risk for falls with a negative LR of 0.11 (95% CI = 0.06 to 0.20). The test threshold was 6.6% and the treatment threshold was 27.5%. Conclusions This study demonstrates the paucity of evidence in the literature regarding ED-based screening for risk of future falls among older adults. The screening tools and individual characteristics identified in this study provide an evidentiary basis on which to develop screening protocols for geriatric adults in the ED to reduce fall risk. PMID:25293956

  8. No-threshold dose-response curves for nongenotoxic chemicals: Findings and applications for risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehan, Daniel M.

    2006-01-15

    We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from approximately 1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate.
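
    A sketch of fitting a saturating dose-response with a nonhormonal background and an endogenous-dose offset, in the spirit of the Michaelis-Menten description above; the functional form, parameter names, and synthetic data are assumptions rather than the authors' exact model.

      import numpy as np
      from scipy.optimize import curve_fit

      def mm_response(dose, r_max, k_m, d_endo, background):
          """Saturating response with an endogenous dose d_endo and an additive background term."""
          total = dose + d_endo
          return background + (r_max - background) * total / (k_m + total)

      rng = np.random.default_rng(5)
      dose = np.logspace(-3, 2, 12)                      # exogenous doses (arbitrary units)
      truth = mm_response(dose, 0.95, 1.0, 0.05, 0.03)
      obs = np.clip(truth + rng.normal(0, 0.03, dose.size), 0, 1)

      params, _ = curve_fit(mm_response, dose, obs,
                            p0=[1.0, 1.0, 0.01, 0.05],
                            bounds=(0, [1.5, 100.0, 10.0, 1.0]))
      print(dict(zip(["r_max", "k_m", "d_endo", "background"], params)))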

  9. [Objective and subjective requirement of aids and appliances in patients with neurogenic lower urinary tract dysfunction : Multicenter study to determinate the daily necessity of urological aids and appliances].

    PubMed

    Bremer, J; Böthig, R; Domurath, B; Kutzenberger, J; Kaufmann, A; Pretzer, J; Klask, J P; Geng, V; Vance, W; Kurze, I

    2016-12-01

    The provision of urological appliances for patients with neurogenic lower urinary tract dysfunction (NLUTD) is essential. Hitherto existing standard guidelines for the estimation of monthly material requirements are based solely on estimates. The goal of this work was to define the objective and subsequently subjective requirements for urological appliances on a scientifically validated basis. Data concerning bladder management and daily consumption of urological appliances for patients with NLUTD were collected through a standardized survey at six different centers in Germany during the period of October to December 2014 and statistically evaluated. In all, 767 patient records were analyzed: 543 men and 221 woman (N/A = 3). The daily disposable catheter consumption of 577 patients who exclusively used intermittent catheterization was 5.13. Patients who used other means of bladder emptying (n = 31) in addition to catheterization consumed on average 3.17 catheters. The margin of deviation was larger for children. Of the 608 patients with intermittent catheterization, 94 (15.5 %) required additional paddings as absorbent aids (on average 2.29 paddings per day), 34 patients (5.6 %) additionally used pants (2.55 per day) and 46 patients (7.6 %) utilized condom catheters (3.81 per day) between catheterization. Among all surveyed patients, 126 (16.4 %) used paddings (5.03 per day) and 51 patients (6.6 %) pants (3.03 per day). Of all male respondents 82 (15.1 %) used condom catheters (2.80 urinary sheaths per day). Applying twice the standard deviation of the mean as a measure of assessing the objective requirement of urological appliances and aids for adult patients with NLUTD allows the following daily thresholds to be defined: 1-9 disposable catheters, 0-7 urinary sheaths, 1-9 paddings and 0-7 pants. These thresholds can serve as a basis for estimating the subjective need. They allow for a scientifically validated benchmark for an economically feasible and patient-tailored supply with urological aids and appliances. Individually required appliances and aids have to be recognized. Verifiable quality standards need to be developed.
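
    Read as a mean plus/minus two standard deviations rule, the threshold calculation described above is simple arithmetic; the sketch below applies it to hypothetical per-patient daily catheter counts (the numbers are invented, not the survey data).

      import numpy as np

      daily_catheters = np.array([4, 5, 6, 5, 4, 7, 3, 5, 6, 5])   # hypothetical per-patient daily means
      mean, sd = daily_catheters.mean(), daily_catheters.std(ddof=1)
      lower = max(0, int(round(mean - 2 * sd)))                    # clip at zero: counts cannot be negative
      upper = int(round(mean + 2 * sd))
      print(f"objective daily requirement: {lower}-{upper} disposable catheters")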

  10. Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology

    NASA Astrophysics Data System (ADS)

    Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.

    2015-01-01

    Collagen-elastin biopolymer thin films treated by a CPA Ti:Sapphire laser (Femtopower Compact Pro) at 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. A process of surface modification and microporous scaffold creation after ultrashort laser irradiation has been observed. The single-shot (N=1) and multi-shot (N>1) ablation threshold values were estimated by studying the linear relationship between the square of the crater diameter D² and the logarithm of the laser fluence F, yielding threshold fluences for N = 1, 2, 5, 10, 15 and 30 laser pulses. An incubation analysis was also performed by calculating the incubation coefficient ξ for the multi-shot fluence threshold of the selected materials using the power-law relationship Fth(N) = Fth(1)·N^(ξ-1). In this paper, we also show an alternative calculation of the multi-shot ablation threshold based on the logarithmic dependence of the ablation rate d on the laser fluence. The morphological surface changes of the modified regions were characterized by scanning electron microscopy to estimate the variations generated by the laser treatment.
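
    A sketch of the two fits mentioned above: the threshold fluence from the linear relation D² = 2·w0²·ln(F/Fth), and the incubation coefficient ξ from the power law Fth(N) = Fth(1)·N^(ξ-1). All fluences and crater diameters below are synthetic illustrations.

      import numpy as np

      def fluence_threshold(fluence, diam):
          """Fit D^2 = 2*w0^2 * ln(F/Fth); returns (Fth, w0)."""
          slope, intercept = np.polyfit(np.log(fluence), diam ** 2, 1)
          return np.exp(-intercept / slope), np.sqrt(slope / 2)

      # synthetic single-shot data: beam radius w0 = 15 um, Fth = 0.6 J/cm^2
      F = np.linspace(0.8, 3.0, 10)
      D = np.sqrt(2 * 15.0 ** 2 * np.log(F / 0.6))
      print("N=1 threshold and w0:", fluence_threshold(F, D))

      # incubation: ln Fth(N) = ln Fth(1) + (xi - 1) * ln N
      N = np.array([1, 2, 5, 10, 15, 30])
      Fth_N = 0.6 * N ** (0.85 - 1)                       # synthetic thresholds with xi = 0.85
      xi = 1 + np.polyfit(np.log(N), np.log(Fth_N), 1)[0]
      print("incubation coefficient xi:", xi)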

  11. Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters

    NASA Astrophysics Data System (ADS)

    Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon

    2018-04-01

    In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the diverse illumination conditions encountered throughout the day, which leads to better vehicle detection performance than a fixed, user-defined threshold. In the second process, the paper proposes a novel vehicle tracking algorithm, Fuzzy-based Vehicle Analysis (FBA), to reduce false tracking estimates caused by the uneven edges of large vehicles and by vehicles changing lanes. The FBA algorithm combines the average edge density with the Horizontal Moving Edge Detection (HMED) algorithm and applies fuzzy rules to rectify the vehicle tracking. The experimental results demonstrate that the proposed system achieves a high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.
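
    The GATE algorithm itself is not detailed in the abstract; as a rough stand-in for the idea of deriving the edge threshold from the current frame's own gradient statistics, the sketch below thresholds a Sobel gradient magnitude at the frame mean plus a multiple of its standard deviation. The multiplier, the choice of statistic, and the random frame are assumptions.

      import numpy as np
      from scipy import ndimage

      def adaptive_edge_threshold(frame, k=1.5):
          """Edge map using a threshold derived from the frame's own gradient statistics."""
          gx = ndimage.sobel(frame.astype(float), axis=1)
          gy = ndimage.sobel(frame.astype(float), axis=0)
          mag = np.hypot(gx, gy)
          thr = mag.mean() + k * mag.std()        # adapts as illumination changes the gradient statistics
          return mag > thr, thr

      rng = np.random.default_rng(6)
      frame = rng.integers(0, 255, (120, 160)).astype(np.uint8)   # stand-in for a traffic camera frame
      edges, thr = adaptive_edge_threshold(frame)
      print("threshold:", round(thr, 1), "edge pixels:", int(edges.sum()))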

  12. An analysis of I/O efficient order-statistic-based techniques for noise power estimation in the HRMS sky survey's operational system

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Olsen, E. T.

    1992-01-01

    Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
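
    A sketch of the single-pass threshold-and-count idea: a bank of fixed thresholds is compared against the data in one pass, and the noise level is taken from the threshold whose exceedance fraction is closest to the target quantile. The threshold grid, target quantile, and synthetic power values are illustrative, not the HRMS design.

      import numpy as np

      def threshold_and_count(samples, thresholds, target_quantile=0.5):
          """Single-pass estimate of a quantile: count exceedances of each fixed threshold
          and pick the threshold whose exceedance fraction is closest to the target."""
          counts = np.array([(samples > t).sum() for t in thresholds])  # parallelizable counters
          frac_above = counts / samples.size
          best = np.argmin(np.abs(frac_above - (1 - target_quantile)))
          return thresholds[best]

      rng = np.random.default_rng(7)
      noise_power = rng.chisquare(df=2, size=100000)          # synthetic spectral power bins
      grid = np.linspace(0.1, 8.0, 64)                         # fixed thresholds spanning a limited dynamic range
      print("threshold-and-count median estimate:", threshold_and_count(noise_power, grid))
      print("true median:", np.median(noise_power))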

  13. Robust w-Estimators for Cryo-EM Class Means

    PubMed Central

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
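
    The paper's w-estimator is not reproduced here; as a generic illustration of a threshold-free, weight-based robust average, the sketch below iteratively re-weights images so that outlier frames are smoothly down-weighted rather than rejected at a hard cutoff. The weight function and the synthetic image stack are assumptions.

      import numpy as np

      def robust_mean(images, n_iter=10, c=1.5):
          """Iteratively reweighted average of images: weights decay smoothly with residual
          size (a generic Cauchy-type weight), so no hard outlier threshold is needed."""
          mean = images.mean(axis=0)
          for _ in range(n_iter):
              resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
              scale = np.median(resid) + 1e-12
              w = 1.0 / (1.0 + (resid / (c * scale)) ** 2)      # down-weights outlier images
              mean = np.tensordot(w, images, axes=1) / w.sum()
          return mean

      rng = np.random.default_rng(8)
      good = rng.normal(0, 1, (40, 32, 32)) + 1.0                # aligned particle images (signal level 1)
      outliers = rng.normal(5, 3, (5, 32, 32))                   # contaminant / ice images
      stack = np.concatenate([good, outliers])
      print("plain mean at centre pixel:", stack.mean(axis=0)[16, 16])
      print("robust mean at centre pixel:", robust_mean(stack)[16, 16])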

  14. Threshold concepts: implications for the management of natural resources

    USGS Publications Warehouse

    Guntenspergen, Glenn R.; Gross, John

    2014-01-01

    Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess and detect conditions that demand management action. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), as well as ecological thresholds. All of these concepts provide a framework for considering the use of thresholds in natural resource decision making.

  15. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  16. The Utility of Selection for Military and Civilian Jobs

    DTIC Science & Technology

    1989-07-01

    parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not

  17. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that the percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192

  18. Cost-Savings Analysis of Renal Scintigraphy, Stratified by Renal Function Thresholds: Mercaptoacetyltriglycine Versus Diethylene Triamine Penta-Acetic Acid.

    PubMed

    Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J

    2016-07-01

    To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m²): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73 m² (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. Cool, warm, and heat-pain detection thresholds: testing methods and inferences about anatomic distribution of receptors.

    PubMed

    Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C

    1993-08-01

    We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Adult norms of the perceptual threshold of touch (PTT) in the hands and feet in relation to age, gender, and right and left side using transcutaneous electrical nerve stimulation.

    PubMed

    Eek, Elsy; Holmqvist, Lotta Widén; Sommerfeld, Disa K

    2012-07-01

    There is a lack of standardized and quantifiable measures of touch function for clinical work. Furthermore, it is not possible to make accurate diagnostic judgments of touch function before normative values are estimated. The objectives of this study were to establish adult norms of the perceptual threshold of touch (PTT) for the hands and feet according to age and gender and to determine the effect of right/left side, handedness, height, weight, and body mass index (BMI) on the PTT. The PTT was assessed by using a high-frequency transcutaneous electrical nerve stimulator (Hf/TENS) with self-adhesive skin electrodes in 346 adults. The PTT was identified as the level registered in mA at which the participants perceived a tingling sensation. The PTT for all participants was a median of 3.75 mA (range 2.50-7.25) in the hands and a median of 10.00 mA (range 5.00-30.00) in the feet. The PTT increased with increasing age. Men reported higher PTT than women. The right hand had higher PTT than the left. Handedness, height, weight, and BMI did not affect the PTT. Adult norms of the PTT in the hands for age, gender, and right/left side are presented for four age groups. The present study's estimate of the PTT in the hands could be used as adult norms. Adult norms for the feet could not be estimated because the PTT values in the feet showed a great variance.

  1. Diagnostic Performance of a Rapid Magnetic Resonance Imaging Method of Measuring Hepatic Steatosis

    PubMed Central

    House, Michael J.; Gan, Eng K.; Adams, Leon A.; Ayonrinde, Oyekoya T.; Bangma, Sander J.; Bhathal, Prithi S.; Olynyk, John K.; St. Pierre, Tim G.

    2013-01-01

    Objectives Hepatic steatosis is associated with an increased risk of developing serious liver disease and other clinical sequelae of the metabolic syndrome. However, visual estimates of steatosis from histological sections of biopsy samples are subjective and reliant on an invasive procedure with associated risks. The aim of this study was to test the ability of a rapid, routinely available, magnetic resonance imaging (MRI) method to diagnose clinically relevant grades of hepatic steatosis in a cohort of patients with diverse liver diseases. Materials and Methods Fifty-nine patients with a range of liver diseases underwent liver biopsy and MRI. Hepatic steatosis was quantified firstly using an opposed-phase, in-phase gradient echo, single breath-hold MRI methodology and secondly, using liver biopsy with visual estimation by a histopathologist and by computer-assisted morphometric image analysis. The area under the receiver operating characteristic (ROC) curve was used to assess the diagnostic performance of the MRI method against the biopsy observations. Results The MRI approach had high sensitivity and specificity at all hepatic steatosis thresholds. Areas under ROC curves were 0.962, 0.993, and 0.972 at thresholds of 5%, 33%, and 66% liver fat, respectively. MRI measurements were strongly associated with visual (r2 = 0.83) and computer-assisted morphometric (r2 = 0.84) estimates of hepatic steatosis from histological specimens. Conclusions This MRI approach, using a conventional, rapid, gradient echo method, has high sensitivity and specificity for diagnosing liver fat at all grades of steatosis in a cohort with a range of liver diseases. PMID:23555650

  2. Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)

    2001-01-01

    This research considers the application of robust Z1 estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z1 estimators based on multiplier theory and then develops a fixed threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z1 estimation with a fixed threshold are demonstrated. FD that combines robust Z1 estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.

  3. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
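
    A sketch of the empirical optimization described above: for each candidate threshold, compute the across-snapshot correlation between an area-average rain-rate moment and the fractional area exceeding the threshold, and keep the maximizing threshold. Lognormal fields stand in for the GATE radar snapshots.

      import numpy as np

      rng = np.random.default_rng(9)
      # 200 synthetic "snapshots", each a field of lognormal rain rates (mm/h) with varying intensity
      snapshots = [rng.lognormal(mean=mu, sigma=1.2, size=2000) for mu in rng.normal(0.5, 0.6, 200)]

      def optimal_threshold(fields, candidates, moment=1):
          areal_moment = np.array([np.mean(f ** moment) for f in fields])
          best_tau, best_r = None, -np.inf
          for tau in candidates:
              coverage = np.array([np.mean(f > tau) for f in fields])   # fractional area above threshold
              r = np.corrcoef(areal_moment, coverage)[0, 1]
              if r > best_r:
                  best_tau, best_r = tau, r
          return best_tau, best_r

      print("1st moment:", optimal_threshold(snapshots, np.arange(1, 40, 1.0), moment=1))
      print("2nd moment:", optimal_threshold(snapshots, np.arange(1, 40, 1.0), moment=2))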

  4. Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.

    PubMed

    Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L

    2017-08-01

    Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. Validation study comparing actigraph against accepted gold standard polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data was compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (-4.1 min); whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5 min). The standard error of the estimate was similar across all thresholds; total sleep time ∼25 min, sleep efficiency ∼4.5%, wake after sleep onset ∼21 min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.

  5. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
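
    A sketch of the two threshold rules and the interpolated crossing time, using made-up half-hourly melatonin values; the clock times and concentrations below are illustrative only.

      import numpy as np

      times = np.arange(18.0, 24.0, 0.5)                         # clock hours, half-hourly sampling
      melatonin = np.array([0.6, 0.7, 0.8, 0.9, 1.1, 1.6, 2.4, 3.6, 5.4, 8.0, 11.0, 14.0])  # pg/mL

      def dlmo(t, y, threshold):
          """Clock time at which melatonin first rises through the threshold (linear interpolation)."""
          above = np.nonzero(y >= threshold)[0]
          if above.size == 0:
              return None
          i = above[0]
          if i == 0:
              return t[0]
          frac = (threshold - y[i - 1]) / (y[i] - y[i - 1])
          return t[i - 1] + frac * (t[i] - t[i - 1])

      fixed = 3.0                                                  # fixed 3 pg/mL threshold
      low3 = melatonin[:3]
      variable = low3.mean() + 2 * low3.std(ddof=1)                # "3k" threshold: mean + 2 SD of first three daytime points
      print("DLMO (3 pg/mL threshold):", dlmo(times, melatonin, fixed))
      print("DLMO (3k threshold):", dlmo(times, melatonin, variable))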

  6. Evidence-based diagnostics: adult septic arthritis.

    PubMed

    Carpenter, Christopher R; Schuur, Jeremiah D; Everett, Worth W; Pines, Jesse M

    2011-08-01

    Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10⁹-25 × 10⁹/L was 0.33; for 25 × 10⁹-50 × 10⁹/L, 1.06; for 50 × 10⁹-100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (-LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis. 
Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. © 2011 by the Society for Academic Emergency Medicine.
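
    As an illustration of the kind of threshold calculation described above (not the authors' interactive spreadsheet), the sketch below applies one common form of the Pauker-Kassirer threshold formulas to the quoted sensitivity and specificity for sWBC counts above 50 × 10(9)/L; the risk/benefit utilities are hypothetical placeholders, so the printed thresholds will differ from the 5%/39% reported.

```python
def pauker_kassirer_thresholds(sens, spec, harm_rx, benefit_rx, harm_test):
    """Test and treatment probability thresholds (one common form of the
    Pauker-Kassirer formulas). harm_rx: net harm of treating a patient without
    the disease; benefit_rx: net benefit of treating a diseased patient;
    harm_test: risk of the test itself (all in the same utility units)."""
    fpr, fnr = 1.0 - spec, 1.0 - sens
    test_thr = (fpr * harm_rx + harm_test) / (fpr * harm_rx + sens * benefit_rx)
    treat_thr = (spec * harm_rx - harm_test) / (spec * harm_rx + fnr * benefit_rx)
    return test_thr, treat_thr

# Hypothetical utilities chosen only for illustration (not the paper's values).
test_thr, treat_thr = pauker_kassirer_thresholds(
    sens=0.56, spec=0.90, harm_rx=0.15, benefit_rx=0.60, harm_test=0.01)
print(f"test threshold ~{test_thr:.0%}, treatment threshold ~{treat_thr:.0%}")
```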

  7. Spontaneous Subarachnoid Hemorrhage: A Systematic Review and Meta-Analysis Describing the Diagnostic Accuracy of History, Physical Exam, Imaging, and Lumbar Puncture with an Exploration of Test Thresholds

    PubMed Central

    Carpenter, Christopher R.; Hussain, Adnan M.; Ward, Michael J.; Zipfel, Gregory J.; Fowler, Susan; Pines, Jesse M.; Sivilotti, Marco L.A.

    2016-01-01

    Background Spontaneous subarachnoid hemorrhage (SAH) is a rare, but serious etiology of headache. The diagnosis of SAH is especially challenging in alert, neurologically intact patients, as missed or delayed diagnosis can be catastrophic. Objectives To perform a diagnostic accuracy systematic review and meta-analysis of history, physical examination, cerebrospinal fluid (CSF) tests, computed tomography (CT), and clinical decision rules for spontaneous SAH. A secondary objective was to delineate probability of disease thresholds for imaging and lumbar puncture (LP). Methods PUBMED, EMBASE, SCOPUS, and research meeting abstracts were searched up to June 2015 for studies of emergency department (ED) patients with acute headache clinically concerning for spontaneous SAH. QUADAS-2 was used to assess study quality and, when appropriate, meta-analysis was conducted using random effects models. Outcomes were sensitivity, specificity, positive (LR+) and negative (LR−) likelihood ratios. To identify test- and treatment-thresholds, we employed the Pauker-Kassirer method with Bernstein test-indication curves using the summary estimates of diagnostic accuracy. Results A total of 5,022 publications were identified, of which 122 underwent full-text review; 22 studies were included (average SAH prevalence 7.5%). Diagnostic studies differed in assessment of history and physical exam findings, CT technology, analytical techniques used to identify xanthochromia, and criterion standards for SAH. Study quality by QUADAS-2 was variable; however, most had a relatively low risk of bias. A history of neck pain (LR+ 4.1 [95% CI 2.2-7.6]) and neck stiffness on physical exam (LR+ 6.6 [4.0-11.0]) were the individual findings most strongly associated with SAH. Combinations of findings may rule out SAH, yet promising clinical decision rules await external validation. Non-contrast cranial CT within 6 hours of headache onset accurately ruled-in (LR+ 230 [6-8700]) and ruled-out SAH (LR− 0.01 [0-0.04]); CT beyond 6 hours had a LR− of 0.07 [0.01-0.61]. CSF analyses had lower diagnostic accuracy, whether using red blood cell (RBC) count or xanthochromia. At a threshold RBC count of 1,000 × 10^6/L, the LR+ was 5.7 [1.4-23] and LR− 0.21 [0.03-1.7]. Using the pooled estimates of diagnostic accuracy and testing risks and benefits, we estimate LP only benefits CT-negative patients when the pre-LP probability of SAH is on the order of 5%, which corresponds to a pre-CT probability greater than 20%. Conclusions Less than one in ten headache patients concerning for SAH are ultimately diagnosed with SAH in recent studies. While certain symptoms and signs increase or decrease the likelihood of SAH, no single characteristic is sufficient to rule-in or rule-out SAH. Within 6 hours of symptom onset, non-contrast cranial CT is highly accurate, while a negative CT beyond 6 hours substantially reduces the likelihood of SAH. LP appears to benefit relatively few patients within a narrow pre-test probability range. With improvements in CT technology and an expanding body of evidence, test-thresholds for LP may become more precise, obviating the need for a post-CT LP in more acute headache patients. Existing SAH clinical decision rules await external validation, but offer the potential to identify subsets most likely to benefit from post-CT LP, angiography, or no further testing. PMID:27306497
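
    The likelihood ratios reported above act on pre-test probability through odds. The following minimal sketch shows the standard conversion, using the pooled prevalence and the early-CT likelihood ratios quoted in the abstract as example inputs.

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes' theorem in odds form: posttest odds = pretest odds x LR."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

prevalence = 0.075                                 # pooled SAH prevalence from the abstract
print(post_test_probability(prevalence, 230))      # positive CT within 6 hours of onset
print(post_test_probability(prevalence, 0.01))     # negative CT within 6 hours of onset
```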

  8. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

    Many grey-level thresholding methods based on histograms or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images pay little attention to the morphology of the objects of interest, which can provide important cues for finding the optimum threshold, especially for structures with distinctive texture morphologies in medical imaging, such as vasculature or neural networks. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel segments inside each sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of neighboring regions, both in the x-y plane and along the optical axis, are then referenced to obtain the final threshold for the region of interest, which makes the whole stack of images more continuous. The resulting images are characterized by suppression of both noise and non-target tissue adhering to the vessels, while vessel connectivity and edge definition are improved. The value of the method for thresholding fluorescence images of biological objects is demonstrated by comparing the results of 3D vascular reconstruction.

  9. Taste Responses to Linoleic Acid: A Crowdsourced Population Study.

    PubMed

    Garneau, Nicole L; Nuessle, Tiffany M; Tucker, Robin M; Yao, Mengjie; Santorico, Stephanie A; Mattes, Richard D

    2017-10-31

    Dietary fats serve multiple essential roles in human health but may also contribute to acute and chronic health complications. Thus, understanding mechanisms that influence fat ingestion are critical. All sensory systems may contribute relevant cues to fat detection, with the most recent evidence supporting a role for the sense of taste. Taste detection thresholds for fat vary markedly between individuals and responses are not normally distributed. Genetics may contribute to these observations. Using crowdsourced data obtained from families visiting the Denver Museum of Nature & Science, our objective was to estimate the heritability of fat taste (oleogustus). A pedigree analysis was conducted with 106 families (643 individuals) who rated the fat taste intensity of graded concentrations of linoleic acid (LA) embedded in taste strips. The findings estimate that 19% (P = 0.043) of the variability of taste response to LA relative to baseline is heritable at the highest concentration tested. © The Author 2017. Published by Oxford University Press.

  10. Taste Responses to Linoleic Acid: A Crowdsourced Population Study

    PubMed Central

    Nuessle, Tiffany M; Tucker, Robin M; Yao, Mengjie; Santorico, Stephanie A; Mattes, Richard D

    2017-01-01

    Abstract Dietary fats serve multiple essential roles in human health but may also contribute to acute and chronic health complications. Thus, understanding mechanisms that influence fat ingestion are critical. All sensory systems may contribute relevant cues to fat detection, with the most recent evidence supporting a role for the sense of taste. Taste detection thresholds for fat vary markedly between individuals and responses are not normally distributed. Genetics may contribute to these observations. Using crowdsourced data obtained from families visiting the Denver Museum of Nature & Science, our objective was to estimate the heritability of fat taste (oleogustus). A pedigree analysis was conducted with 106 families (643 individuals) who rated the fat taste intensity of graded concentrations of linoleic acid (LA) embedded in taste strips. The findings estimate that 19% (P = 0.043) of the variability of taste response to LA relative to baseline is heritable at the highest concentration tested. PMID:28968903

  11. Approach for estimating the dynamic physical thresholds of phytoplankton production and biomass in the tropical-subtropical Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo

    2017-06-01

    Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.

  12. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V

    PubMed Central

    Zamba, Gideon K. D.; Artes, Paul H.

    2018-01-01

    Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
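
    A rough sketch of the censoring comparison described above is given below: threshold estimates under a criterion are clamped to that criterion, and a linear slope is then fit to a simple mean-sensitivity index over visits. The field layout, noise level, and use of a plain mean instead of a true MD index are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def censor(thresholds_db, criterion_db=20.0):
    """Set any threshold estimate below the criterion to the criterion value."""
    return np.maximum(thresholds_db, criterion_db)

def md_slope(visits_years, fields_db):
    """Linear-regression slope (dB/year) of a simple mean-sensitivity index.
    A true MD index is age- and location-weighted; a plain mean is used here
    only to illustrate the censoring comparison."""
    md = np.array([np.mean(f) for f in fields_db])
    slope, intercept = np.polyfit(visits_years, md, 1)
    return slope

rng = np.random.default_rng(0)
visits = np.arange(0, 4.5, 0.5)                      # 6-monthly visits over 4 years
true_mean = 28.0 - 1.5 * visits                      # hypothetical progressing field
fields = [true_mean[i] + rng.normal(0, 2, size=52) for i in range(len(visits))]

print("raw slope     :", md_slope(visits, fields))
print("censored slope:", md_slope(visits, [censor(f) for f in fields]))
```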

  13. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data from a question that directly asked the radiologists their biopsy recommendations, which was used as the "truth," because it is the actual recall decision, thus it is their subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates using thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
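
    The comparison described above, thresholding rating-scale data versus using the explicit recommendation, can be sketched as follows; the reader data here are hypothetical and the code is only a schematic of the comparison, not the study's analysis.

```python
import numpy as np

def sens_spec(decisions, truth):
    """Sensitivity and specificity from binary decisions against truth."""
    decisions, truth = np.asarray(decisions, bool), np.asarray(truth, bool)
    sens = (decisions & truth).sum() / truth.sum()
    spec = (~decisions & ~truth).sum() / (~truth).sum()
    return sens, spec

# Hypothetical reader data: 1-5 malignancy ratings, explicit biopsy calls, and truth.
ratings   = np.array([1, 2, 2, 3, 4, 5, 3, 4, 1, 5, 2, 4])
biopsy    = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1], bool)
malignant = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0], bool)

print("from recommendation:", sens_spec(biopsy, malignant))
for cut in range(2, 6):                   # threshold the rating scale instead
    print(f"rating >= {cut}:", sens_spec(ratings >= cut, malignant))
```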

  14. Is "No-Threshold" a "Non-Concept"?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
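
    One standard way to estimate a threshold value from exposure-response data, in the spirit of the statistical estimation mentioned above (though not necessarily the author's method), is a segmented "hockey-stick" fit; the sketch below scans candidate breakpoints and keeps the one with the smallest squared error. The grid size and simulated data are assumptions.

```python
import numpy as np

def hockey_stick_threshold(dose, response):
    """Fit response = b0 + b1 * max(dose - t, 0) over a grid of candidate
    breakpoints t and return the breakpoint with the lowest squared error."""
    best_t, best_sse = None, np.inf
    for t in np.linspace(dose.min(), dose.max(), 200)[1:-1]:
        x = np.maximum(dose - t, 0.0)
        design = np.column_stack([np.ones_like(x), x])
        coeffs, *_ = np.linalg.lstsq(design, response, rcond=None)
        sse = ((response - design @ coeffs) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

rng = np.random.default_rng(7)
dose = np.linspace(0, 10, 80)
resp = np.where(dose > 4.0, 2.0 * (dose - 4.0), 0.0) + rng.normal(0, 0.5, dose.size)
print("estimated threshold:", hockey_stick_threshold(dose, resp))
```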

  15. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed.

  16. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  17. Sensory function assessment. A pilot comparison study of touch pressure threshold with texture and tactile discrimination.

    PubMed

    King, P M

    1997-01-01

    The purpose of this study was to determine if a correlation exists between touch-pressure threshold testing and sensory discrimination function, specifically tactile gnosis for texture and object recognition. Twenty-nine patients diagnosed with carpal tunnel syndrome (CTS), as confirmed by electromyography or nerve conduction velocity tests, were administered three sensibility tests: the Semmes-Weinstein monofilament test, a texture discrimination test, and an object identification test. Norms were established for texture and object recognition tests using 100 subjects (50 females and 50 males) with normal touch-pressure thresholds as assessed by the Semmes-Weinstein monofilament test. The CTS patients were grouped into three categories of sensibility as determined by their performance on the Semmes-Weinstein monofilament test: normal, diminished light touch, and diminished protective sensation. Through an independent t test statistical procedure, each of the three categories' mean response times for identification of textures and objects were compared with the normed response times. Accurate responses were given for identification of all textures and objects. No significant difference (p < .05) was noted in mean response times of the CTS patients with normal touch-pressure thresholds. A significant difference (p < .05) in response times by those CTS patients with diminished light touch was detected in identification of four out of six objects. Subjects with diminished protective sensation had significantly longer response times (p < .05) for identification of the textures of cork, coarse and fine sandpaper, and rubber. Significantly longer response times were recorded by the same subjects for identification of such objects as a screw and a button, and for the shapes of a square, triangle, and oval.

  18. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  19. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading free-space optical channel.

  20. Optimal threshold estimation for binary classifiers using game theory.

    PubMed

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
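
    The proposed operating point, where the ROC curve crosses the descending diagonal (sensitivity equals specificity), can be located directly from classifier scores; a minimal sketch with simulated scores follows. The score distributions are assumptions for illustration only.

```python
import numpy as np

def minimax_threshold(scores, labels):
    """Return the score cutoff where sensitivity is closest to specificity
    (the intersection of the ROC curve with the descending diagonal)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    best, best_gap = None, np.inf
    for c in np.unique(scores):
        pred = scores >= c
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        if abs(sens - spec) < best_gap:
            best, best_gap = c, abs(sens - spec)
    return best

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, 200)     # hypothetical scores for true positives
neg = rng.normal(0.0, 1.0, 800)     # hypothetical scores for true negatives
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(200, bool), np.zeros(800, bool)])
print("minimax cutoff:", minimax_threshold(scores, labels))
```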

  1. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data is available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|^2 of change-in-velocity vectors (ΔV), which is computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order statistics filters, which is within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|^2 data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceeds a constant times the estimated variance.
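
    A minimal sketch of the median-filter idea described above follows: a running median of |ΔV|^2 serves as a robust proxy for the local noise variance, and epochs exceeding a constant multiple of it are flagged. The window length, multiplier, and simulated data are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def detect_maneuvers(dv_squared, window=11, k=10.0):
    """Flag epochs whose |dV|^2 exceeds k times a robust variance estimate.
    The running median of |dV|^2 is practically insensitive to maneuver
    outliers, so it serves as the local noise-level estimate."""
    dv_squared = np.asarray(dv_squared, float)
    robust_level = medfilt(dv_squared, kernel_size=window)
    return dv_squared > k * robust_level

rng = np.random.default_rng(2)
dv2 = rng.chisquare(3, size=300) * 1e-8     # hypothetical TLE-derived |dV|^2 noise
dv2[[80, 200]] += 5e-6                      # two injected maneuvers
print(np.nonzero(detect_maneuvers(dv2))[0])
```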

  2. Inclusion of Exercise Intensities Above the Lactate Threshold in VO2/Running Speed Regression Does not Improve the Precision of Accumulated Oxygen Deficit Estimation in Endurance-Trained Runners

    PubMed Central

    Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.

    2005-01-01

    The present study intended to verify if the inclusion of intensities above lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during a treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a sub maximal treadmill running test followed by an exhaustive supra maximal test 48h later. The total energy demand (TED) and the AOD during the supra maximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available sub maximal VO2 measurements and a sub threshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD comparatively to the use of CR. Key Points It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However data on the precision of those AOD measurements is rarely provided. We have evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results have indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of sub threshold regressions may induce an underestimation of AOD comparatively to the use of complete regressions. PMID:24501560
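
    The underlying AOD computation extrapolates the submaximal VO2/running-speed regression to the supramaximal speed and subtracts the accumulated measured oxygen uptake; a rough numeric sketch with made-up stage data follows. All numbers below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical submaximal stages: running speed (km/h) and steady-state VO2 (ml/kg/min).
speed = np.array([10, 12, 14, 16, 18], float)
vo2   = np.array([35, 42, 49, 56, 63], float)
slope, intercept = np.polyfit(speed, vo2, 1)            # VO2/running-speed regression (RSR)

supra_speed = 22.0                                       # supramaximal test speed (km/h)
vo2_measured = np.array([20, 45, 58, 64, 67, 68, 69], float)  # sampled every 30 s for 3 min

demand = slope * supra_speed + intercept                 # estimated demand (ml/kg/min)
ted = demand * 3.0                                       # total energy demand over 3 min (ml/kg)
consumed = np.sum((vo2_measured[:-1] + vo2_measured[1:]) / 2) * 0.5  # trapezoid rule (ml/kg)
aod = ted - consumed                                     # accumulated oxygen deficit (ml/kg)
print(f"demand {demand:.1f} ml/kg/min, TED {ted:.0f} ml/kg, AOD {aod:.0f} ml/kg")
```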

  3. Representation of Vegetation and Other Nonerodible Elements in Aeolian Shear Stress Partitioning Models for Predicting Transport Threshold

    NASA Technical Reports Server (NTRS)

    King, James; Nickling, William G.; Gillies, John A.

    2005-01-01

    The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.

  4. Self-Monitoring of Listening Abilities in Normal-Hearing Children, Normal-Hearing Adults, and Children with Cochlear Implants

    PubMed Central

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Background Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one’s speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech spectrum noise were compared to their objective performance on the same task. Study Sample Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners’ threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task. Results Nearly half of the children with normal hearing significantly overestimated their listening in noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (age 7–12 yr) tended to overestimate their listening ability more than the adolescents and adults. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (as compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence. Conclusions Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings. PMID:22436118

  5. Three Dimensional Constraint Effects on the Estimated ΔCTOD during the Numerical Simulation of Different Fatigue Threshold Testing Techniques

    NASA Technical Reports Server (NTRS)

    Seshadri, Banavara R.; Smith, Stephen W.

    2007-01-01

    Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study their effect on estimated ΔCTOD. During each simulation, the ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures and its implication in the estimation of fatigue crack growth threshold values is discussed.

  6. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  7. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  8. Identifying insects with incomplete DNA barcode libraries, African fruit flies (Diptera: Tephritidae) as a test case.

    PubMed

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
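
    The ad hoc threshold step can be sketched as a linear regression of observed relative identification error on candidate distance thresholds, inverted at the target error rate; the calibration values below are hypothetical and stand in for the error rates measured on a reference library.

```python
import numpy as np

# Hypothetical calibration: candidate distance thresholds and the relative
# identification error observed in the reference library at each candidate.
thresholds = np.array([0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
rel_error  = np.array([0.012, 0.028, 0.041, 0.055, 0.071, 0.083])

slope, intercept = np.polyfit(thresholds, rel_error, 1)   # simple linear regression

target_error = 0.05
ad_hoc_threshold = (target_error - intercept) / slope
print(f"ad hoc distance threshold for ~5% error: {ad_hoc_threshold:.4f}")

# Queries whose distance to the best match exceeds this threshold would be
# discarded (flagged for alternative identification) rather than identified.
```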

  9. Identifying Insects with Incomplete DNA Barcode Libraries, African Fruit Flies (Diptera: Tephritidae) as a Test Case

    PubMed Central

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600

  10. Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.

    PubMed

    Gilmour, Heather

    2016-11-16

    Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear if GAD is uniquely associated with the risk of suicide, or if psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older. As well, the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. For people with threshold GAD, past 12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2% respectively). In multivariate models that controlled sociodemographic factors, the odds of past 12-month suicide ideation among people with either past 12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled, associations between threshold and subthreshold GAD and suicidal ideation were attenuated, but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.

  11. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
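
    A simplified sketch of an adaptive background model with automatic thresholding, in the spirit of the approach described above but not the authors' Matlab implementation, follows; the running-average update and the mean-plus-two-sigma threshold are assumptions made for illustration.

```python
import numpy as np

class AdaptiveBackground:
    """Running-average background model with a simple automatic threshold.
    A rough sketch of the idea described above, not the paper's system."""
    def __init__(self, first_frame, alpha=0.02):
        self.bg = first_frame.astype(float)
        self.alpha = alpha

    def apply(self, frame):
        diff = np.abs(frame.astype(float) - self.bg)
        thr = diff.mean() + 2.0 * diff.std()          # automatic threshold per frame
        foreground = diff > thr
        # Update the background only where no motion was detected.
        self.bg[~foreground] = ((1 - self.alpha) * self.bg[~foreground]
                                + self.alpha * frame[~foreground])
        return foreground

rng = np.random.default_rng(3)
frame0 = rng.integers(0, 50, size=(120, 160)).astype(np.uint8)
model = AdaptiveBackground(frame0)
frame1 = frame0.copy()
frame1[40:80, 60:90] = 220                             # a "person" enters the scene
print(model.apply(frame1).sum(), "foreground pixels")
```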

  12. Loop Gain Predicts the Response to Upper Airway Surgery in Patients With Obstructive Sleep Apnea.

    PubMed

    Joosten, Simon A; Leong, Paul; Landry, Shane A; Sands, Scott A; Terrill, Philip I; Mann, Dwayne; Turton, Anthony; Rangaswamy, Jhanavi; Andara, Christopher; Burgess, Glen; Mansfield, Darren; Hamilton, Garun S; Edwards, Bradley A

    2017-07-01

    Upper airway surgery is often recommended to treat patients with obstructive sleep apnea (OSA) who cannot tolerate continuous positive airways pressure. However, the response to surgery is variable, potentially because it does not improve the nonanatomical factors (ie, loop gain [LG] and arousal threshold) causing OSA. Measuring these traits clinically might predict responses to surgery. Our primary objective was to test the value of LG and arousal threshold to predict surgical success defined as 50% reduction in apnea-hypopnea index (AHI) and AHI <10 events/hour post surgery. We retrospectively analyzed data from patients who underwent upper airway surgery for OSA (n = 46). Clinical estimates of LG and arousal threshold were calculated from routine polysomnographic recordings presurgery and postsurgery (median of 124 [91-170] days follow-up). Surgery reduced both the AHI (39.1 ± 4.2 vs. 26.5 ± 3.6 events/hour; p < .005) and estimated arousal threshold (-14.8 [-22.9 to -10.2] vs. -9.4 [-14.5 to -6.0] cmH2O) but did not alter LG (0.45 ± 0.08 vs. 0.45 ± 0.12; p = .278). Responders to surgery had a lower baseline LG (0.38 ± 0.02 vs. 0.48 ± 0.01, p < .05) and were younger (31.0 [27.3-42.5] vs. 43.0 [33.0-55.3] years, p < .05) than nonresponders. Lower LG remained a significant predictor of surgical success after controlling for covariates (logistic regression p = .018; receiver operating characteristic area under curve = 0.80). Our study provides proof-of-principle that upper airway surgery most effectively resolves OSA in patients with lower LG. Predicting the failure of surgical treatment, consequent to less stable ventilatory control (elevated LG), can be achieved in the clinic and may facilitate avoidance of surgical failures. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  13. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
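
    The two-step correction described above can be sketched as a per-pixel gain threshold followed by a cluster-mass threshold on contiguous surviving pixels. In practice both cutoffs would be derived from a null (chance) distribution, for example from spike-shuffled STAs; the values and the simulated STA below are arbitrary placeholders.

```python
import numpy as np
from scipy import ndimage

def cluster_mass_threshold(strf, gain_thr, mass_thr):
    """Zero pixels below a gain threshold, then drop contiguous clusters
    whose summed absolute gain falls below a cluster-mass threshold."""
    kept = np.where(np.abs(strf) >= gain_thr, strf, 0.0)
    labels, n = ndimage.label(kept != 0)
    out = np.zeros_like(strf)
    for i in range(1, n + 1):
        mask = labels == i
        if np.abs(kept[mask]).sum() >= mass_thr:
            out[mask] = kept[mask]
    return out

rng = np.random.default_rng(4)
sta = rng.normal(0, 0.1, size=(30, 40))     # hypothetical spike-triggered average
sta[10:14, 5:12] += 0.8                     # an excitatory subfield
print(np.count_nonzero(cluster_mass_threshold(sta, gain_thr=0.3, mass_thr=2.0)))
```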

  14. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3^4, 6) = 0.43437077..., compared with Parviainen’s numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.

  15. Dependence of cavitation, chemical effect, and mechanical effect thresholds on ultrasonic frequency.

    PubMed

    Thanh Nguyen, Tam; Asakura, Yoshiyuki; Koda, Shinobu; Yasuda, Keiji

    2017-11-01

    Cavitation, chemical effect, and mechanical effect thresholds were investigated over a wide frequency range from 22 to 4880 kHz. Each threshold was measured in terms of sound pressure at the fundamental frequency. Broadband noise emitted from acoustic cavitation bubbles was detected by a hydrophone to determine the cavitation threshold. Potassium iodide oxidation caused by acoustic cavitation was used to quantify the chemical effect threshold. The ultrasonic erosion of aluminum foil was conducted to estimate the mechanical effect threshold. The cavitation, chemical effect, and mechanical effect thresholds increased with increasing frequency. The chemical effect threshold was close to the cavitation threshold for all frequencies. At low frequencies below 98 kHz, the mechanical effect threshold was nearly equal to the cavitation threshold. However, the mechanical effect threshold was much higher than the cavitation threshold at high frequency. In addition, the thresholds of the second harmonic and the first ultraharmonic signals were measured to detect bubble occurrence. The threshold of the second harmonic approximated the cavitation threshold below 1000 kHz. On the other hand, the threshold of the first ultraharmonic was higher than the cavitation threshold below 98 kHz and near the cavitation threshold at high frequency. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  17. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure.

    PubMed

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-08-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.
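
    The anchored calculation fixes the 2012 SPM threshold and carries it back in time using only a price index before comparing it with post-tax/post-transfer resources; the sketch below shows the mechanics with entirely hypothetical resource distributions, threshold, and CPI values, not the study's data.

```python
import numpy as np

def anchored_poverty_rate(resources_by_year, threshold_2012, cpi_by_year, cpi_2012):
    """Anchor the 2012 threshold, adjust it back in time only for prices,
    and compute the share of units with resources below that threshold."""
    rates = {}
    for year, resources in resources_by_year.items():
        threshold = threshold_2012 * cpi_by_year[year] / cpi_2012
        rates[year] = float(np.mean(np.asarray(resources) < threshold))
    return rates

# Entirely hypothetical resource distributions, threshold, and CPI values.
rng = np.random.default_rng(5)
resources = {1967: rng.lognormal(8.85, 0.8, 5000), 2012: rng.lognormal(10.9, 0.8, 5000)}
cpi = {1967: 33.4, 2012: 229.6}
print(anchored_poverty_rate(resources, threshold_2012=25000,
                            cpi_by_year=cpi, cpi_2012=229.6))
```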

  18. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure

    PubMed Central

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-01-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau’s recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families’ expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM’s 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a posttax/posttransfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time. PMID:27352076

  19. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are separated from the background; usually, these are fixed to static values chosen from developer experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is derived to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
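    The cross-zero-point interpolation proposed in the paper is not reproduced here; a common stand-in for automatic Canny thresholds is to derive both values from the median image intensity, sketched below (OpenCV is assumed to be available, and the 0.33 spread is a conventional heuristic rather than the authors' method):

    ```python
    import cv2
    import numpy as np

    def auto_canny(image_gray, sigma=0.33):
        """Canny edge detection with thresholds derived from the median intensity.

        A simple median-based heuristic, not the cross-zero-point interpolation
        proposed in the paper.
        """
        v = float(np.median(image_gray))
        lower = int(max(0, (1.0 - sigma) * v))
        upper = int(min(255, (1.0 + sigma) * v))
        return cv2.Canny(image_gray, lower, upper)

    # Usage (assumes a file named "scene.png" exists):
    # gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    # edges = auto_canny(gray)
    ```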

  20. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (another frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and is accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
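    Of the techniques listed, the nonparametric bootstrap is the easiest to illustrate: refit the psychometric function to resampled trials and read off the spread of the refitted thresholds. A minimal sketch assuming a logistic psychometric function and synthetic yes/no data (all levels, trial counts, and parameter values are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(level, threshold, slope):
        """Logistic psychometric function: probability of detection at a signal level."""
        return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

    def fit_threshold(levels, responses):
        """Fit the logistic and return its 50% point (the threshold)."""
        popt, _ = curve_fit(psychometric, levels, responses,
                            p0=[np.mean(levels), 1.0], maxfev=5000)
        return popt[0]

    rng = np.random.default_rng(0)
    levels = np.repeat(np.arange(40.0, 61.0, 5.0), 20)      # signal levels, 20 trials each
    responses = rng.binomial(1, psychometric(levels, 50.0, 0.5)).astype(float)

    point_estimate = fit_threshold(levels, responses)
    boot = []
    for _ in range(500):
        idx = rng.integers(0, levels.size, levels.size)     # resample trials with replacement
        try:
            boot.append(fit_threshold(levels[idx], responses[idx]))
        except RuntimeError:                                # skip rare non-converging resamples
            continue
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"threshold {point_estimate:.1f}, 95% bootstrap CI [{lo:.1f}, {hi:.1f}]")
    ```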

  1. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a combined algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames with weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame followed by threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and the detection decision. Weak tracks of space objects were modeled on real frames of the night sky obtained with a stationary telescope. It is shown that the limiting magnitude of the optoelectronic device is improved by almost 2 magnitudes.

  2. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Design Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. Study Sample The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Results Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing aid benefit from those measured in the standard environment. Conclusions The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests. PMID:25853616

  3. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.

  4. A novel threshold criterion in transcranial motor evoked potentials during surgery for gliomas close to the motor pathway.

    PubMed

    Abboud, Tammam; Schaper, Miriam; Dührsen, Lasse; Schwarz, Cindy; Schmidt, Nils Ole; Westphal, Manfred; Martens, Tobias

    2016-10-01

    OBJECTIVE Warning criteria for monitoring of motor evoked potentials (MEP) after direct cortical stimulation during surgery for supratentorial tumors have been well described. However, little is known about the value of MEP after transcranial electrical stimulation (TES) in predicting postoperative motor deficit when monitoring threshold level. The authors aimed to evaluate the feasibility and value of this method in glioma surgery by using a new approach for interpreting changes in threshold level involving contra- and ipsilateral MEP. METHODS Between November 2013 and December 2014, 93 patients underwent TES-MEP monitoring during resection of gliomas located close to central motor pathways but not involving the primary motor cortex. The MEP were elicited by transcranial repetitive anodal train stimulation. Bilateral MEP were continuously evaluated to assess percentage increase of threshold level (minimum voltage needed to evoke a stable motor response from each of the muscles being monitored) from the baseline set before dural opening. An increase in threshold level on the contralateral side (facial, arm, or leg muscles contralateral to the affected hemisphere) of more than 20% beyond the percentage increase on the ipsilateral side (facial, arm, or leg muscles ipsilateral to the affected hemisphere) was considered a significant alteration. Recorded alterations were subsequently correlated with postoperative neurological deterioration and MRI findings. RESULTS TES-MEP could be elicited in all patients, including those with recurrent glioma (31 patients) and preoperative paresis (20 patients). Five of 73 patients without preoperative paresis showed a significant increase in threshold level, and all of them developed new paresis postoperatively (transient in 4 patients and permanent in 1 patient). Eight of 20 patients with preoperative paresis showed a significant increase in threshold level, and all of them developed postoperative neurological deterioration (transient in 4 patients and permanent in 4 patients). In 80 patients no significant change in threshold level was detected, and none of them showed postoperative neurological deterioration. The specificity and sensitivity in this series were estimated at 100%. Postoperative MRI revealed gross-total tumor resection in 56 of 82 patients (68%) in whom complete tumor resection was attainable; territorial ischemia was detected in 4 patients. CONCLUSIONS The novel threshold criterion has made TES-MEP a useful method for predicting postoperative motor deficit in patients who undergo glioma surgery, and has been feasible in patients with preoperative paresis as well as in patients with recurrent glioma. Including contra- and ipsilateral changes in threshold level has led to a high sensitivity and specificity.
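    The warning criterion described above, a rise in the contralateral threshold level more than 20 percentage points beyond the ipsilateral rise, can be expressed as a small check. A minimal sketch with hypothetical stimulation voltages (not the authors' monitoring software):

    ```python
    def percent_increase(baseline, current):
        """Percentage increase of a stimulation threshold over its baseline."""
        return 100.0 * (current - baseline) / baseline

    def significant_alteration(contra_base, contra_now, ipsi_base, ipsi_now, margin=20.0):
        """Flag an alteration when the contralateral threshold rises more than
        `margin` percentage points beyond the ipsilateral rise, per the criterion
        described in the abstract."""
        contra_rise = percent_increase(contra_base, contra_now)
        ipsi_rise = percent_increase(ipsi_base, ipsi_now)
        return (contra_rise - ipsi_rise) > margin

    # Hypothetical threshold voltages (V): contralateral rises 35%, ipsilateral 10%
    print(significant_alteration(100.0, 135.0, 110.0, 121.0))  # True -> warn the surgeon
    ```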

  5. ESTIMATING THE BENEFIT OF TRMM TROPICAL CYCLONE DATA IN SAVING LIVES

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.

    2005-01-01

    The Tropical Rainfall Measuring Mission (TRMM) is a joint NASA/JAXA research mission launched in late 1997 to improve our knowledge of tropical rainfall processes and climatology (Kummerow et al., 2000; Adler et al., 2003). In addition to being a highly successful research mission, its data are available in real time, and operational weather agencies in the U.S. and internationally are using TRMM data and images to monitor and forecast hazardous weather (tropical cyclones, floods, etc.). For example, in 2004 TRMM data were used 669 times for determining tropical cyclone location fixes (National Research Council, 2004). TRMM flies at a relatively low altitude, 400 km, and requires orbit adjustment maneuvers to maintain altitude against the small drag of the atmosphere. Enough fuel for these maneuvers remains on TRMM for the satellite to continue flying until 2011-12. However, most of the remaining fuel may be used to perform a controlled re-entry of the satellite into the Pacific Ocean. The fuel threshold for this operation will be reached in the summer of 2005, although the maneuver would actually occur in late 2006 or 2007. The full science mission would end in 2005 under the controlled re-entry option. This re-entry option is related to the estimated probability of injury (1/5,000) that might occur during an uncontrolled re-entry of the satellite. If the estimated probability of injury exceeds 1/10,000, a satellite is a candidate for a possible controlled re-entry. In the TRMM case the NASA Safety Office examined the related issues and concluded that, although TRMM exceeded the formal threshold, the use of TRMM data in the monitoring and forecasting of hazardous weather gave a public safety benefit that compensated for TRMM slightly exceeding the orbital debris threshold (Martin, 2002). This conclusion was based in part on the results of an independent panel, convened during a workshop on the benefits of TRMM data, which concluded that the benefit of TRMM data in saving lives through its use in operational forecasting could not be quantified. The objective of this paper is to describe a possible technique to estimate the number of lives saved per year and apply it to the TRMM case and the use of its data in monitoring and forecasting tropical cyclones.

  6. Comparisons of two moments‐based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    USGS Publications Warehouse

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-01-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  7. Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    NASA Astrophysics Data System (ADS)

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-09-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  8. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams

    USGS Publications Warehouse

    Black, R.W.; Moran, P.W.; Frankforter, J.D.

    2011-01-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. © 2010 The Author(s).

  9. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams.

    PubMed

    Black, Robert W; Moran, Patrick W; Frankforter, Jill D

    2011-04-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.

  10. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for the success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √(i/p (1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.

  11. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √(i/p (1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365
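    The HC objective as written can be computed directly from the sorted P-values; the threshold is then the absolute Z-score of the maximizing P-value. A minimal sketch on synthetic Z-scores (restricting the search to the smallest half of the P-values is a common convention and an assumption here, not a detail taken from the abstract):

    ```python
    import numpy as np
    from scipy.stats import norm

    def hc_threshold(z_scores, alpha0=0.5):
        """Higher-criticism threshold from feature Z-scores.

        Computes two-sided P-values, sorts them, evaluates the HC objective
        (i/p - p_(i)) / sqrt(i/p * (1 - i/p)), and returns |Z| at the maximizer,
        searched over the smallest alpha0 fraction of P-values.
        """
        z = np.asarray(z_scores, dtype=float)
        p = z.size
        pvals = 2.0 * norm.sf(np.abs(z))                 # two-sided P-values
        order = np.argsort(pvals)
        p_sorted = pvals[order]
        frac = np.arange(1, p + 1) / p
        hc = (frac - p_sorted) / np.sqrt(frac * (1.0 - frac))
        search = slice(0, max(1, int(alpha0 * p) - 1))   # avoid i = p (zero denominator)
        k = int(np.argmax(hc[search]))
        return float(np.abs(z[order][k]))

    rng = np.random.default_rng(1)
    z = rng.normal(size=1000)
    z[:20] += 3.0                                        # a few weak useful features
    t = hc_threshold(z)
    print(f"HC threshold = {t:.2f}; features kept = {(np.abs(z) >= t).sum()}")
    ```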

  12. Reliability and validity of a short form household food security scale in a Caribbean community.

    PubMed

    Gulliford, Martin C; Mahabir, Deepak; Rocke, Brian

    2004-06-16

    We evaluated the reliability and validity of the short form household food security scale in a different setting from the one in which it was developed. The scale was interview-administered to 531 subjects from 286 households in north central Trinidad in Trinidad and Tobago, West Indies. We evaluated the six items by fitting item response theory models to estimate item thresholds, estimating agreement among respondents in the same households and estimating the slope index of income-related inequality (SII) after adjusting for age, sex and ethnicity. Item-score correlations ranged from 0.52 to 0.79 and Cronbach's alpha was 0.87. Item responses gave within-household correlation coefficients ranging from 0.70 to 0.78. Estimated item thresholds (standard errors) from the Rasch model ranged from -2.027 (0.063) for the 'balanced meal' item to 2.251 (0.116) for the 'hungry' item. The 'balanced meal' item had the lowest threshold in each ethnic group even though there was evidence of differential functioning for this item by ethnicity. Relative thresholds of other items were generally consistent with US data. Estimation of the SII, comparing those at the bottom with those at the top of the income scale, gave relative odds for an affirmative response of 3.77 (95% confidence interval 1.40 to 10.2) for the lowest severity item, and 20.8 (2.67 to 162.5) for the highest severity item. Food insecurity was associated with reduced consumption of green vegetables after additionally adjusting for income and education (0.52, 95% confidence interval 0.28 to 0.96). The household food security scale gives reliable and valid responses in this setting. Differing relative item thresholds compared with US data do not require alteration to the cut-points for classification of 'food insecurity without hunger' or 'food insecurity with hunger'. The data provide further evidence that re-evaluation of the 'balanced meal' item is required.

  13. Complex Variation in Measures of General Intelligence and Cognitive Change

    PubMed Central

    Rowe, Suzanne J.; Rowlatt, Amy; Davies, Gail; Harris, Sarah E.; Porteous, David J.; Liewald, David C.; McNeill, Geraldine; Starr, John M.

    2013-01-01

    Combining information from multiple SNPs may capture a greater amount of genetic variation than the sum of individual SNP effects and help identify missing heritability. Regions may capture variation from multiple common variants of small effect, multiple rare variants or a combination of both. We describe regional heritability mapping of human cognition. Measures of crystallised (gc) and fluid intelligence (gf) in late adulthood (64–79 years) were available for 1806 individuals genotyped for 549,692 autosomal single nucleotide polymorphisms (SNPs). The same individuals were tested at age 11, affording the rare opportunity to measure cognitive change across most of their lifespan. 547,750 SNPs ranked by position are divided into 10,908 overlapping regions of 101 SNPs to estimate the genetic variance each region explains, an approach that resembles classical linkage methods. We also estimate the genetic variation explained by individual autosomes and by SNPs within genes. Empirical significance thresholds are estimated separately for each trait from whole genome scans of 500 permuted data sets. The 5% significance threshold for the likelihood ratio test of a single region ranged from 17 to 17.5 for the three traits. This is equivalent to a nominal significance of P < 1.44 × 10⁻⁵ under the expectation of a chi-squared distribution (a mixture between 1 df and 0 df). These thresholds indicate that the distribution of the likelihood ratio test from this type of variance component analysis should be estimated empirically. Furthermore, we show that the variation explained by these regions can be grossly overestimated. After applying permutation thresholds, a region for gf on chromosome 5 spanning the PRRC1 gene is significant at a genome-wide 10% empirical threshold. Analysis of gene methylation on the temporal cortex provides support for the association of PRRC1 and fluid intelligence (P = 0.004), and provides a prime candidate gene for high throughput sequencing of these uniquely informative cohorts. PMID:24349040
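    The empirical significance thresholds described, taken from whole-genome scans of permuted data sets, follow the usual recipe: permute the phenotype, record the maximum test statistic of each permuted scan, and read off the 95th percentile of those maxima. A minimal sketch with a generic per-region statistic (the variance-component likelihood ratio test itself is not reproduced; the toy statistic below is an assumption for illustration):

    ```python
    import numpy as np

    def empirical_threshold(phenotype, region_stat, n_regions, n_perm=500,
                            quantile=95, seed=0):
        """Genome-wide empirical significance threshold via permutation.

        `region_stat(phenotype, region_index)` returns the test statistic for one
        region; permuting the phenotype breaks genotype-phenotype association while
        preserving the correlation structure among regions.
        """
        rng = np.random.default_rng(seed)
        max_stats = np.empty(n_perm)
        for b in range(n_perm):
            perm = rng.permutation(phenotype)
            max_stats[b] = max(region_stat(perm, r) for r in range(n_regions))
        return float(np.percentile(max_stats, quantile))

    # Toy stand-in for the regional test: n * squared correlation with a random region score
    rng = np.random.default_rng(1)
    scores = rng.normal(size=(50, 200))                  # 50 regions x 200 individuals
    pheno = rng.normal(size=200)
    stat = lambda y, r: 200 * np.corrcoef(scores[r], y)[0, 1] ** 2
    print(empirical_threshold(pheno, stat, n_regions=50, n_perm=200))
    ```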

  14. Discrimination of enclosed images by weighted storage in an optical associative memory

    NASA Astrophysics Data System (ADS)

    Duelli, M.; Cudney, R. S.; Günter, P.

    1996-02-01

    We present an all-optical associative memory that can distinguish objects that are enclosed by or strongly overlap other objects. This is done by appropriately weighting the exposure of the stored images during recording. The images to be recalled associatively are stored in a photorefractive LiNbO3 crystal via angular multiplexing. Thresholding of the reconstructed reference beams during associative readout is achieved by using a saturable absorber with an intensity-tunable threshold.

  15. Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

    2008-03-01

    Breast density is an independent risk factor for breast cancer. In mammograms, breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.

  16. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  17. Estimating Limit Reference Points for Western Pacific Leatherback Turtles (Dermochelys coriacea) in the U.S. West Coast EEZ

    PubMed Central

    Curtis, K. Alexandra; Moore, Jeffrey E.; Benson, Scott R.

    2015-01-01

    Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the “jeopardy” standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide international management frameworks. Given the range of abundance data scenarios addressed, LRPs should be estimable for many other areas, populations, and taxa. PMID:26368557

  18. Estimating Limit Reference Points for Western Pacific Leatherback Turtles (Dermochelys coriacea) in the U.S. West Coast EEZ.

    PubMed

    Curtis, K Alexandra; Moore, Jeffrey E; Benson, Scott R

    2015-01-01

    Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the "jeopardy" standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide international management frameworks. Given the range of abundance data scenarios addressed, LRPs should be estimable for many other areas, populations, and taxa.
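    The Potential Biological Removal estimator mentioned in both records is conventionally the product of a conservative abundance estimate, half the maximum net productivity rate, and a recovery factor. A minimal sketch of that standard form (the input numbers are illustrative, not the leatherback values estimated in the paper):

    ```python
    def potential_biological_removal(n_min, r_max, recovery_factor):
        """Standard PBR form: N_min * 0.5 * R_max * F_r.

        n_min: conservative (e.g., 20th percentile) local abundance estimate
        r_max: maximum net productivity rate of the population
        recovery_factor: tuning parameter in (0, 1] set to satisfy risk tolerances
        """
        return n_min * 0.5 * r_max * recovery_factor

    # Illustrative inputs only
    print(potential_biological_removal(n_min=120, r_max=0.04, recovery_factor=0.3))
    ```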

  19. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images and degrades their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first fold reduces speckle effectively but also blurs the object of interest. The second fold therefore restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
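    The first fold, block-based soft thresholding of wavelet coefficients, can be sketched with PyWavelets; the block-wise universal threshold used below is a generic choice and the adaptive fusion second fold is not reproduced, so this is only an approximation of the flavor of the method:

    ```python
    import numpy as np
    import pywt

    def block_soft_threshold(band, block=16):
        """Soft-threshold a wavelet detail subband in non-overlapping blocks.

        Each block gets its own universal threshold sigma * sqrt(2 * log(n_block)),
        with sigma estimated from the block's median absolute deviation.
        """
        out = band.copy()
        rows, cols = band.shape
        for r in range(0, rows, block):
            for c in range(0, cols, block):
                tile = band[r:r + block, c:c + block]
                sigma = np.median(np.abs(tile)) / 0.6745 + 1e-12
                thr = sigma * np.sqrt(2.0 * np.log(tile.size))
                out[r:r + block, c:c + block] = pywt.threshold(tile, thr, mode="soft")
        return out

    def denoise_first_fold(image, wavelet="db4", level=2, block=16):
        """First-fold speckle reduction: block soft thresholding of all detail subbands."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        new_coeffs = [coeffs[0]] + [
            tuple(block_soft_threshold(band, block) for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)

    # Usage with a synthetic speckled image
    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128)); clean[40:90, 40:90] = 1.0
    noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative speckle
    denoised = denoise_first_fold(noisy)
    ```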

  20. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake that seeks to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
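    The shallow-lake dynamics underlying this benchmark are commonly written as X(t+1) = X(t) + a(t) + X(t)^q / (1 + X(t)^q) − b·X(t) + ε(t), with anthropogenic loading a(t), recycling exponent q, removal rate b, and lognormally distributed natural inflows ε(t); reliability is then the fraction of years the lake stays below the critical (unstable) phosphorus level. A minimal simulation sketch under that common formulation (the parameter values and the constant-loading policy are illustrative assumptions, not the study's configuration):

    ```python
    import numpy as np

    def simulate_lake(a, b=0.42, q=2.0, n_years=100, mu=0.03, sigma=0.01, seed=0):
        """Simulate lake phosphorus under a fixed annual loading policy `a`.

        Returns the phosphorus trajectory and the reliability of staying below the
        critical threshold (the unstable equilibrium separating the clear and
        eutrophic states of the natural dynamics).
        """
        rng = np.random.default_rng(seed)
        # Critical threshold: first nonzero root of x^q/(1+x^q) = b*x, found numerically
        xs = np.linspace(0.01, 2.5, 10_000)
        crossings = np.where(np.diff(np.sign(xs**q / (1 + xs**q) - b * xs)) != 0)[0]
        threshold = xs[crossings[0]] if crossings.size else np.inf

        # Lognormal natural inflow with the given real-space mean and std dev
        ln_sigma = np.sqrt(np.log(1 + (sigma / mu) ** 2))
        ln_mu = np.log(mu) - 0.5 * ln_sigma**2

        x = 0.0
        traj = np.empty(n_years)
        for t in range(n_years):
            eps = rng.lognormal(ln_mu, ln_sigma)
            x = x + a[t] + x**q / (1 + x**q) - b * x + eps
            traj[t] = x
        reliability = float(np.mean(traj < threshold))
        return traj, reliability

    policy = np.full(100, 0.05)                 # constant loading (illustrative)
    traj, rel = simulate_lake(policy)
    print(f"reliability of staying below the threshold: {rel:.2f}")
    ```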

  1. Colour thresholding and objective quantification in bioimaging

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (256 levels of intensity, 0-255) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and its value for objective quantification of subtle colour differences between experimental and control samples.
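    Colour thresholding of the kind described, selecting pixels whose hue falls inside a stain-specific window rather than thresholding grey-level intensity alone, can be sketched in a few lines. The hue window, saturation cut-off, and synthetic image below are illustrative assumptions:

    ```python
    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    def colour_threshold_fraction(rgb_image, hue_range, min_saturation=0.2):
        """Fraction of pixels whose hue lies inside `hue_range` (0-1 scale) and whose
        saturation exceeds `min_saturation`: a simple colour-thresholding quantification,
        as opposed to monochrome densitometry."""
        hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
        hue, sat = hsv[..., 0], hsv[..., 1]
        mask = (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat >= min_saturation)
        return float(mask.mean()), mask

    # Synthetic "section": a reddish-brown reaction product on a pale background
    img = np.full((100, 100, 3), (230, 225, 220), dtype=np.uint8)
    img[30:60, 30:60] = (150, 60, 40)                       # stained region
    fraction, mask = colour_threshold_fraction(img, hue_range=(0.0, 0.08))
    print(f"stained area fraction: {fraction:.3f}")
    ```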

  2. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  3. First-principles simulation of the optical response of bulk and thin-film α-quartz irradiated with an ultrashort intense laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kyung-Min; Min Kim, Chul; Moon Jeong, Tae, E-mail: jeongtm@gist.ac.kr

    A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10¹⁰ W/cm² to 2 × 10¹⁵ W/cm² based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.

  4. On the estimation of risk associated with an attenuation prediction

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1992-01-01

    Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
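    The notion of risk used here, the probability of one or more link failures over a lifetime or accounting interval, follows directly from the per-interval exceedance probability when intervals are treated as independent. A minimal worked sketch (the independence assumption and the numbers are illustrative):

    ```python
    def lifetime_risk(p_fail_per_interval, n_intervals):
        """Probability of one or more failures over n independent accounting intervals."""
        return 1.0 - (1.0 - p_fail_per_interval) ** n_intervals

    # e.g. a 0.1% chance per month of attenuation exceeding the threshold, over a 10-year link
    print(f"{lifetime_risk(0.001, 120):.3f}")   # ~0.113
    ```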

  5. On estimating scale invariance in stratocumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Seze, Genevieve; Smith, Leonard A.

    1990-01-01

    Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods have been developed in geophysics to quantify this scaling behavior. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high-resolution, low-noise observations over these scales. These techniques (the area-perimeter relation, the distribution of areas, estimation of the capacity d0 through box counting, and the correlation exponent) are applied to the high-resolution satellite data taken during the FIRE experiment, and initial estimates of the quality of data required are obtained by analyzing simple sets. The results for the observed fields are contrasted with those for images of objects with known characteristics (e.g., dimension), where the details of the constructed images simulate current observational limits. Throughout, when cloud elements and cloud boundaries are mentioned, these should be understood as structures in the radiance field: all the boundaries considered are defined by simple threshold arguments.
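    Of the techniques listed, box counting is the simplest to sketch: cover a threshold-defined binary field with boxes of decreasing size and fit the slope of log(count) against log(1/size) to estimate the capacity d0. The threshold, box sizes, and synthetic field below are illustrative:

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(64, 32, 16, 8, 4, 2)):
        """Estimate the capacity dimension d0 of a binary mask by box counting."""
        counts = []
        for s in sizes:
            n = 0
            for r in range(0, mask.shape[0], s):
                for c in range(0, mask.shape[1], s):
                    if mask[r:r + s, c:c + s].any():
                        n += 1
            counts.append(n)
        # Slope of log(count) vs log(1/size) gives the dimension estimate
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)), np.log(counts), 1)
        return slope

    # Synthetic "cloud" field thresholded at a simple radiance cutoff
    rng = np.random.default_rng(0)
    field = rng.random((256, 256))
    mask = field > 0.7                      # crude threshold-defined cloud boundaries
    print(f"estimated d0 = {box_counting_dimension(mask):.2f}")
    ```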

  6. Energy thresholds of discrete breathers in thermal equilibrium and relaxation processes.

    PubMed

    Ming, Yi; Ling, Dong-Bo; Li, Hui-Min; Ding, Ze-Jun

    2017-06-01

    So far, only the energy thresholds of single discrete breathers in nonlinear Hamiltonian systems have been analytically obtained. In this work, the energy thresholds of discrete breathers in thermal equilibrium and the energy thresholds of long-lived discrete breathers which can remain after a long time relaxation are analytically estimated for nonlinear chains. These energy thresholds are size dependent. The energy thresholds of discrete breathers in thermal equilibrium are the same as the previous analytical results for single discrete breathers. The energy thresholds of long-lived discrete breathers in relaxation processes are different from the previous results for single discrete breathers but agree well with the published numerical results known to us. Because real systems are either in thermal equilibrium or in relaxation processes, the obtained results could be important for experimental detection of discrete breathers.

  7. Mechanisms of breathing instability in patients with obstructive sleep apnea.

    PubMed

    Younes, Magdy; Ostrowski, Michele; Atkar, Raj; Laprairie, John; Siemens, Andrea; Hanly, Patrick

    2007-12-01

    The response to chemical stimuli (chemical responsiveness) and the increases in respiratory drive required for arousal (arousal threshold) and for opening the airway without arousal (effective recruitment threshold) are important determinants of ventilatory instability and, hence, severity of obstructive apnea. We measured these variables in 21 obstructive apnea patients (apnea-hypopnea index 91 ± 24 h⁻¹) while on continuous positive airway pressure. During sleep, pressure was intermittently reduced (dial down) to induce severe hypopneas. Dial downs were done on room air and following approximately 30 s of breathing hypercapneic and/or hypoxic mixtures, which induced a range of ventilatory stimulation before dial down. Ventilation just before dial down and flow during dial down were measured. Chemical responsiveness, estimated as the percent increase in ventilation during the 5th breath following administration of 6% CO₂ combined with approximately 4% desaturation, was large (187 ± 117%). Arousal threshold, estimated as the percent increase in ventilation associated with a 50% probability of arousal, ranged from 40% to >268% and was <120% in 12/21 patients, indicating that in many patients arousal occurs with modest changes in chemical drive. Effective recruitment threshold, estimated as the percent increase in pre-dial-down ventilation associated with a significant increase in dial-down flow, ranged from zero to >174% and was <110% in 12/21 patients, indicating that in many patients reflex dilatation occurs with modest increases in drive. The two thresholds were not correlated. In most OSA patients, airway patency may be maintained with only modest increases in chemical drive, but instability results because of a low arousal threshold and a brisk increase in drive following brief reduction in alveolar ventilation.

  8. Cost-effectiveness analysis of lapatinib in HER-2-positive advanced breast cancer.

    PubMed

    Le, Quang A; Hay, Joel W

    2009-02-01

    A recent clinical trial demonstrated that the addition of lapatinib to capecitabine in the treatment of HER-2-positive advanced breast cancer (ABC) significantly increases median time to progression. The objective of the current analysis was to assess the cost-effectiveness of this therapy from the US societal perspective. A Markov model comprising 4 health states (stable disease, response to therapy, disease progression, and death) was developed to estimate the projected lifetime clinical and economic implications of this therapy. The model used Monte Carlo simulation to imitate the clinical course of a typical patient with ABC and was updated with response rates and major adverse effects. Transition probabilities were estimated based on the results from the EGF100151 and EGF20002 clinical trials of lapatinib. Health state utilities, direct and indirect costs of the therapy, major adverse events, laboratory tests, and costs of disease progression were obtained from published sources. The model used a 3% discount rate and is reported in 2007 US dollars. Over a lifetime, the addition of lapatinib to capecitabine as combination therapy was estimated to cost an additional $19,630, with an expected gain of 0.12 quality-adjusted life years (QALY), or an incremental cost-effectiveness ratio (ICER) of $166,113 per QALY gained. The 95% confidence limits of the ICER ranged from $158,000 to $215,000/QALY. A cost-effectiveness acceptability curve indicated less than 1% probability that the ICER would be lower than $100,000/QALY. Compared with commonly accepted willingness-to-pay thresholds in oncology treatment, the addition of lapatinib to capecitabine is not clearly cost-effective and is most likely to result in an ICER somewhat higher than the societal willingness-to-pay threshold limits. © 2008 American Cancer Society.
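    The incremental cost-effectiveness ratio reported here is simply the incremental cost divided by the incremental QALYs; with the rounded figures quoted in the abstract, 19,630 / 0.12 is roughly $164,000 per QALY, consistent with the reported $166,113 once unrounded model outputs are used. A one-line sketch:

    ```python
    def icer(delta_cost, delta_qaly):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        return delta_cost / delta_qaly

    print(f"${icer(19_630, 0.12):,.0f} per QALY")   # ~$163,583 with the rounded inputs
    ```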

  9. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

    We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

  10. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized threshold value from the literature (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
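    The NDSI itself is the normalized difference of green and shortwave-infrared reflectance, and the snow-covered area follows from counting pixels above the chosen threshold. A minimal sketch (the band values are synthetic and band designations differ across Landsat sensors):

    ```python
    import numpy as np

    def ndsi_snow_mask(green, swir, threshold=0.4):
        """NDSI = (green - swir) / (green + swir); pixels above `threshold` count as snow."""
        green = np.asarray(green, dtype=float)
        swir = np.asarray(swir, dtype=float)
        ndsi = (green - swir) / (green + swir + 1e-12)
        return ndsi, ndsi > threshold

    # Illustrative reflectance values: snow is bright in green, dark in SWIR
    green = np.array([[0.8, 0.8], [0.2, 0.3]])
    swir = np.array([[0.1, 0.2], [0.2, 0.3]])
    ndsi, mask = ndsi_snow_mask(green, swir, threshold=0.4)
    print(f"snow-covered area: {100 * mask.mean():.0f}%")   # 50% with these values
    ```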

  11. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation

    PubMed Central

    Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.

    2013-01-01

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479

  12. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation.

    PubMed

    Batt, Ryan D; Carpenter, Stephen R; Cole, Jonathan J; Pace, Michael L; Johnson, Robert A

    2013-10-22

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems.

  13. Reasoning in psychosis: risky but not necessarily hasty.

    PubMed

    Moritz, Steffen; Scheu, Florian; Andreou, Christina; Pfueller, Ute; Weisbrod, Matthias; Roesch-Ely, Daniela

    2016-01-01

    A liberal acceptance (LA) threshold for hypotheses has been put forward to explain the well-replicated "jumping to conclusions" (JTC) bias in psychosis, particularly in patients with paranoid symptoms. According to this account, schizophrenia patients rest their decisions on lower subjective probability estimates. The initial formulation of the LA account also predicts an absence of the JTC bias under high task ambiguity (i.e., if more than one response option surpasses the subjective acceptance threshold). Schizophrenia patients (n = 62) with current or former delusions and healthy controls (n = 30) were compared on six scenarios of a variant of the beads task paradigm. Decision-making was assessed under low and high task ambiguity. Along with decision judgments (optional), participants were required to provide probability estimates for each option in order to determine decision thresholds (i.e., the probability the individual deems sufficient for a decision). In line with the LA account, schizophrenia patients showed a lowered decision threshold compared to controls (82% vs. 93%), which predicted both more errors and fewer draws to decisions. Group differences in thresholds were comparable across conditions. At the same time, patients did not show hasty decision-making, reflecting overall lowered probability estimates in patients. Results confirm core predictions derived from the LA account. Our results may (partly) explain why hasty decision-making is sometimes aggravated and sometimes abolished in psychosis. The proneness to make risky decisions may contribute to the pathogenesis of psychosis. A revised LA account is put forward.

  14. Monitoring of Tumor Growth with [(18)F]-FET PET in a Mouse Model of Glioblastoma: SUV Measurements and Volumetric Approaches.

    PubMed

    Holzgreve, Adrien; Brendel, Matthias; Gu, Song; Carlsen, Janette; Mille, Erik; Böning, Guido; Mastrella, Giorgia; Unterrainer, Marcus; Gildehaus, Franz J; Rominger, Axel; Bartenstein, Peter; Kälin, Roland E; Glass, Rainer; Albert, Nathalie L

    2016-01-01

    Noninvasive tumor growth monitoring is of particular interest for the evaluation of experimental glioma therapies. This study investigates the potential of positron emission tomography (PET) using O-(2-(18)F-fluoroethyl)-L-tyrosine ([(18)F]-FET) to determine tumor growth in a murine glioblastoma (GBM) model, including estimation of the biological tumor volume (BTV), which has hitherto not been investigated in the pre-clinical context. Fifteen GBM-bearing mice (GL261) and six control mice (shams) were investigated over 5 weeks by PET followed by autoradiographic and histological assessments. [(18)F]-FET PET was quantitated by calculation of maximum and mean standardized uptake values within a universal volume-of-interest (VOI) corrected for healthy background (SUVmax/BG, SUVmean/BG). A partial volume effect correction (PVEC) was applied in comparison to ex vivo autoradiography. BTVs obtained by predefined thresholds for VOI definition (SUV/BG: ≥1.4; ≥1.6; ≥1.8; ≥2.0) were compared to the histologically assessed tumor volume (n = 8). Finally, individual "optimal" thresholds for BTV definition best reflecting the histology were determined. In GBM mice SUVmax/BG and SUVmean/BG clearly increased with time, though with high inter-animal variability. No relevant [(18)F]-FET uptake was observed in shams. PVEC recovered signal loss of SUVmean/BG assessment in relation to autoradiography. BTV as estimated by predefined thresholds strongly differed from the histology volume. Strikingly, the individual "optimal" thresholds for BTV assessment correlated highly with SUVmax/BG (ρ = 0.97, p < 0.001), allowing SUVmax/BG-based calculation of individual thresholds. The method was verified by a subsequent validation study (n = 15, ρ = 0.88, p < 0.01), leading to substantially better agreement of the BTV estimates with histology than the predefined thresholds. [(18)F]-FET PET with standard SUV measurements is feasible for glioma imaging in the GBM mouse model. PVEC is beneficial to improve accuracy of [(18)F]-FET PET SUV quantification. Although SUVmax/BG and SUVmean/BG increase during the disease course, these parameters do not correlate with the respective tumor size. For the first time, we propose a histology-verified method allowing appropriate individual BTV estimation for volumetric in vivo monitoring of tumor growth with [(18)F]-FET PET and show that standardized thresholds from routine clinical practice seem to be inappropriate for BTV estimation in the GBM mouse model.

  15. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics are observed frequently in high-dimensional complex systems, and their detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still faces an unsolved problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
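
    To make the role of the distance threshold concrete, the sketch below builds a binary recurrence matrix from a delay-embedded scalar time series and reports the recurrence rate for a few candidate thresholds; the embedding parameters and the test signal are assumptions, and the Markov-model utility criterion for selecting the optimal threshold is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_matrix(x, dim=3, tau=2, eps=0.5):
    """Binary recurrence matrix of a delay-embedded scalar time series.

    dim, tau : embedding dimension and delay (illustrative defaults).
    eps      : distance threshold defining a recurrence.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dist = squareform(pdist(emb))          # pairwise Euclidean distances between state vectors
    return (dist <= eps).astype(int)

# Illustrative sweep: recurrence rate as a function of the distance threshold
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
for eps in (0.1, 0.3, 0.5, 1.0):
    rr = recurrence_matrix(x, eps=eps).mean()
    print(f"eps = {eps:.1f} -> recurrence rate = {rr:.3f}")
```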

  16. Objective classification of historical tropical cyclone intensity

    NASA Astrophysics Data System (ADS)

    Chenoweth, Michael

    2007-03-01

    Preinstrumental records of historical tropical cyclone activity require objective methods for accurately categorizing tropical cyclone intensity. Here wind force terms and damage reports from newspaper accounts in the Lesser Antilles and Jamaica for the period 1795-1879 are compared with wind speed estimates calculated from barometric pressure data. A total of 95 separate barometric pressure readings and colocated simultaneous wind force descriptors and wind-induced damage reports are compared. The wind speed estimates from barometric pressure data are taken as the most reliable and serve as a standard to compare against other data. Wind-induced damage reports are used to produce an estimated wind speed range using a modified Fujita scale. Wind force terms are compared with the barometric pressure data to determine if a gale, as used in the contemporary newspapers, is consistent with the modern definition of a gale. Results indicate that the modern definition of a gale (the threshold point separating the classification of a tropical depression from a tropical storm) is equivalent to that in contemporary newspaper accounts. Barometric pressure values are consistent with both reported wind force terms and wind damage on land when the location, speed and direction of movement of the tropical cyclone are determined. Damage reports and derived wind force estimates are consistent with other published results. Biases in ships' logbooks are confirmed and wind force terms of gale strength or greater are identified. These results offer a bridge between the earlier noninstrumental records of tropical cyclones and modern records thereby offering a method of consistently classifying storms in the Caribbean region into tropical depressions, tropical storms, nonmajor and major hurricanes.

  17. A review of statistical methods to analyze extreme precipitation and temperature events in the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini

    2018-04-01

    The increasing trend of the intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is to compare several statistical methods from extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and of high and low temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates the maximum likelihood, L-moments, and Bayesian estimation methods, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can characterize accurately both precipitation and temperature extreme events. Additionally, the GEV distribution with the Bayesian method is proven to be appropriate especially for the greatest values of extremes. Another important objective of this investigation was the estimation of the precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return level values were estimated with both GEV and GPD and with the three different estimation methods, revealing that the selected method can affect the return level values for both precipitation and temperature.
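
    As a hedged illustration of the block-maxima branch of this workflow, the sketch below fits a GEV distribution by maximum likelihood with scipy and reads off return levels for the three return periods mentioned above; the synthetic annual-maxima series and its parameters are placeholders, and the POT/GPD, L-moments, and Bayesian variants are not shown.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maxima" series (placeholder for station block maxima)
rng = np.random.default_rng(42)
annual_max = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=60, random_state=rng)

# Maximum likelihood fit of the GEV distribution
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)

# Return level: the value exceeded on average once every T years (the 1 - 1/T quantile)
for T in (50, 100, 150):
    level = genextreme.ppf(1.0 - 1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"{T}-year return level: {level:.1f}")
```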

  18. Psychophysical estimation of speed discrimination. II. Aging effects

    NASA Astrophysics Data System (ADS)

    Raghuram, Aparna; Lakshminarayanan, Vasudevan; Khanna, Ritu

    2005-10-01

    We studied the effects of aging on a speed discrimination task using a pair of first-order drifting luminance gratings. Two reference speeds of 2 and 8 deg/s were presented at stimulus durations of 500 ms and 1000 ms. The choice of stimulus parameters was determined in preliminary experiments and is described in Part I. Thresholds were estimated using a two-alternative forced-choice staircase methodology. Data were collected from 16 younger subjects (mean age 24 years) and 17 older subjects (mean age 71 years). Results showed that thresholds for speed discrimination were higher for the older age group. This was especially true at the 500 ms stimulus duration for both the slower and the faster reference speeds. This could be attributed to differences in temporal integration of speed with age. Visual acuity and contrast sensitivity did not statistically mediate the age differences in speed discrimination thresholds. Gender differences were observed in the older age group, with older women having higher thresholds.
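
    The abstract does not specify the staircase rule, so the following is only a generic sketch of a two-alternative forced-choice adaptive staircase (a 2-down/1-up rule with a simulated observer); the observer model, step size, and stopping rule are assumptions rather than the study's procedure.

```python
import numpy as np

def staircase_2afc(true_jnd, start=2.0, step=0.2, n_trials=60, seed=0):
    """2-down/1-up adaptive staircase estimating a speed-discrimination threshold.

    true_jnd : the simulated observer's just-noticeable speed difference (deg/s).
    Returns the mean of the last reversal points as the threshold estimate.
    """
    rng = np.random.default_rng(seed)
    delta, streak, reversals, going_down = start, 0, [], None
    for _ in range(n_trials):
        # Simulated observer: probability correct grows with delta relative to the JND
        p_correct = 0.5 + 0.5 * (1.0 - np.exp(-delta / true_jnd))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:                       # two correct in a row -> make it harder
                if going_down is False:
                    reversals.append(delta)       # direction change: record a reversal
                going_down, streak = True, 0
                delta = max(delta - step, step)
        else:
            if going_down is True:
                reversals.append(delta)           # direction change: record a reversal
            going_down, streak = False, 0
            delta += step                         # one wrong response -> make it easier
    return float(np.mean(reversals[-6:])) if reversals else delta

# Illustrative threshold estimates for a "younger" vs. an "older" simulated observer
print(staircase_2afc(true_jnd=0.6), staircase_2afc(true_jnd=1.2))
```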

  19. Threshold corrections to the bottom quark mass revisited

    DOE PAGES

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart

    2015-05-19

    Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made in estimating the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass, but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.

  20. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    NASA Astrophysics Data System (ADS)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

    Marine routes represent a huge portion of commercial and human trades; therefore, surveillance, security and environmental protection themes are gaining increasing importance. Able to overcome the limits imposed by terrestrial means of monitoring, satellite-based ship detection has recently prompted renewed interest for continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions and for complementarity in terms of geographic coverage and geometric detail. The method adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves each vessel's position, length and heading, and an associated speed range. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected in the amplitude SAR images using an adaptive-threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurements when AIS data are not available. Length estimation shows R2 = 0.85 and heading estimation R2 = 0.92, computed as the average of the R2 values obtained for the optical and radar images.
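
    As an illustration of the CFAR prescreening step, the sketch below implements a basic two-dimensional cell-averaging CFAR detector that estimates the local clutter mean with box filters; the window sizes, scale factor, and synthetic sea-clutter image are assumptions, and this is not the paper's exact detector.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_2d(amplitude, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR detector on a 2-D SAR amplitude image.

    guard, train : half-widths of the guard and training windows (pixels).
    scale        : multiplicative threshold over the local background mean.
    Returns a boolean detection mask.
    """
    amplitude = np.asarray(amplitude, dtype=float)
    g = 2 * guard + 1
    t = 2 * train + 1
    # Local sums via box (uniform) filters, then exclude the guard area
    sum_train = uniform_filter(amplitude, size=t) * t * t
    sum_guard = uniform_filter(amplitude, size=g) * g * g
    background = (sum_train - sum_guard) / (t * t - g * g)
    return amplitude > scale * background

# Illustrative use on a synthetic sea-clutter image with two bright point targets
rng = np.random.default_rng(7)
img = rng.rayleigh(scale=1.0, size=(200, 200))
img[60, 80] += 20.0
img[140, 30] += 15.0
detections = ca_cfar_2d(img)
print("Detected pixels:", int(detections.sum()))
```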

  1. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    PubMed

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 medial lateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that capture the principal modes of variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively-obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
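
    A minimal sketch of the PCA step, assuming each breast edge has been resampled to a fixed number of boundary points and flattened into a vector; the random stand-in data, dimensions, and the way new shapes are sampled are illustrative only, not the authors' pipeline.

```python
import numpy as np

# Placeholder: N breast edges, each resampled to P boundary points (x, y) and flattened
rng = np.random.default_rng(3)
N, P = 492, 128
edges = rng.normal(size=(N, 2 * P))                       # stand-in for real edge vectors

# PCA via the singular value decomposition of the mean-centred data
mean_shape = edges.mean(axis=0)
U, S, Vt = np.linalg.svd(edges - mean_shape, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("Variance explained by first two components:", explained[:2])

# Generate a new, plausible shape from the first two principal components
coeffs = rng.normal(size=2) * (S[:2] / np.sqrt(N - 1))    # scale by component std. dev.
new_shape = mean_shape + coeffs @ Vt[:2]
new_edge = new_shape.reshape(P, 2)                        # back to (x, y) boundary points
```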

  2. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  3. Estimating soil moisture exceedance probability from antecedent rainfall

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.

    2016-12-01

    The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
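
    As a simplified sketch of the final step, the code below estimates the empirical probability that VWC exceeds a threshold as a function of antecedent rainfall by binning co-located observations; the nearest-neighbor imputation of missing hourly precipitation is not reproduced, and the synthetic data, threshold, and bin widths are assumptions.

```python
import numpy as np

def exceedance_curve(antecedent_rain, vwc, vwc_threshold, bins):
    """Empirical P(VWC > threshold) as a function of antecedent rainfall."""
    antecedent_rain = np.asarray(antecedent_rain, dtype=float)
    exceeded = np.asarray(vwc, dtype=float) > vwc_threshold
    idx = np.digitize(antecedent_rain, bins)
    probs = []
    for k in range(1, len(bins)):
        in_bin = idx == k
        probs.append(exceeded[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(probs)

# Illustrative use with synthetic data
rng = np.random.default_rng(11)
rain = rng.uniform(0, 600, size=2000)                        # antecedent rainfall (mm)
vwc = 0.15 + 0.0004 * rain + rng.normal(0, 0.05, rain.size)  # volumetric water content
bins = np.arange(0, 650, 50)
print(exceedance_curve(rain, vwc, vwc_threshold=0.35, bins=bins))
```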

  4. A Cost-Effectiveness Analysis of the Swedish Universal Parenting Program All Children in Focus

    PubMed Central

    Ulfsdotter, Malin

    2015-01-01

    Objective There are few health economic evaluations of parenting programs with quality-adjusted life-years (QALYs) as the outcome measure. The objective of this study was, therefore, to conduct a cost-effectiveness analysis of the universal parenting program All Children in Focus (ABC). The goals were to estimate the costs of program implementation, investigate the health effects of the program, and examine its cost-effectiveness. Methods A cost-effectiveness analysis was conducted. Costs included setup costs and operating costs. A parent proxy Visual Analog Scale was used to measure QALYs in children, whereas the General Health Questionnaire-12 was used for parents. A societal perspective was adopted, and the incremental cost-effectiveness ratio was calculated. To account for uncertainty in the estimate, the probability of cost-effectiveness was investigated, and sensitivity analyses were used to account for the uncertainty in cost data. Results The cost was €326.3 per parent, of which €53.7 represented setup costs under the assumption that group leaders on average run 10 groups, and €272.6 was the operating costs. For health effects, the QALY gain was 0.0042 per child and 0.0027 per parent. These gains resulted in an incremental cost-effectiveness ratio for the base case of €47 290 per gained QALY. The sensitivity analyses resulted in ratios from €41 739 to €55 072. With the common Swedish threshold value of €55 000 per QALY, the probability of the ABC program being cost-effective was 50.8 percent. Conclusion Our analysis of the ABC program demonstrates cost-effectiveness ratios below or just above the QALY threshold in Sweden. However, due to great uncertainty about the data, the health economic rationale for implementation should be further studied considering a longer time perspective, effects on siblings, and validated measuring techniques, before full scale implementation. PMID:26681349
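
    The base-case ratio can be reproduced from the figures quoted in the abstract, assuming the child and parent QALY gains are summed against the per-parent cost; a minimal arithmetic check:

```python
# Base-case ICER from the figures reported in the abstract
cost_per_parent = 326.3            # EUR (setup 53.7 + operating 272.6)
qaly_gain = 0.0042 + 0.0027        # child + parent QALY gains combined (assumption)

icer = cost_per_parent / qaly_gain
print(f"ICER ~ EUR {icer:,.0f} per QALY gained")   # ~ EUR 47,290, as reported
```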

  5. Eddy current crack detection capability assessment approach using crack specimens with differing electrical conductivity

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2018-03-01

    Like other NDE methods, eddy current surface crack detectability is determined using a probability of detection (POD) demonstration. The POD demonstration involves eddy current testing of surface crack specimens with known crack sizes. The reliably detectable flaw size, denoted a90/95, is determined by statistical analysis of the POD test data. The surface crack specimens shall be made from a similar material with electrical conductivity close to the part conductivity. A calibration standard with electro-discharged machined (EDM) notches is typically used in eddy current testing for surface crack detection. The calibration standard conductivity shall be within +/- 15% of the part conductivity. This condition is also applicable to the POD demonstration crack set. Here, a case is considered where the conductivity of the crack specimens available for POD testing differs by more than 15% from that of the part to be inspected. Therefore, a direct POD demonstration of the reliably detectable flaw size is not applicable. Additional testing is necessary to use the demonstrated POD test data. An approach is provided to estimate the reliably detectable flaw size in eddy current testing of a part made from material A using POD crack specimens made from material B with a different conductivity. The approach uses additional test data obtained on EDM notch specimens made from materials A and B. EDM notch test data from the two materials is used to create a transfer function between the demonstrated a90/95 size on crack specimens made of material B and the estimated a90/95 size for a part made of material A. Two methods are given. For method A, the a90/95 crack size for material B is given and POD data are available; the objective is to determine the a90/95 crack size for material A using the same relative decision threshold that was used for material B. For method B, the target crack size a90/95 for material A is known; the objective is to determine the decision threshold for inspecting material A.

  6. Figure-ground segregation by motion contrast and by luminance contrast.

    PubMed

    Regan, D; Beverley, K I

    1984-05-01

    Some naturally camouflaged objects are invisible unless they move; their boundaries are then defined by motion contrast between object and background. We compared the visual detection of such camouflaged objects with the detection of objects whose boundaries were defined by luminance contrast. The summation field area is 0.16 deg2, and the summation time constant is 750 msec for parafoveally viewed objects whose boundaries are defined by motion contrast; these values are, respectively, about 5 and 12 times larger than the corresponding values for objects defined by luminance contrast. The log detection threshold is proportional to the eccentricity for a camouflaged object of constant area. The effect of eccentricity on threshold is less for large objects than for small objects. The log summation field diameter for detecting camouflaged objects is roughly proportional to the eccentricity, increasing to about 20 deg at 32-deg eccentricity. In contrast to the 100:1 increase of summation area for detecting camouflaged objects, the temporal summation time constant changes by only 40% between eccentricities of 0 and 16 deg.

  7. Denoising forced-choice detection data.

    PubMed

    García-Pérez, Miguel A

    2010-02-01

    Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
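
    A much-simplified sketch of the simulation idea, assuming a high-threshold observer who presses the 'guess' key whenever neither interval appears to contain a stimulus; the detection model and stimulus levels are assumptions, and the psychometric-function fitting used in the study is omitted.

```python
import numpy as np
from scipy.stats import norm

def simulate_denoised_2afc(levels, n_trials=200, true_thr=1.0, spread=0.5, seed=0):
    """Simulate 2AFC detection with a 'guess' key and compare raw vs. denoised estimates.

    Detection probability follows a cumulative normal in stimulus level; when the
    stimulus is not detected the observer presses the guess key and a random
    response (correct with probability 0.5) is recorded and flagged as a guess.
    """
    rng = np.random.default_rng(seed)
    for lev in levels:
        p_detect = norm.cdf((lev - true_thr) / spread)
        detected = rng.random(n_trials) < p_detect        # non-guess (flagged) trials
        guess_correct = rng.random(n_trials) < 0.5        # outcome of computer-generated guesses
        correct = np.where(detected, True, guess_correct)
        conventional = correct.mean()                      # all trials pooled
        denoised = detected.mean()                         # guesses identified and removed
        print(f"level {lev:.2f}: conventional {conventional:.2f}, denoised {denoised:.2f}")

simulate_denoised_2afc(levels=[0.25, 0.5, 1.0, 1.5, 2.0])
```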

  8. Estimating economic thresholds for pest control: an alternative procedure.

    PubMed

    Ramirez, O A; Saunders, J L

    1999-04-01

    An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
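
    A hedged numerical sketch of the underlying idea: profit as gross income minus control cost, evaluated over candidate thresholds. The functional forms, prices, and parameter values below are invented for illustration and are not the paper's estimated reduced-form equations.

```python
import numpy as np

# Illustrative (assumed) functional forms: yield falls and control cost drops
# as the economic threshold (pest density triggering treatment) is relaxed.
price = 250.0                                           # $ per tonne
def yield_t(thr):  return 8.0 - 0.15 * thr              # tonnes/ha, declines with a laxer threshold
def cost(thr):     return 400.0 * np.exp(-0.25 * thr)   # $/ha, fewer treatments at laxer thresholds

thresholds = np.linspace(0.5, 20.0, 200)                # candidate thresholds (pests per plant)
profit = price * yield_t(thresholds) - cost(thresholds)
best = thresholds[np.argmax(profit)]
print(f"Profit-maximizing economic threshold ~ {best:.1f} pests/plant")
```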

  9. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
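
    As a sketch of the voxel-based quadratic penalty referred to above, the snippet evaluates under- and over-dose terms against per-voxel reference thresholds; the dose values, thresholds, and weights are placeholders, and TORA's DVH-to-threshold assignment and optimization loop are not reproduced.

```python
import numpy as np

def quadratic_dose_penalty(dose, thr_under, thr_over, w_under=1.0, w_over=1.0):
    """Voxel-based quadratic penalty with reference dose thresholds.

    Penalizes doses below thr_under (target coverage) and above thr_over
    (normal-tissue sparing); thresholds and weights may vary per voxel.
    """
    dose = np.asarray(dose, dtype=float)
    under = np.clip(thr_under - dose, 0.0, None)   # shortfall below the lower threshold
    over = np.clip(dose - thr_over, 0.0, None)     # excess above the upper threshold
    return np.sum(w_under * under**2 + w_over * over**2)

# Illustrative evaluation for a candidate dose distribution
rng = np.random.default_rng(5)
dose = rng.normal(60.0, 3.0, size=10000)           # Gy, stand-in voxel doses
print(quadratic_dose_penalty(dose, thr_under=58.0, thr_over=64.0))
```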

  10. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding (B) psychometric curve fitting (C) smoothing and interpolation and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4% reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (SEM) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.

  11. Temporal resolution in children.

    PubMed

    Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D

    1989-06-01

    The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.

  12. Accurate aging of juvenile salmonids using fork lengths

    USGS Publications Warehouse

    Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua

    2017-01-01

    Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate that greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
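
    A minimal sketch of the mixture-model step: fit a two-component Gaussian mixture to fork lengths and take the length at which posterior cohort membership switches as the age-discriminating threshold. The synthetic lengths and the choice of two components are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic fork lengths (mm) for two sympatric age cohorts (e.g., age-0 and age-1)
rng = np.random.default_rng(2)
lengths = np.concatenate([rng.normal(55, 6, 300), rng.normal(90, 9, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths)

# Age-discriminating threshold: length where posterior membership flips to the older cohort
grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
post = gmm.predict_proba(grid)
order = np.argsort(gmm.means_.ravel())            # identify the smaller-mean (younger) component
switch = np.argmax(post[:, order[0]] < 0.5)       # first grid point assigned to the older cohort
print(f"Length threshold ~ {grid[switch, 0]:.1f} mm")
```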

  13. Thresholds in the response of free-floating plant abundance to variation in hydraulic connectivity, nutrients, and macrophyte abundance in a large floodplain river

    USGS Publications Warehouse

    Giblin, Shawn M.; Houser, Jeffrey N.; Sullivan, John F.; Langrehr, H.A.; Rogala, James T.; Campbell, Benjamin D.

    2014-01-01

    Duckweed and other free-floating plants (FFP) can form dense surface mats that affect ecosystem condition and processes, and can impair public use of aquatic resources. FFP obtain their nutrients from the water column, and the formation of dense FFP mats can be a consequence and indicator of river eutrophication. We conducted two complementary surveys of diverse aquatic areas of the Upper Mississippi River as an in situ approach for estimating thresholds in the response of FFP abundance to nutrient concentration and physical conditions in a large, floodplain river. Local regression analysis was used to estimate thresholds in the relations between FFP abundance and phosphorus (P) concentration (0.167 mg L−1), nitrogen (N) concentration (0.808 mg L−1), water velocity (0.095 m s−1), and aquatic macrophyte abundance (65% cover). FFP tissue concentrations suggested P limitation was more likely in spring, N limitation was more likely in late summer, and N limitation was most likely in backwaters with minimal hydraulic connection to the channel. The thresholds estimated here, along with observed patterns in nutrient limitation, provide river scientists and managers with criteria to consider when attempting to modify FFP abundance in off-channel areas of large river systems.

  14. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology.

    PubMed

    Insel, Philip S; Mattsson, Niklas; Mackin, R Scott; Schöll, Michael; Nosheny, Rachel L; Tosun, Duygu; Donohue, Michael C; Aisen, Paul S; Jagust, William J; Weiner, Michael W

    2016-05-17

    To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. © 2016 American Academy of Neurology.

  15. Modeling of digital mammograms using bicubic spline functions and additive noise

    NASA Astrophysics Data System (ADS)

    Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne

    1998-09-01

    The purpose of our work is the detection of microcalcifications in digital mammograms. In order to do so, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and an additive noise or texture. We also introduce a robust estimation method in order to overcome the bias introduced by the microcalcifications. After the estimation we consider the subtraction image values as noise. If the noise is not correlated, we fit its probability distribution with the Pearson system of densities. This allows us to accurately threshold the subtraction images and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least squares method. We then consider non-overlapping windows on the residue image. In each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is considerably more time consuming than the automatic threshold assuming uncorrelated noise and does not lead to significantly better results. As a conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on the Pearson system performs quite well on most of our images.

  16. Prevalence and severity of eating disorders: A comparison of DSM-IV and DSM-5 among German adolescents.

    PubMed

    Ernst, Verena; Bürger, Arne; Hammerle, Florian

    2017-11-01

    Changes in the DSM-5 eating disorders criteria sought to increase the clarity of the diagnostic categories and to decrease the preponderance of nonspecified eating disorders. The first objective of this study was to analyze how these revisions affect threshold and EDNOS/OSFED eating disorder diagnoses in terms of prevalence, sex ratios, and diagnostic distribution in a student sample. Second, we aimed to compare the impairment levels of participants with a threshold diagnosis, an EDNOS/OSFED diagnosis, and no diagnosis using both DSM-IV and DSM-5. A sample of 1654 7th and 8th grade students completed self-report questionnaires to determine diagnoses and impairment levels in the context of an eating disorder prevention program in nine German secondary schools. Height and weight were measured. The prevalence of threshold disorders increased from .48% (DSM-IV) to 1.15% (DSM-5). EDNOS disorders increased from 2.90 to 6.23% when using OSFED categories. A higher proportion of girls was found throughout all the diagnostic categories, and the sex ratios remained stable. The effect sizes of DSM-5 group differences regarding impairment levels were equal to or larger than those of the DSM-IV comparisons, ranging from small to medium. We provide an in-depth overview of changes resulting from the revisions of DSM eating disorder criteria in a German adolescent sample. Despite the overall increase in prevalence estimates, the results suggest that the DSM-5 criteria differentiate participants with threshold disorders and OSFED from those with no diagnosis as well as or even more distinctly than the DSM-IV criteria. © 2017 Wiley Periodicals, Inc.

  17. Heat and risk of myocardial infarction: hourly level case-crossover analysis of MINAP database

    PubMed Central

    Armstrong, Ben; Hajat, Shakoor; Haines, Andy; Wilkinson, Paul; Smeeth, Liam

    2012-01-01

    Objective To quantify the association between exposure to higher temperatures and the risk of myocardial infarction at an hourly temporal resolution. Design Case-crossover study. Setting England and Wales Myocardial Ischaemia National Audit Project (MINAP) database. Participants 24 861 hospital admissions for myocardial infarction occurring in 11 conurbations during the warmest months (June to August) of the years 2003-09. Main outcome measure Odds ratio of myocardial infarction for a 1°C increase in temperature. Results Strong evidence was found for an effect of heat acting 1-6 hours after exposure to temperatures above an estimated threshold of 20°C (95% confidence interval 16°C to 25°C). For each 1°C increase in temperature above this threshold, the risk of myocardial infarction increased by 1.9% (0.5% to 3.3%, P=0.009). Later reductions in risk seemed to offset early increases in risk: the cumulative effect of a 1°C rise in temperature above the threshold was 0.2% (−2.1% to 2.5%) by the end of the third day after exposure. Conclusions Higher ambient temperatures above a threshold of 20°C seem to be associated with a transiently increased risk of myocardial infarction 1-6 hours after exposure. Reductions in risk at longer lags are consistent with heat triggering myocardial infarctions early in highly vulnerable people who would otherwise have had a myocardial infarction some time later (“short term displacement”). Policies aimed at reducing the health effects of hot weather should include consideration of effects operating at sub-daily timescales. PMID:23243290

  18. Chlorine residuals and haloacetic acid reduction in rapid sand filtration.

    PubMed

    Chuang, Yi-Hsueh; Wang, Gen-Shuch; Tung, Hsin-hsin

    2011-11-01

    It is quite rare to find biodegradation in rapid sand filtration for drinking water treatment. This might be due to frequent backwashes and low substrate levels. High chlorine concentrations may inhibit biofilm development, especially for plants with pre-chlorination. However, in tropical or subtropical regions, bioactivity on the sand surface may be quite significant due to high biofilm development--a result of year-round high temperature. The objective of this study is to explore the correlation between biodegradation and chlorine concentration in rapid sand filters, especially for water treatment plants that practise pre-chlorination. In this study, haloacetic acid (HAA) biodegradation was found in conventional rapid sand filters practising pre-chlorination. Laboratory column studies and field investigations were conducted to explore the association between the biodegradation of HAAs and chlorine concentrations. The results showed that chlorine residual was an important factor altering bioactivity development. A model based on filter influent and effluent chlorine was developed for determining the threshold chlorine concentration for biodegradation. From the model, a temperature-independent chlorine concentration threshold (Cl(threshold)) for biodegradation was estimated at 0.46-0.5 mg L(-1). The results imply that conventional filters with adequate control could be conducive to bioactivity, resulting in lower HAA concentrations. Optimizing biodegradable disinfection by-product removal in conventional rapid sand filters could be achieved with minor variation and a lower-than-Cl(threshold) influent chlorine concentration. Bacteria isolation was also carried out, successfully identifying several HAA degraders. These degraders are very common in drinking water systems and are speculated to be the main contributors to HAA loss. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Low-cost Assessment for Early Vigor and Canopy Cover Estimation in Durum Wheat Using RGB Images.

    NASA Astrophysics Data System (ADS)

    Fernandez-Gallego, J. A.; Kefauver, S. C.; Aparicio Gutiérrez, N.; Nieto-Taladriz, M. T.; Araus, J. L.

    2017-12-01

    Early vigor and canopy cover is an important agronomical component for determining grain yield in wheat. Estimates of the canopy cover area at early stages of the crop cycle may contribute to the efficiency of crop management practices and breeding programs. Canopy-image segmentation is complicated in field conditions by numerous factors, including soil, shadows and unexpected objects such as rocks, weeds, plant remains, or even part of the photographer's boots (which often appear in the scene); the algorithms must therefore be robust to these conditions. Field trials were carried out at two sites (Aranjuez and Valladolid, Spain) during the 2016/2017 crop season. A set of 24 varieties of durum wheat in two growing conditions (rainfed and support irrigation) per site were used to create the image database. This work uses zenithal RGB images taken from above the crop in natural light conditions. The images were taken with a Canon IXUS 320HS camera in Aranjuez, holding the camera by hand, and with a Nikon D300 camera in Valladolid, using a monopod. The algorithm for early vigor and canopy cover area estimation uses three main steps: (i) image decorrelation, (ii) colour space transformation, and (iii) canopy cover segmentation using an automatic threshold based on the image histogram. The first step enhances visual interpretation and separates the pixel colours in the scene; the colour space transformation further separates the colours. Finally, an automatic threshold using a minimum method allows for correct segmentation and quantification of the canopy pixels. The percent of area covered by the canopy was calculated using a simple algorithm that counts pixels in the final binary segmented image. The comparative results demonstrate the algorithm's effectiveness through significant correlations of the early vigor and canopy cover estimates with NDVI (Normalized Difference Vegetation Index) and grain yield.
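
    As a rough sketch of step (iii), the code below applies a histogram-minimum automatic threshold and counts canopy pixels; an excess-green index stands in for the decorrelation and colour-space transform of steps (i)-(ii), and all parameters are assumptions rather than the authors' implementation.

```python
import numpy as np

def minimum_threshold(values, bins=256, smooth=7):
    """Automatic threshold at the histogram minimum between the two dominant modes."""
    hist, edges = np.histogram(values, bins=bins)
    hist_s = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    p1 = int(np.argmax(hist_s))                          # highest mode
    far = np.abs(np.arange(bins) - p1) > bins // 4       # look for a second, distant mode
    p2 = int(np.argmax(np.where(far, hist_s, -1.0)))
    lo, hi = sorted((p1, p2))
    valley = lo + int(np.argmin(hist_s[lo:hi + 1]))      # minimum between the two modes
    return 0.5 * (edges[valley] + edges[valley + 1])

def canopy_cover_percent(rgb):
    """Percent canopy cover via an excess-green index and a histogram-minimum threshold."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2.0 * g - r - b                                # assumed surrogate for the colour transform
    return 100.0 * float((exg > minimum_threshold(exg)).mean())

# Illustrative call on a synthetic image with a soil-like and a canopy-like region
img = np.zeros((200, 200, 3), dtype=float)
img[:, :120] = [120.0, 90.0, 60.0]                       # soil-coloured portion
img[:, 120:] = [40.0, 140.0, 30.0]                       # green canopy portion
print(f"Canopy cover: {canopy_cover_percent(img):.1f} %")
```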

  20. Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models

    PubMed Central

    Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.

    2016-01-01

    IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE NA. PMID:27124818

  1. Minimum follow-up time required for the estimation of statistical cure of cancer patients: verification using data from 42 cancer sites in the SEER database

    PubMed Central

    Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent

    2005-01-01

    Background The present commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of follow-up years required to estimate the statistical cure rate, using a lognormal distribution of the survival times of those who died of their cancer. We introduced the term threshold year: the follow-up time by which the survival data of patients dying from the specific cancer are nearly completely covered, leaving less than 2.25% uncovered, which is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed a lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of patients who died of their disease were verified to follow different lognormal distributions for 42 of the 49 cancer sites. The threshold years validated for statistical cure varied across cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites matched the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion The present study suggests that a certain threshold year must be reached before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, which highlights the need for long-term follow-up of these patients. PMID:15904508
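
    The "less than 2.25% uncovered" criterion is numerically close to the upper tail beyond two standard deviations of a normal distribution (about 2.275%), which suggests reading the threshold year as roughly exp(mu + 2*sigma) of the fitted lognormal survival-time distribution. The sketch below follows that interpretation, which is an assumption, with illustrative parameters rather than the paper's fitted values.

```python
import numpy as np
from scipy.stats import norm

def threshold_year(mu, sigma, uncovered=0.0225):
    """Follow-up time covering all but `uncovered` of the lognormal survival times.

    mu, sigma are the mean and standard deviation of log(survival time in years).
    With uncovered = 0.0225 this is close to exp(mu + 2*sigma).
    """
    z = norm.ppf(1.0 - uncovered)          # ~2.0 for a 2.25% upper tail
    return float(np.exp(mu + z * sigma))

# Illustrative parameters only
print(f"Threshold year ~ {threshold_year(mu=1.0, sigma=0.8):.1f} years")
```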

  2. Enhanced object-based tracking algorithm for convective rain storms and cells

    NASA Astrophysics Data System (ADS)

    Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick

    2018-03-01

    This paper proposes a new object-based storm tracking algorithm, based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely-used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulty effectively tracking fast-moving storms because its matching approach relies largely on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing storms' physical features, an enhanced matching technique in synergy with an optical-flow storm field tracker, and, following from these modifications, a more complex merging and splitting scheme. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities and provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
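
    A minimal sketch of the two-stage multi-threshold identification idea: label contiguous rain entities at a low reflectivity threshold, then label embedded high-intensity cores at a higher threshold. The thresholds and the synthetic field are illustrative, and the matching, optical-flow, and merge/split components of the algorithm are not reproduced.

```python
import numpy as np
from scipy import ndimage

def two_stage_identification(refl, low_thr=30.0, high_thr=45.0):
    """Label storm entities at a low dBZ threshold, then embedded high-intensity cores."""
    storms, _ = ndimage.label(refl >= low_thr)
    cores, _ = ndimage.label((refl >= high_thr) & (storms > 0))
    return storms, cores

# Illustrative synthetic reflectivity field (dBZ) with two convective entities
y, x = np.mgrid[0:120, 0:120]
field = (20.0
         + 25.0 * np.exp(-((x - 40) ** 2 + (y - 60) ** 2) / 200.0)
         + 30.0 * np.exp(-((x - 90) ** 2 + (y - 30) ** 2) / 80.0))
storms, cores = two_stage_identification(field)
print("storm entities:", storms.max(), "| high-intensity cores:", cores.max())
```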

  3. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  4. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  5. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  6. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  7. Automatic estimation of detector radial position for contoured SPECT acquisition using CT images on a SPECT/CT system.

    PubMed

    Liu, Ruijie Rachel; Erwin, William D

    2006-08-01

    An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Medium-energy SPECT/CT scans of simulated anthropomorphic phantoms (male abdomen with arms up, male head and neck with arms down, and female chest with arms down) and of ten patients were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to ≤22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detection sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans. For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for eight patient scans without truncation was 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.
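
    A much-simplified reading of the contour-detection idea is sketched below: a body-plus-bed mask is formed from an average CT slice with a fixed HU threshold, and the orbit radius at each gantry angle is taken as the largest extent of the mask along the direction of the approaching detector plus a fixed sensor-to-detector-face offset. The 2D geometry, the offset value, and the phantom are illustrative assumptions rather than the published algorithm.

    ```python
    import numpy as np

    def nco_radii(avg_ct_hu, pixel_size_cm, hu_threshold=-960.0,
                  offset_cm=2.75, n_angles=60):
        """Estimate a noncircular-orbit radius per gantry angle from an average CT slice."""
        ny, nx = avg_ct_hu.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        x = (xx - (nx - 1) / 2.0) * pixel_size_cm
        y = (yy - (ny - 1) / 2.0) * pixel_size_cm
        mask = avg_ct_hu > hu_threshold            # body, bed and clothing
        radii = []
        for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
            # closest allowed approach of a flat detector face coming from angle theta
            along = x[mask] * np.cos(theta) + y[mask] * np.sin(theta)
            radii.append(along.max() + offset_cm)
        return np.array(radii)

    phantom = np.full((128, 128), -1000.0)         # air
    phantom[40:90, 30:100] = 0.0                   # water-equivalent "body"
    print(nco_radii(phantom, pixel_size_cm=0.35)[:8].round(1))
    ```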

  8. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
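
    One way to picture the locally weighted idea is an iteratively reweighted logistic fit in which observations whose fitted probabilities fall near the classification threshold receive the most weight. The Gaussian kernel, the fixed bandwidth, and the use of scikit-learn's LogisticRegression below are illustrative assumptions, not the authors' exact score equations or cross-validated bandwidth selection.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def locally_weighted_logistic(X, y, threshold=0.3, bandwidth=0.1, n_iter=10):
        """Refit a logistic model with kernel weights centered at the threshold."""
        model = LogisticRegression(max_iter=1000)
        weights = np.ones(len(y))
        for _ in range(n_iter):
            model.fit(X, y, sample_weight=weights)
            p_hat = model.predict_proba(X)[:, 1]
            weights = np.exp(-0.5 * ((p_hat - threshold) / bandwidth) ** 2)
        return model

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=500) > 0.5).astype(int)
    clf = locally_weighted_logistic(X, y)
    print("fraction classified high-risk:", (clf.predict_proba(X)[:, 1] >= 0.3).mean())
    ```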

  9. Reverse lactate threshold: a novel single-session approach to reliable high-resolution estimation of the anaerobic threshold.

    PubMed

    Dotan, Raffy

    2012-06-01

    The multisession maximal lactate steady-state (MLSS) test is the gold standard for anaerobic threshold (AnT) estimation. However, it is highly impractical, requires a high fitness level, and suffers additional shortcomings. Existing single-session AnT-estimating tests are of compromised validity, reliability, and resolution. The presented reverse lactate threshold test (RLT) is a single-session, AnT-estimating test aimed at avoiding the pitfalls of existing tests. It is based on the novel concept of identifying blood lactate's maximal appearance-disappearance equilibrium by approaching the AnT from higher, rather than from lower, exercise intensities. Rowing, cycling, and running case data (4 recreational and competitive athletes, male and female, aged 17-39 y) are presented. Subjects performed the RLT test and, in a separate session, a single 30-min MLSS-type verification test at the RLT-determined intensity. The RLT and its MLSS verification exhibited exceptional agreement, with 0.5% discrepancy or better. The RLT's training sensitivity was demonstrated by a case involving a 2.5-mo training regimen, following which the RLT-detected 15-W improvement was fully MLSS-verified. The RLT's test-retest reliability was examined in 10 trained and untrained subjects. Test 2 differed from test 1 by only 0.3%, with an intraclass correlation of 0.997. The data suggest that the RLT accurately and reliably estimates the AnT (as represented by MLSS verification) with high resolution, in distinctly different sports, and is sensitive to training adaptations. Compared with MLSS, the single-session RLT is highly practical, and its lower fitness requirements make it applicable to athletes and untrained individuals alike. Further research is needed to establish RLT's validity and accuracy in larger samples.

  10. How Radiation Oncologists Evaluate and Incorporate Life Expectancy Estimates Into the Treatment of Palliative Cancer Patients: A Survey-Based Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, Yolanda D., E-mail: ydtseng@partners.org; Krishnan, Monica S.; Sullivan, Adam J.

    2013-11-01

    Purpose: We surveyed how radiation oncologists think about and incorporate a palliative cancer patient’s life expectancy (LE) into their treatment recommendations. Methods and Materials: A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model. Results: Among 92 respondents (81%), the majority were male (62%), from an academic practice (75%), and attending physicians (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%-97%) had LE thresholds that would alter treatment recommendations. Overall, physicians’ LE estimates were 22% accurate, with 67% over the range predicted by the prognostic model. Conclusions: Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.

  11. Sensory Processing Relates to Attachment to Childhood Comfort Objects of College Students

    ERIC Educational Resources Information Center

    Kalpidou, Maria

    2012-01-01

    The author tested the hypothesis that attachment to comfort objects is based on the sensory processing characteristics of the individual. Fifty-two undergraduate students with and without a childhood comfort object reported sensory responses and performed a tactile threshold task. Those with a comfort object described their object and rated their…

  12. Relationship between compliance and persistence with osteoporosis medications and fracture risk in primary health care in France: a retrospective case-control analysis.

    PubMed

    Cotté, François-Emery; Mercier, Florence; De Pouvourville, Gérard

    2008-12-01

    Nonadherence to treatment is an important determinant of long-term outcomes in women with osteoporosis. This study was conducted to investigate the association between adherence and osteoporotic fracture risk and to identify optimal thresholds for good compliance and persistence. A secondary objective was to perform a preliminary evaluation of the cost consequences of adherence. This was a retrospective case-control analysis. Data were derived from the Thales prescription database, which contains information on >1.6 million patients in the primary health care setting in France. Cases were women aged ≥50 years who had an osteoporosis-related fracture in 2006. For each case, 5 matched controls were randomly selected. Both compliance and persistence aspects of treatment adherence were examined. Compliance was estimated based on the medication possession ratio (MPR). Persistence was calculated as the time from the initial filling of a prescription for osteoporosis medication until its discontinuation. The mean (SD) MPR was lower in cases compared with controls (58.8% [34.7%] vs 72.1% [28.8%], respectively; P < 0.001). Cases were more likely than controls to discontinue osteoporosis treatment (50.0% vs 25.3%; P < 0.001), yielding a significantly lower proportion of patients who were still persistent at 1 year (34.1% vs 40.9%; P < 0.001). MPR was the best predictor of fracture risk, with an area under the receiver-operating-characteristic curve that was higher than that for persistence (0.59 vs 0.55). The optimal MPR threshold for predicting fracture risk was ≥68.0%. Compared with less-compliant women, women who achieved this threshold had a 51% reduction in fracture risk. The difference in annual drug expenditure between women achieving this threshold and those who did not was approximately €300. The optimal threshold for persistence with therapy was at least 6 months. Attaining this threshold was associated with a 28% reduction in fracture risk compared with less-persistent women. In this study, better treatment adherence was associated with a greater reduction in fracture risk. Compliance appeared to predict fracture risk better than did persistence.

  13. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    PubMed

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes to adapt to the wire rope. To prepare the damaged wire-rope specimens, several different amounts of artificial damage were inflicted on the wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including the enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing the unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold that was established based on the generalized extreme value (GEV) distribution. The detected MFL signals that exceeded the threshold were analyzed quantitatively by extracting the magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified by using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. To analyze the reliability of the MFL-based automated wire rope NDE method, the accuracy and reliability were evaluated by comparing the repeatedly estimated damage size and the actual damage size.
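
    The envelope-plus-GEV-threshold step described above can be pictured with a short sketch: the signal envelope is taken as the magnitude of the analytic signal (Hilbert transform), and the detection threshold is a high quantile of a GEV distribution fitted to block maxima of a damage-free baseline scan. Block length, the 99th-percentile cut-off, and the synthetic signals are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import genextreme

    def envelope(signal):
        """Envelope of a 1-D signal via the Hilbert transform."""
        return np.abs(hilbert(signal))

    def gev_threshold(baseline, block=100, quantile=0.99):
        """Fit a GEV to block maxima of a baseline envelope and return a high quantile."""
        env = envelope(baseline)
        maxima = env[: len(env) // block * block].reshape(-1, block).max(axis=1)
        shape, loc, scale = genextreme.fit(maxima)
        return genextreme.ppf(quantile, shape, loc=loc, scale=scale)

    rng = np.random.default_rng(2)
    baseline = rng.normal(0.0, 1.0, 10_000)                     # noise-only scan
    scan = np.concatenate([baseline, 6.0 + rng.normal(0.0, 1.0, 50), baseline])
    thr = gev_threshold(baseline)
    print("samples above threshold:", int((envelope(scan) > thr).sum()))
    ```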

  14. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  15. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.

  16. Rural–Urban Differences in Objective and Subjective Measures of Physical Activity: Findings From the National Health and Nutrition Examination Survey (NHANES) 2003–2006

    PubMed Central

    Wen, Ming; Kowaleski-Jones, Lori

    2014-01-01

    Introduction Lower levels of physical activity among rural relative to urban residents have been suggested as an important contributor to rural–urban health disparity; however, empirical evidence is sparse. Methods We examined rural–urban differences in 4 objective physical activity measures (2 intensity thresholds by 2 bout lengths) and 4 subjective measures (total, leisure, household, and transportation) in a nationally representative sample of participants in the National Health and Nutrition Examination Survey (NHANES) 2003–2006. The sample comprised 5,056 adults aged 20 to 75 years. Rural-Urban Commuting Area (RUCA) codes were matched with NHANES subjects to identify urban status and 2 types of rural status. Rural–urban and within–rural differences in physical activity were estimated without and with controls for demographic and socioeconomic variables. Results Rural residents were less active than urban residents in high-intensity long-bout (2,020 counts per minute threshold and 10 minutes or longer bout length) accelerometer-measured physical activity (42.5 ± 6.2 min/wk vs 55.9 ± 2.8 min/wk), but the difference disappeared with a lower-intensity threshold (760 counts per minute). Rural residents reported more total physical activity than urban residents (438.3 ± 35.3 min/wk vs 371.2 ± 12.5 min/wk), with differences primarily attributable to household physical activity. Within rural areas, micropolitan residents were less active than residents in smaller rural areas. Controlling for other variables reduced the size of the differences. Conclusion The direction and significance of the rural–urban difference in physical activity varied by the method of physical activity measurement, likely related to rural residents spending more time in low-intensity household physical activity but less time in high-intensity physical activity. Micropolitan residents were substantially less active than residents in smaller rural areas, indicating that physical activity did not vary unidirectionally with degree of urbanization. PMID:25144676

  17. Rural-urban differences in objective and subjective measures of physical activity: findings from the National Health and Nutrition Examination Survey (NHANES) 2003-2006.

    PubMed

    Fan, Jessie X; Wen, Ming; Kowaleski-Jones, Lori

    2014-08-21

    Lower levels of physical activity among rural relative to urban residents have been suggested as an important contributor to rural-urban health disparity; however, empirical evidence is sparse. We examined rural-urban differences in 4 objective physical activity measures (2 intensity thresholds by 2 bout lengths) and 4 subjective measures (total, leisure, household, and transportation) in a nationally representative sample of participants in the National Health and Nutrition Examination Survey (NHANES) 2003-2006. The sample comprised 5,056 adults aged 20 to 75 years. Rural-Urban Commuting Area (RUCA) codes were matched with NHANES subjects to identify urban status and 2 types of rural status. Rural-urban and within-rural differences in physical activity were estimated without and with controls for demographic and socioeconomic variables. Rural residents were less active than urban residents in high-intensity long-bout (2,020 counts per minute threshold and 10 minutes or longer bout length) accelerometer-measured physical activity (42.5 ± 6.2 min/wk vs 55.9 ± 2.8 min/wk), but the difference disappeared with a lower-intensity threshold (760 counts per minute). Rural residents reported more total physical activity than urban residents (438.3 ± 35.3 min/wk vs 371.2 ± 12.5 min/wk), with differences primarily attributable to household physical activity. Within rural areas, micropolitan residents were less active than residents in smaller rural areas. Controlling for other variables reduced the size of the differences. The direction and significance of the rural-urban difference in physical activity varied by the method of physical activity measurement, likely related to rural residents spending more time in low-intensity household physical activity but less time in high-intensity physical activity. Micropolitan residents were substantially less active than residents in smaller rural areas, indicating that physical activity did not vary unidirectionally with degree of urbanization.

  18. Estimating the effect of a rare time-dependent treatment on the recurrent event rate.

    PubMed

    Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E

    2018-05-30

    In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.

  19. A Hierarchical Object-oriented Urban Land Cover Classification Using WorldView-2 Imagery and Airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.

    2016-11-01

    In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of fusion of multi-source data in the field of urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach was proposed in our study. The processing of the proposed method consisted of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image was segmented into objects. NDVI, Coastal Blue and nDSM thresholds were set for extracting building objects. (2) In the second hierarchy, after removing building objects, WorldView-2 fused imagery was obtained by Haze-ratio-based (HR) fusion and was segmented. An SVM classifier was applied to generate road/parking lot, vegetation and bare soil objects. (3) Trees and grasslands were split based on an nDSM threshold (2.4 m). The results showed that, compared with pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better performance in urban land cover classification, with the overall accuracy (OA) and overall kappa (OK) improving to 92.75% and 0.90. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings based on LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands by setting an nDSM threshold.

  20. Objective definition of rainfall intensity-duration thresholds for post-fire flash floods and debris flows in the area burned by the Waldo Canyon fire, Colorado, USA

    USGS Publications Warehouse

    Staley, Dennis M.; Gartner, Joseph E.; Kean, Jason W.

    2015-01-01

    We present an objectively defined rainfall intensity-duration (I-D) threshold for the initiation of flash floods and debris flows for basins recently burned in the 2012 Waldo Canyon fire near Colorado Springs, Colorado, USA. Our results are based on 453 rainfall records which include 8 instances of hazardous flooding and debris flow from 10 July 2012 to 14 August 2013. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow or flood occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. The equation I = 11.6 D^(-0.7) represents the I-D threshold (I, in mm/h) for durations (D, in hours) ranging from 0.083 h (5 min) to 1 h for basins burned by the 2012 Waldo Canyon fire. As periods of high-intensity rainfall over short durations (less than 1 h) produced all of the debris flow and flood events, real-time monitoring of rainfall conditions will result in very short lead times for early warning. Our results highlight the need for improved forecasting of the rainfall rates during short-duration, high-intensity convective rainfall events.
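
    Applying the reported threshold to a rainfall record reduces to a one-line comparison; the helper below flags intensities above I = 11.6 D^(-0.7) within the stated 5-minute to 1-hour range of validity. The example values are illustrative.

    ```python
    def exceeds_waldo_canyon_threshold(intensity_mm_per_h, duration_h):
        """True if a rainfall burst lies above the reported I-D threshold."""
        if not 0.083 <= duration_h <= 1.0:
            raise ValueError("threshold defined only for durations of 5 min to 1 h")
        return intensity_mm_per_h > 11.6 * duration_h ** -0.7

    # 15-minute burst at 35 mm/h (illustrative values)
    print(exceeds_waldo_canyon_threshold(35.0, 0.25))
    ```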

  1. Aging and the discrimination of object weight.

    PubMed

    Norman, J Farley; Norman, Hideko F; Swindle, Jessica M; Jennings, L RaShae; Bartholomew, Ashley N

    2009-01-01

    A single experiment was carried out to evaluate the ability of younger and older observers to discriminate object weights. A 2-alternative forced-choice variant of the method of constant stimuli was used to obtain difference thresholds for lifted weight for twelve younger (mean age = 21.5 years) and twelve older (mean age = 71.3 years) adults. The standard weight was 100 g, whereas the test weights ranged from 85 to 115 g. The difference thresholds of the older observers were 57.6% higher than those of the younger observers: the average difference thresholds were 10.4% and 6.6% of the standard for the older and younger observers, respectively. The current findings of an age-related deterioration in the ability to discriminate lifted weight extend and disambiguate the results of earlier research.

  2. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low-amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels, and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body-wave magnitude mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.

  3. A simple method to estimate threshold friction velocity of wind erosion in the field

    USDA-ARS?s Scientific Manuscript database

    Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...

  4. Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.

    PubMed

    Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2017-10-01

    OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method, brain tumor image analysis (BraTumIA), for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and by the automatic BraTumIA segmentation software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, which are comparable to the estimates of human experts. However, automated analysis showed a tendency to overestimate the volume of a contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, thereby causing considerable interrater variability.
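
    The two quantities compared above follow directly from the pre- and postoperative contrast-enhancing tumor volumes; a minimal sketch with illustrative volumes is shown below.

    ```python
    def eor_and_rtv(pre_op_volume_cm3, post_op_volume_cm3):
        """Extent of resection (%) and residual tumor volume (cm^3)."""
        rtv = post_op_volume_cm3
        eor = 100.0 * (pre_op_volume_cm3 - post_op_volume_cm3) / pre_op_volume_cm3
        return eor, rtv

    eor, rtv = eor_and_rtv(35.0, 1.4)      # illustrative volumes, not study data
    print(f"EOR = {eor:.1f}%, RTV = {rtv:.1f} cm^3")
    ```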

  5. Neurology objective structured clinical examination reliability using generalizability theory.

    PubMed

    Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R

    2015-11-03

    This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.

  6. Detecting plague-host abundance from space: Using a spectral vegetation index to identify occupancy of great gerbil burrows

    NASA Astrophysics Data System (ADS)

    Wilschut, Liesbeth I.; Heesterbeek, Johan A. P.; Begon, Mike; de Jong, Steven M.; Ageyev, Vladimir; Laudisoit, Anne; Addink, Elisabeth A.

    2018-02-01

    In Kazakhstan, plague outbreaks occur when the abundance of its main host, the great gerbil, exceeds a threshold. The gerbils live in family groups in burrows, which can be mapped using remote sensing. Occupancy (the percentage of burrows occupied) is a good proxy for abundance and hence for the possibility of an outbreak. Here we use time series of satellite images to estimate occupancy remotely. In April and September 2013, 872 burrows were identified in the field as either occupied or empty. For satellite images acquired between April and August, 'burrow objects' were identified and matched to the field burrows. The burrow objects were represented by 25 different polygon types, then classified (using a majority vote from 10 Random Forests) as occupied or empty, using Normalized Difference Vegetation Indices (NDVI) calculated for all images. Throughout the season NDVI values were higher for empty than for occupied burrows. The occupancy status of individual burrows that were continuously occupied or empty was classified with producer's and user's accuracy values of 63 and 64% for the optimum polygon. Occupancy level was predicted very well and differed by 2% from the observed occupancy. This establishes firmly the principle that occupancy can be estimated using satellite images, with the potential to predict plague outbreaks over extensive areas with much greater ease and accuracy than previously.

  7. A Bayesian hierarchical model for mortality data from cluster-sampling household surveys in humanitarian crises.

    PubMed

    Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko

    2018-05-31

    The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with a cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as a default in the absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates, for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates, through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones even when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge, and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
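
    A stripped-down, non-hierarchical version of the idea can be written with a conjugate Gamma-Poisson posterior for the mean death rate, from which the probability of exceeding an emergency threshold of 1 death per 10,000 persons per day is read off directly. The prior parameters and the pooled counts below are illustrative assumptions; the paper's model is hierarchical with cluster-level parameters.

    ```python
    from scipy.stats import gamma

    deaths = 23                    # total deaths observed across clusters (illustrative)
    person_days = 180_000          # total person-days of exposure (illustrative)
    a0, b0 = 0.01, 0.01            # weakly informative Gamma prior (assumption)

    # Conjugate posterior for the death rate (deaths per person-day)
    posterior = gamma(a=a0 + deaths, scale=1.0 / (b0 + person_days))
    emergency_threshold = 1.0 / 10_000
    print(f"P(CDR > threshold) = {posterior.sf(emergency_threshold):.3f}")
    ```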

  8. Evaluation of a threshold-based model of fatigue in gamma titanium aluminide following impact damage

    NASA Astrophysics Data System (ADS)

    Harding, Trevor Scott

    2000-10-01

    Recent interest in gamma titanium aluminide (gamma-TiAl) for use in gas turbine engine applications has centered on the low density and good elevated temperature strength retention of gamma-TiAl compared to current materials. However, the relatively low ductility and fracture toughness of gamma-TiAl leads to serious concerns regarding its ability to resist impact damage. Furthermore, the limited fatigue crack growth resistance of gamma-TiAl means that the potential for fatigue failures resulting from impact damage is real if a damage tolerant design approach is used. A threshold-based design approach may be required if fatigue crack growth from potential impact sites is to be avoided. The objective of the present research is to examine the feasibility of a threshold-based approach for the design of a gamma-TiAl low-pressure turbine blade subjected to both assembly-related impact damage and foreign object damage. Specimens of three different gamma-TiAl alloys were damaged in such a way as to simulate anticipated impact damage for a turbine blade. Step-loading fatigue tests were conducted at both room temperature and 600°C. In terms of the assembly-related impact damage, the results indicate that there is reasonably good agreement between the threshold-based predictions of the fatigue strength of damaged specimens and the measured data. However, some discrepancies do exist. In the case of very lightly damaged specimens, prediction of the resulting fatigue strength requires that a very conservative small-crack fatigue threshold be used. Consequently, the allowable design conditions are significantly reduced. For severely damaged specimens, an analytical approach found that the potential effects of residual stresses may be related to the discrepancies observed between the threshold-based model and measured fatigue strength data. In the case of foreign object damage, a good correlation was observed between impacts resulting in large cracks and a long-crack threshold-based approximation of the fatigue strength. However, in the case of smaller impact sites, a lower small-crack threshold appears to be more appropriate. In some cases, a complete perforation of the material, or blowout, would result from the impact. Prediction of the reduction in fatigue strength resulting from this form of damage required the use of a stress concentration factor, rather than a threshold-based prediction.

  9. A new function for estimating local rainfall thresholds for landslide triggering

    NASA Astrophysics Data System (ADS)

    Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.

    2009-04-01

    The widely used power law for establishing rainfall thresholds for the triggering of landslides was first proposed by N. Caine in 1980. The most updated global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α × D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when they are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner, by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine-type threshold only when that reference value is exceeded. Other authors have calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1 × An^α2) × D^β, where I and D are defined as previously, the expression in parentheses is equivalent to Caine's α parameter, α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative antecedent rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or a yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation over ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a given value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for all the ni values. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopt,i (the global minimum or maximum of the metric). (8) Search for the optimum value of ni in the space of ni vs. EMPopt,i. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
    The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
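
    The grid-search calibration described in steps (4)-(9) can be sketched as follows: for a candidate antecedent-rainfall window, every (α1, α2, β) combination on a grid is scored against the labelled storms, and the combination maximizing the chosen evaluation metric is retained. The grid ranges, the use of accuracy as the metric, and the synthetic storm catalogue are illustrative assumptions.

    ```python
    import itertools
    import numpy as np

    def calibrate_threshold(I, D, An, triggered, a1_grid, a2_grid, beta_grid):
        """Return the (alpha1, alpha2, beta) maximizing classification accuracy for
        the threshold I = (alpha1 * An**alpha2) * D**beta."""
        best_params, best_score = None, -1.0
        for a1, a2, beta in itertools.product(a1_grid, a2_grid, beta_grid):
            predicted = I > (a1 * An ** a2) * D ** beta
            score = np.mean(predicted == triggered)
            if score > best_score:
                best_params, best_score = (a1, a2, beta), score
        return best_params, best_score

    rng = np.random.default_rng(3)
    n = 200
    D = rng.uniform(1, 48, n)                       # storm duration (h)
    An = rng.uniform(0, 200, n)                     # antecedent rainfall over ni days (mm)
    I = rng.uniform(0.5, 30, n)                     # mean intensity (mm/h)
    triggered = I > 20 * D ** -0.5                  # synthetic labels, not real landslides
    params, acc = calibrate_threshold(I, D, An, triggered,
                                      np.linspace(5, 40, 8),
                                      np.linspace(-0.5, 0.5, 5),
                                      np.linspace(-1.0, -0.1, 10))
    print("best (alpha1, alpha2, beta):", params, "accuracy:", round(acc, 2))
    ```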

  10. Theoretical and Experimental Investigation of Particle Trapping via Acoustic Bubbles

    NASA Astrophysics Data System (ADS)

    Chen, Yun; Fang, Zecong; Merritt, Brett; Saadat-Moghaddam, Darius; Strack, Dillon; Xu, Jie; Lee, Sungyon

    2014-11-01

    One important application of lab-on-a-chip devices is the trapping and sorting of micro-objects, with acoustic bubbles emerging as an effective, non-contact method. Acoustically actuated bubbles are known to exert a secondary radiation force on micro-particles and trap them, when this radiation force exceeds the drag force that acts to keep the particles in motion. In this study, we theoretically evaluate the magnitudes of these two forces for varying actuation frequencies and voltages. In particular, the secondary radiation force is calculated directly from bubble oscillation shapes that have been experimentally measured for varying acoustic parameters. Finally, based on the force estimates, we predict the threshold voltage and frequency for trapping and compare them to the experimental results.

  11. Nut crop yield records show that budbreak-based chilling requirements may not reflect yield decline chill thresholds

    NASA Astrophysics Data System (ADS)

    Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.

    2015-06-01

    Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond (Prunus dulcis), pistachio (Pistacia vera), and walnut (Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32% higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.

  12. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  13. Measurement of visual contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Vongierke, H. E.; Marko, A. R.

    1985-04-01

    This invention involves measurement of the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial frequency pattern on a cathode ray tube whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The subject finds the detection threshold of the pattern modulation by adjusting the contrast to values that vary about the subject's threshold, thereby determining the threshold and also providing, through the magnitude of the contrast fluctuations between reversals, an estimate of the variability of the subject's absolute threshold. The invention also involves the slow automatic sweeping of the spatial frequency of the pattern over the spatial frequencies, either after preset time intervals or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings, i.e., contrast reversals.

  14. Threshold altitude resulting in decompression sickness

    NASA Technical Reports Server (NTRS)

    Kumar, K. V.; Waligora, James M.; Calkins, Dick S.

    1990-01-01

    A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis of the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a no-prebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m, while under a no-prebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.

  15. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples

    EPA Science Inventory

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...

  16. Attentional limits on the perception and memory of visual information.

    PubMed

    Palmer, J

    1990-05-01

    Attentional limits on perception and memory were measured by the decline in performance with increasing numbers of objects in a display. Multiple objects were presented to Ss who discriminated visual attributes. In a representative condition, 4 lines were briefly presented followed by a single line in 1 of the same locations. Ss were required to judge if the single line in the 2nd display was longer or shorter than the line in the corresponding location of the 1st display. The length difference threshold was calculated as a function of the number of objects. The difference thresholds doubled when the number of objects was increased from 1 to 4. This effect was generalized in several ways, and nonattentional explanations were ruled out. Further analyses showed that the attentional processes must share information from at least 4 objects and can be described by a simple model.

  17. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  18. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.

  19. PROUCL 4.0 SOFTWARE

    EPA Science Inventory

    Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...

  20. UWB pulse detection and TOA estimation using GLRT

    NASA Astrophysics Data System (ADS)

    Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.

    2017-12-01

    In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.

  1. Gas Composition Sensing Using Carbon Nanotube Arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing; Meyyappan, Meyya

    2012-01-01

    This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.

  2. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides a more accurate roughness estimate, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
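
    A minimal sketch of DWT denoising of a gridded surface with PyWavelets is shown below; it uses the fixed-form (universal) threshold and a hard/soft switch. The wavelet, decomposition level, and the synthetic surface are assumptions for illustration only.

        # Sketch of DWT-based denoising of a gridded joint surface with the fixed-form
        # (universal) threshold; hard vs. soft thresholding is a one-argument switch.
        import numpy as np
        import pywt

        def dwt_denoise(surface, wavelet="db4", level=3, mode="hard"):
            coeffs = pywt.wavedec2(surface, wavelet, level=level)
            # Estimate the noise sigma from the finest-scale diagonal detail coefficients
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(surface.size))   # fixed-form threshold
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(c, thr, mode=mode) for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)

        # Example on a synthetic noisy surface
        rng = np.random.default_rng(1)
        x = np.linspace(0, 4 * np.pi, 256)
        surface = np.sin(np.add.outer(x, x)) + 0.2 * rng.standard_normal((256, 256))
        smooth = dwt_denoise(surface, mode="hard")
        print(smooth.shape)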

  3. At what costs will screening with CT colonography be competitive? A cost-effectiveness approach.

    PubMed

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Zauber, Ann G; Boer, Rob; Wilschut, Janneke; Habbema, J Dik F

    2009-03-01

    The costs of computed tomographic colonography (CTC) are not yet established for screening use. In our study, we estimated the threshold costs for which CTC screening would be a cost-effective alternative to colonoscopy for colorectal cancer (CRC) screening in the general population. We used the MISCAN-colon microsimulation model to estimate the costs and life-years gained of screening persons aged 50-80 years for 4 screening strategies: (i) optical colonoscopy; and CTC with referral to optical colonoscopy of (ii) any suspected polyp; (iii) a suspected polyp ≥6 mm; and (iv) a suspected polyp ≥10 mm. For each of the 4 strategies, screen intervals of 5, 10, 15 and 20 years were considered. Subsequently, for each CTC strategy and interval, the threshold costs of CTC were calculated. We performed a sensitivity analysis to assess the effect of uncertain model parameters on the threshold costs. With equal costs ($662), optical colonoscopy dominated CTC screening. For CTC to gain similar life-years as colonoscopy screening every 10 years, it should be offered every 5 years with referral of polyps ≥6 mm. For this strategy to be as cost-effective as colonoscopy screening, the costs must not exceed $285 or 43% of colonoscopy costs (range in sensitivity analysis: 39-47%). With 25% higher adherence than colonoscopy, CTC threshold costs could be 71% of colonoscopy costs. Our estimate of 43% is considerably lower than previous estimates in the literature, because previous studies only compared CTC screening with 10-yearly colonoscopy, whereas we compared it with colonoscopy screening at different intervals.

  4. Cost-effectiveness of bone densitometry among Caucasian women and men without a prior fracture according to age and body weight.

    PubMed

    Schousboe, J T; Gourlay, M; Fink, H A; Taylor, B C; Orwoll, E S; Barrett-Connor, E; Melton, L J; Cummings, S R; Ensrud, K E

    2013-01-01

    We used a microsimulation model to estimate the threshold body weights at which screening bone densitometry is cost-effective. Among women aged 55-65 years and men aged 55-75 years without a prior fracture, body weight can be used to identify those for whom bone densitometry is cost-effective. Bone densitometry may be more cost-effective for those with lower body weight since the prevalence of osteoporosis is higher for those with low body weight. Our purpose was to estimate weight thresholds below which bone densitometry is cost-effective for women and men without a prior clinical fracture at ages 55, 60, 65, 75, and 80 years. We used a microsimulation model to estimate the costs and health benefits of bone densitometry and 5 years of fracture prevention therapy for those without prior fracture but with femoral neck osteoporosis (T-score ≤ -2.5) and a 10-year hip fracture risk of ≥3%. Threshold pre-test probabilities of low BMD warranting drug therapy at which bone densitometry is cost-effective were calculated. Corresponding body weight thresholds were estimated using data from the Study of Osteoporotic Fractures (SOF), the Osteoporotic Fractures in Men (MrOS) study, and the National Health and Nutrition Examination Survey (NHANES) for 2005-2006. Assuming a willingness to pay of $75,000 per quality adjusted life year (QALY) and drug cost of $500/year, body weight thresholds below which bone densitometry is cost-effective for those without a prior fracture were 74, 90, and 100 kg, respectively, for women aged 55, 65, and 80 years; and were 67, 101, and 108 kg, respectively, for men aged 55, 75, and 80 years. For women aged 55-65 years and men aged 55-75 years without a prior fracture, body weight can be used to select those for whom bone densitometry is cost-effective.

  5. A general theoretical framework for interpreting patient-reported outcomes estimated from ordinally scaled item responses.

    PubMed

    Massof, Robert W

    2014-10-01

    A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  6. Foreign object detection and removal to improve automated analysis of chest radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassier clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel to belong to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields and an A_z value of 0.949 is achieved. Free response operator characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
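
    A toy sketch of the thresholding-plus-replacement step is given below. It assumes a per-pixel probability map is already available (the kNN classification stage is not reproduced) and uses OpenCV's Telea inpainting as a simple stand-in for the texture inpainting described in the paper; all values are illustrative.

        # Sketch: threshold a per-pixel foreign-object probability map and replace the
        # masked pixels by inpainting. Telea inpainting is a stand-in for texture
        # inpainting; the probability map and image are synthetic placeholders.
        import numpy as np
        import cv2

        def remove_foreign_objects(image_u8, prob_map, p_threshold=0.5, radius=5):
            # Segmentation: pixels whose object probability exceeds the threshold
            mask = (prob_map > p_threshold).astype(np.uint8)
            # Post-processing: dilate slightly so object borders are fully covered
            mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
            # Replace masked pixels using the surrounding texture
            return cv2.inpaint(image_u8, mask, radius, cv2.INPAINT_TELEA)

        rng = np.random.default_rng(2)
        img = (np.full((128, 128), 120) + rng.integers(0, 20, (128, 128))).astype(np.uint8)
        prob = np.zeros((128, 128)); prob[60:70, 60:70] = 0.9   # synthetic "button"
        cleaned = remove_foreign_objects(img, prob)
        print(cleaned.shape)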

  7. Setting objective thresholds for rare event detection in flow cytometry

    PubMed Central

    Richards, Adam J.; Staats, Janet; Enzor, Jennifer; McKinnon, Katherine; Frelinger, Jacob; Denny, Thomas N.; Weinhold, Kent J.; Chan, Cliburn

    2014-01-01

    The accurate identification of rare antigen-specific cytokine positive cells from peripheral blood mononuclear cells (PBMC) after antigenic stimulation in an intracellular staining (ICS) flow cytometry assay is challenging, as cytokine positive events may be fairly diffusely distributed and lack an obvious separation from the negative population. Traditionally, the approach by flow operators has been to manually set a positivity threshold to partition events into cytokine-positive and cytokine-negative. This approach suffers from subjectivity and inconsistency across different flow operators. The use of statistical clustering methods does not remove the need to find an objective threshold between between positive and negative events since consistent identification of rare event subsets is highly challenging for automated algorithms, especially when there is distributional overlap between the positive and negative events (“smear”). We present a new approach, based on the Fβ measure, that is similar to manual thresholding in providing a hard cutoff, but has the advantage of being determined objectively. The performance of this algorithm is compared with results obtained by expert visual gating. Several ICS data sets from the External Quality Assurance Program Oversight Laboratory (EQAPOL) proficiency program were used to make the comparisons. We first show that visually determined thresholds are difficult to reproduce and pose a problem when comparing results across operators or laboratories, as well as problems that occur with the use of commonly employed clustering algorithms. In contrast, a single parameterization for the Fβ method performs consistently across different centers, samples, and instruments because it optimizes the precision/recall tradeoff by using both negative and positive controls. PMID:24727143
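
    A minimal sketch of the underlying idea, choosing a hard positivity cutoff by maximizing the Fβ score computed from positive and negative control events, is shown below; the control distributions and β value are illustrative assumptions, not the EQAPOL data or the published parameterization.

        # Minimal sketch: sweep candidate thresholds and keep the one maximizing the
        # F-beta score computed from negative- and positive-control events.
        import numpy as np

        def best_fbeta_threshold(neg_control, pos_control, beta=2.0, n_grid=200):
            values = np.concatenate([neg_control, pos_control])
            labels = np.concatenate([np.zeros(neg_control.size), np.ones(pos_control.size)])
            grid = np.linspace(values.min(), values.max(), n_grid)
            best_thr, best_f = grid[0], -1.0
            for thr in grid:
                pred = values > thr
                tp = np.sum(pred & (labels == 1))
                fp = np.sum(pred & (labels == 0))
                fn = np.sum(~pred & (labels == 1))
                precision = tp / (tp + fp) if tp + fp else 0.0
                recall = tp / (tp + fn) if tp + fn else 0.0
                if precision + recall == 0:
                    continue
                f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
                if f > best_f:
                    best_f, best_thr = f, thr
            return best_thr, best_f

        rng = np.random.default_rng(3)
        neg = rng.normal(0.0, 1.0, 50000)   # cytokine-negative "smear" (synthetic)
        pos = rng.normal(2.5, 1.0, 200)     # rare cytokine-positive events (synthetic)
        print(best_fbeta_threshold(neg, pos))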

  8. Psychometric considerations in the measurement of event-related brain potentials: Guidelines for measurement and reporting.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
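
    For orientation only, the sketch below computes a classical-test-theory internal-consistency estimate (Cronbach's alpha) across single-trial ERP scores and checks it against the 0.70/0.80 thresholds recommended above. The review itself advocates generalizability-theory dependability estimates via its ERP Reliability Analysis Toolbox; this is only the simpler classical check, on synthetic data.

        # Cronbach's alpha over single-trial scores, compared with the recommended
        # 0.70 (acceptable) / 0.80 (preferred) internal-consistency thresholds.
        import numpy as np

        def cronbach_alpha(trials):
            """trials: (n_subjects, n_trials) array of single-trial ERP scores."""
            k = trials.shape[1]
            item_vars = trials.var(axis=0, ddof=1).sum()
            total_var = trials.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        rng = np.random.default_rng(4)
        true_amp = rng.normal(5.0, 2.0, size=(40, 1))             # subject-level amplitude
        scores = true_amp + rng.normal(0.0, 4.0, size=(40, 30))   # 30 noisy trials each
        alpha = cronbach_alpha(scores)
        print(f"alpha = {alpha:.2f}  (acceptable >= 0.70, preferred >= 0.80)")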

  9. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness of fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation for extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
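
    The sketch below shows the core partial-duration step: fit a Generalized Pareto distribution to exceedances over a candidate threshold and compute a return level. The threshold here is simply a high quantile for illustration; the Adapted Hill estimator and bootstrap-MSE threshold selection used in the study are not reproduced, and the rainfall series is synthetic.

        # Fit a GPD to threshold exceedances and compute a T-year return level.
        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(5)
        daily_rain = rng.gamma(shape=0.4, scale=12.0, size=31 * 365)   # ~31 years, synthetic

        threshold = np.quantile(daily_rain, 0.98)             # illustrative candidate threshold
        excesses = daily_rain[daily_rain > threshold] - threshold
        shape, loc, scale = genpareto.fit(excesses, floc=0)   # location fixed at 0 for excesses

        lam = excesses.size / 31.0                            # mean exceedances per year
        T = 50.0                                              # return period in years
        if abs(shape) > 1e-6:
            return_level = threshold + scale / shape * ((lam * T) ** shape - 1.0)
        else:
            return_level = threshold + scale * np.log(lam * T)
        print(f"u = {threshold:.1f} mm, xi = {shape:.3f}, 50-yr level = {return_level:.1f} mm")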

  10. Dispersive estimates for massive Dirac operators in dimension two

    NASA Astrophysics Data System (ADS)

    Erdoğan, M. Burak; Green, William R.; Toprak, Ebru

    2018-05-01

    We study the massive two-dimensional Dirac operator with an electric potential. In particular, we show that the t^{-1} decay rate holds in the L^1 → L^∞ setting if the threshold energies are regular. We also show these bounds hold in the presence of s-wave resonances at the threshold. We further show that, if the threshold energies are regular, then a faster decay rate of t^{-1}(log t)^{-2} is attained for large t, at the cost of logarithmic spatial weights. The free Dirac equation does not satisfy this bound due to the s-wave resonances at the threshold energies.

  11. Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks

    NASA Technical Reports Server (NTRS)

    Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris

    2015-01-01

    Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.

  12. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance for which up to 80 to 90% of the covariance propagation timespan passes the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
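
    The sketch below illustrates the core test idea: if the covariance is properly sized, squared Mahalanobis distances of the position residuals should follow a 3-DoF chi-squared distribution. A Kolmogorov-Smirnov test stands in here for the ECDF GOF assessment, and the residuals and covariance are synthetic placeholders.

        # Squared Mahalanobis distances of position residuals vs. a 3-DoF chi-squared.
        import numpy as np
        from scipy.stats import kstest

        rng = np.random.default_rng(6)
        P = np.diag([4.0, 1.0, 0.25])                               # propagated covariance (km^2)
        resid = rng.multivariate_normal(np.zeros(3), P, size=500)   # definitive minus predicted

        P_inv = np.linalg.inv(P)
        d2 = np.einsum("ij,jk,ik->i", resid, P_inv, resid)          # squared Mahalanobis distances

        stat, p_value = kstest(d2, "chi2", args=(3,))
        print(f"KS p-value = {p_value:.3f}")
        # In practice, process noise would be added to P and the test repeated until the
        # GOF test passes at the chosen significance threshold.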

  13. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point clouds and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface-fitting filtering method that uses waveform information is proposed to exploit this advantage. First, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Second, the ground seed points are selected, and abnormal ones among them are detected using waveform parameters and robust estimation. Third, the terrain surface is fitted and the height-difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified progressively as the window size increases; the filtering process finishes when the window size exceeds a threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.

  14. Implementation guide for turbidity threshold sampling: principles, procedures, and analysis

    Treesearch

    Jack Lewis; Rand Eads

    2009-01-01

    Turbidity Threshold Sampling uses real-time turbidity and river stage information to automatically collect water quality samples for estimating suspended sediment loads. The system uses a programmable data logger in conjunction with a stage measurement device, a turbidity sensor, and a pumping sampler. Specialized software enables the user to control the sampling...

  15. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China.

    PubMed

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-02-21

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. One hundred landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by its occurrence. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to landslide cases that occurred in 2013 and 2014. The validation results show that the casualties estimated from the threshold curve were in good agreement with the actual casualties, with small deviations. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research.

  16. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China

    PubMed Central

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-01-01

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. One hundred landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by its occurrence. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to landslide cases that occurred in 2013 and 2014. The validation results show that the casualties estimated from the threshold curve were in good agreement with the actual casualties, with small deviations. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research. PMID:28230810
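
    The sketch below shows one simple way to build such an empirical volume-casualty relationship: a power-law fit on log-log axes with a crude upper envelope standing in for the error-bar-based threshold curve. The data values are synthetic placeholders, not the inventory described above.

        # Power-law fit of casualties vs. landslide volume with a crude upper envelope.
        import numpy as np

        rng = np.random.default_rng(7)
        volume = 10 ** rng.uniform(3, 7, size=100)                 # m^3, synthetic cases
        casualties = np.maximum(1, (0.002 * volume ** 0.5)
                                * rng.lognormal(0.0, 0.6, size=100)).round()

        b, log_a = np.polyfit(np.log10(volume), np.log10(casualties), 1)   # slope, intercept
        resid = np.log10(casualties) - (log_a + b * np.log10(volume))
        envelope_offset = 2 * resid.std()                          # rough upper envelope

        print(f"casualties ~ {10**log_a:.3g} * V^{b:.2f}; "
              f"upper-envelope offset = {envelope_offset:.2f} (log10 units)")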

  17. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    PubMed

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
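
    A toy simulation of the paper's central point is sketched below: when the true concentration-response is a step function with a response threshold, binning outcomes by error-contaminated exposure estimates yields an apparently smooth, monotonically increasing curve. All numbers are illustrative, not values from the study.

        # Step-function C-R plus exposure measurement error produces a smooth estimated C-R.
        import numpy as np

        rng = np.random.default_rng(8)
        true_c = rng.uniform(0.0, 30.0, 200_000)           # true PM2.5 exposure, ug/m^3
        threshold_c = 15.0
        risk = np.where(true_c < threshold_c, 0.01, 0.05)  # true step-function C-R
        outcome = rng.random(true_c.size) < risk

        est_c = true_c + rng.normal(0.0, 5.0, true_c.size)  # estimated exposure with error

        bins = np.arange(0.0, 31.0, 2.0)
        idx = np.digitize(est_c, bins)
        for i in range(1, len(bins)):
            sel = idx == i
            if sel.any():
                print(f"{bins[i-1]:4.0f}-{bins[i]:4.0f} ug/m^3: "
                      f"estimated risk = {outcome[sel].mean():.3f}")
        # The printed risks rise smoothly across bins well below 15 ug/m^3, even though
        # the true response is flat below the threshold.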

  19. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua D.; Hartse, Hans

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes m_b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
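
    A minimal single-channel sketch of a waveform correlation detector is shown below: slide a template over continuous data, compute the normalized correlation statistic, and declare a detection where it exceeds a threshold. A real multichannel detector would combine this statistic over array channels; the template, noise, and threshold here are illustrative assumptions.

        # Normalized waveform correlation detector with a fixed detection threshold.
        import numpy as np

        def correlation_detector(data, template, threshold=0.7):
            n = template.size
            tpl = (template - template.mean()) / (template.std() * np.sqrt(n))
            stats = np.empty(data.size - n + 1)
            for i in range(stats.size):
                win = data[i:i + n]
                denom = win.std() * np.sqrt(n)
                stats[i] = np.dot(win - win.mean(), tpl) / denom if denom > 0 else 0.0
            return np.nonzero(stats > threshold)[0], stats

        rng = np.random.default_rng(9)
        template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
        data = 0.3 * rng.standard_normal(5000)
        data[2000:2100] += 0.5 * template          # buried copy of the template waveform
        hits, cc = correlation_detector(data, template)
        print(hits[:5], cc.max())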

  1. Estimation of signal coherence threshold and concealed spectral lines applied to detection of turbofan engine combustion noise.

    PubMed

    Miles, Jeffrey Hilton

    2011-05-01

    Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet are reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics based coherence threshold. Examples are presented showing that the underlying tonal and coherent broad band structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise.
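
    The sketch below illustrates the statistics-based side of the comparison: estimate the magnitude-squared coherence between two channels and compare it with the standard threshold for zero true coherence, 1 - alpha^(1/(nd - 1)), where nd is the number of averaged segments. Using the independent (non-overlapped) segment count versus the overlapped count changes nd and hence the threshold, which is the selection issue discussed above. Signals and parameters are synthetic.

        # Magnitude-squared coherence with a statistics-based coherence threshold.
        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(10)
        fs, nperseg = 10_000, 1024
        n = 200 * nperseg
        source = rng.standard_normal(n)                           # coherent (combustion-like) part
        x = source + 0.5 * rng.standard_normal(n)                 # internal sensor
        y = np.roll(source, 25) + 2.0 * rng.standard_normal(n)    # far-field microphone

        f, Cxy = coherence(x, y, fs=fs, nperseg=nperseg, noverlap=0)

        nd = n // nperseg                                 # independent (non-overlapped) segments
        alpha = 0.05
        coh_threshold = 1.0 - alpha ** (1.0 / (nd - 1))   # threshold for zero true coherence
        print(f"threshold = {coh_threshold:.4f}, mean coherence = {Cxy.mean():.3f}")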

  2. Variation in the Hearing Threshold in Women during the Menstrual Cycle

    PubMed Central

    Souza, Dayse da Silva; Luckwu, Brunna; Andrade, Wagner Teobaldo Lopes de; Pessoa, Luciane Spinelli de Figueiredo; Nascimento, João Agnaldo do; Rosa, Marine Raquel Diniz da

    2017-01-01

    Introduction  The hormonal changes that occur during the menstrual cycle and their relationship with hearing problems have been studied. However, they have not been well explained. Objective  The objective of our study is to investigate the variation in hearing thresholds in women during the menstrual cycle. Method  We conducted a cohort and longitudinal study. It was composed of 30 volunteers, aged 18–39 years old, of whom 20 were women during the phases of the menstrual cycle and 10 were men (control group) who underwent audiometry and impedance exams, to correlate the possible audiological changes in each phase of the menstrual cycle. Results  There were significant changes in hearing thresholds observed during the menstrual cycle phases in the group of women who used hormonal contraceptives and the group who did not use such contraceptives. Improved hearing thresholds were observed in the late follicular phase in the group who did not use hormonal contraceptives and the hearing thresholds at high frequencies were better. Throughout the menstrual cycle phases, the mean variation was 3.6 dB HL between weeks in the group who used hormonal contraceptives and 4.09 dB HL in the group who did not use them. Conclusions  The present study found that there may be a relationship between hearing changes and hormonal fluctuations during the menstrual cycle based on changes in the hearing thresholds of women. In addition, this study suggests that estrogen has an otoprotective effect on hearing, since the best hearing thresholds were found when estrogen was at its maximum peak. PMID:29018493

  3. Variation in the Hearing Threshold in Women during the Menstrual Cycle.

    PubMed

    Souza, Dayse da Silva; Luckwu, Brunna; Andrade, Wagner Teobaldo Lopes de; Pessoa, Luciane Spinelli de Figueiredo; Nascimento, João Agnaldo do; Rosa, Marine Raquel Diniz da

    2017-10-01

    Introduction  The hormonal changes that occur during the menstrual cycle and their relationship with hearing problems have been studied. However, they have not been well explained. Objective  The objective of our study is to investigate the variation in hearing thresholds in women during the menstrual cycle. Method  We conducted a cohort and longitudinal study. It was composed of 30 volunteers, aged 18-39 years old, of whom 20 were women during the phases of the menstrual cycle and 10 were men (control group) who underwent audiometry and impedance exams, to correlate the possible audiological changes in each phase of the menstrual cycle. Results  There were significant changes in hearing thresholds observed during the menstrual cycle phases in the group of women who used hormonal contraceptives and the group who did not use such contraceptives. Improved hearing thresholds were observed in the late follicular phase in the group who did not use hormonal contraceptives and the hearing thresholds at high frequencies were better. Throughout the menstrual cycle phases, the mean variation was 3.6 dB HL between weeks in the group who used hormonal contraceptives and 4.09 dB HL in the group who did not use them. Conclusions  The present study found that there may be a relationship between hearing changes and hormonal fluctuations during the menstrual cycle based on changes in the hearing thresholds of women. In addition, this study suggests that estrogen has an otoprotective effect on hearing, since the best hearing thresholds were found when estrogen was at its maximum peak.

  4. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by that threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes, instead of the two produced by the standard Otsu's method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. The new TBD region is then processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. All the intermediate foreground and background regions are then, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
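
    A compact sketch of this iterative tri-class scheme is shown below, using scikit-image's standard Otsu threshold. The tolerance, image, and stopping guard are illustrative choices, not the authors' implementation.

        # Iterative tri-class Otsu: shrink a to-be-determined (TBD) region until the
        # Otsu threshold stabilizes, accumulating clear foreground/background along the way.
        import numpy as np
        from skimage.filters import threshold_otsu

        def iterative_triclass_otsu(image, tol=1e-3, max_iter=50):
            foreground = np.zeros(image.shape, dtype=bool)
            background = np.zeros(image.shape, dtype=bool)
            tbd = np.ones(image.shape, dtype=bool)
            thr = threshold_otsu(image)
            for _ in range(max_iter):
                values = image[tbd]
                m_low = values[values <= thr].mean()         # mean of the lower class
                m_high = values[values > thr].mean()         # mean of the upper class
                foreground |= tbd & (image > m_high)         # clearly foreground
                background |= tbd & (image < m_low)          # clearly background
                tbd &= (image >= m_low) & (image <= m_high)  # new, smaller TBD region
                if tbd.sum() < 2:
                    break
                new_thr = threshold_otsu(image[tbd])
                if abs(new_thr - thr) < tol:                 # thresholds have converged
                    break
                thr = new_thr
            foreground |= tbd & (image > thr)                # resolve the final TBD region
            background |= tbd & (image <= thr)
            return foreground, background

        rng = np.random.default_rng(11)
        img = rng.normal(0.3, 0.05, (128, 128)); img[40:60, 40:60] += 0.25
        fg, bg = iterative_triclass_otsu(img)
        print(fg.sum(), bg.sum())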

  5. A study on the effect of prolonged mobile phone use on pure tone audiometry thresholds of medical students of Sikkim

    PubMed Central

    Das, S; Chakraborty, S; Mahanta, B

    2017-01-01

    Introduction: Mobile phones have become indispensable for daily activities, and people are exposed to them from an early age. There is, however, concern about the harmful effect of the electromagnetic radiation emitted by mobile phones. Objective: The objective was to study the effect of mobile phone use on a person's average pure tone audiometry (PTA) threshold and the changes in the pure tone threshold at high frequencies, such as 2 kHz, 4 kHz, and 8 kHz, among students with prolonged exposure to mobile phones. Methodology: A cross-sectional study was conducted among medical students who had been using mobile phones for the past 5 years. The effect of mobile phones on the PTA threshold in the exposed ear and the nonexposed ear was assessed. Results: The study shows that there is a significant difference in average air conduction (AC) and bone conduction (BC) hearing thresholds between the exposed and the nonexposed ears (P < 0.05). A significant rise of both AC and BC thresholds at individual frequencies between the exposed and the nonexposed ear is also noted. Conclusion: The study shows changes in the hearing threshold of the exposed ear when compared with the nonexposed ear. There are, however, many unanswered questions that provide an interesting avenue for further research. Until concrete evidence is available, the only feasible way to control exposure is to limit the duration of mobile phone usage. PMID:28272071

  6. Metal Artifact Reduction in X-ray Computed Tomography Using Computer-Aided Design Data of Implants as Prior Information.

    PubMed

    Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A

    2017-06-01

    The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality which is superior to TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on the MAR corrected CT images.

  7. Comparison between changes in flood hazard and risk in Spain using historical information

    NASA Astrophysics Data System (ADS)

    Llasat, Maria-Carmen; Mediero, Luis; Garrote, Luis; Gilabert, Joan

    2015-04-01

    Recently, the COST Action ES0901 "European procedures for flood frequency estimation (FloodFreq)" had as its objective "the comparison and evaluation of methods for flood frequency estimation under the various climatologic and geographic conditions found in Europe". It highlighted the improvement of regional analyses over at-site estimates in terms of the uncertainty of quantile estimates. In the case of Spain, a regional analysis was carried out at a national scale, which allows the flow threshold corresponding to a given return period to be identified from the observed flow series recorded at a gauging station. In addition, Mediero et al. (2014) studied the possible influence of non-stationarity on flood series for the period 1942-2009. In parallel, Barnolas and Llasat (2007), among others, collected documentary information on catastrophic flood events in Spain over the last centuries. Traditionally, the first approach ("top-down") identifies a flood as catastrophic when it exceeds the 500-year return period flood, whereas the second ("bottom-up") approach accounts for flood damages (Llasat et al., 2005). This study presents a comparison between both approaches, discussing the potential factors that can lead to discrepancies between them, as well as accounting for information about major changes experienced in the catchments that could lead to changes in flood hazard and risk.

  8. Cloud detection method for Chinese moderate high resolution satellite imagery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo

    2016-10-01

    Cloud detection in satellite imagery is very important for quantitative remote sensing research and applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate to high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), only have four available bands (blue, green, red, and near infrared), which falls far short of the requirements of most cloud detection methods. To solve this problem, an improved and automated cloud detection method for Chinese satellite sensors called OCM (Object-oriented Cloud and cloud-shadow Matching method) is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to get an initial cloud map. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a cloud map with high confidence but a large amount of cloud omission, and a loose threshold produces a cloud map with low confidence and a large amount of commission. Second, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Third, the cloud maps and cloud-shadow map are transferred to cloud objects and cloud-shadow objects. Cloud and cloud-shadow usually occur in pairs; consequently, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.

  9. Postnatal gestational age estimation using newborn screening blood spots: a proposed validation protocol

    PubMed Central

    Murphy, Malia S Q; Hawken, Steven; Atkinson, Katherine M; Milburn, Jennifer; Pervin, Jesmin; Gravett, Courtney; Stringer, Jeffrey S A; Rahman, Anisur; Lackritz, Eve; Chakraborty, Pranesh; Wilson, Kumanan

    2017-01-01

    Background Knowledge of gestational age (GA) is critical for guiding neonatal care and quantifying regional burdens of preterm birth. In settings where access to ultrasound dating is limited, postnatal estimates are frequently used despite the issues of accuracy associated with postnatal approaches. Newborn metabolic profiles are known to vary by severity of preterm birth. Recent work by our group and others has highlighted the accuracy of postnatal GA estimation algorithms derived from routinely collected newborn screening profiles. This protocol outlines the validation of a GA model originally developed in a North American cohort among international newborn cohorts. Methods Our primary objective is to use blood spot samples collected from infants born in Zambia and Bangladesh to evaluate our algorithm’s capacity to correctly classify GA within 1, 2, 3 and 4 weeks. Secondary objectives are to 1) determine the algorithm's accuracy in small-for-gestational-age and large-for-gestational-age infants, 2) determine its ability to correctly discriminate GA of newborns across dichotomous thresholds of preterm birth (≤34 weeks, <37 weeks GA) and 3) compare the relative performance of algorithms derived from newborn screening panels including all available analytes and those restricted to analyte subsets. The study population will consist of infants born to mothers already enrolled in one of two preterm birth cohorts in Lusaka, Zambia, and Matlab, Bangladesh. Dried blood spot samples will be collected and sent for analysis in Ontario, Canada, for model validation. Discussion This study will determine the validity of a GA estimation algorithm across ethnically diverse infant populations and assess population specific variations in newborn metabolic profiles. PMID:29104765

  10. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147

  11. Cross-validation analysis for genetic evaluation models for ranking in endurance horses.

    PubMed

    García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I

    2018-01-01

    Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for the routine genetic evaluations for ranking in endurance horses.

  12. Generating standardized image data for testing and calibrating quantification of volumes, surfaces, lengths, and object counts in fibrous and porous materials using X-ray microtomography.

    PubMed

    Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk

    2018-06-01

    Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.

  13. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold by applying a non-parametric statistical method, kernel density estimation, to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), which must forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low-temperature events are not directly part of an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demand when such temperatures occur. We present a detailed explanation of our One-in-N Algorithm and compare it with methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is also tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
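
    An illustrative sketch of the kernel-density idea, not the thesis' exact One-in-N Algorithm: a Gaussian KDE is fitted to daily average wind-adjusted temperatures, and the one-in-N threshold is taken as the temperature whose estimated probability mass below it corresponds to one day in N years. The synthetic temperature series and its normal shape are assumptions.

      # One-in-N low temperature threshold from a kernel density estimate.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      temps = rng.normal(loc=-2.0, scale=8.0, size=30 * 365)  # synthetic daily temperatures, deg C
      N = 10
      target = 1.0 / (N * 365)          # probability mass below the one-in-N threshold

      kde = gaussian_kde(temps)
      lo, hi = temps.min() - 20.0, temps.max()
      for _ in range(60):               # bisection on the KDE cumulative probability
          mid = 0.5 * (lo + hi)
          if kde.integrate_box_1d(-np.inf, mid) < target:
              lo = mid
          else:
              hi = mid

      print(f"one-in-{N} low temperature threshold: {0.5 * (lo + hi):.1f} deg C")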

  14. Reference guide to odor thresholds for hazardous air pollutants listed in the Clean Air Act amendments of 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.

    1992-03-01

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role of providing technical assistance to State and local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented, along with a detailed discussion of the criteria used to evaluate the quality of published odor threshold values and of the use of odor threshold information in risk assessment. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated against the stated criteria, and the values of acceptable quality are used to determine a geometric mean or best estimate.

  15. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

    Potential predictive biomarkers are often measured on a continuous scale, but in practice a threshold value that divides the patient population into biomarker ‘positive’ and ‘negative’ groups is desirable. Early-phase clinical trials increasingly use biomarkers for patient selection, but at this stage little is likely to be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase the power to demonstrate efficacy within a patient subpopulation whose parameters are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we show increased power over fixed-threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold defining the population subset are unbiased and often more precise than those from fixed-threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
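
    A minimal sketch of the final-analysis step, the binomial exact test on the selected biomarker-positive subset; the counts, the null response rate p0 and the one-sided alternative are illustrative assumptions rather than values from the published design.

      # Exact binomial test of the response rate in the biomarker-positive subset.
      from scipy.stats import binom

      responders, n_selected = 14, 30   # responses among patients above the current threshold (assumed)
      p0 = 0.20                         # highest response rate considered uninteresting (assumed)

      # One-sided exact p-value: P(X >= responders) when X ~ Binomial(n_selected, p0)
      p_value = binom.sf(responders - 1, n_selected, p0)
      print(f"exact binomial p-value: {p_value:.4f}")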

  16. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    PubMed

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting a single species versus coexistence of two or more species relate to hypotheses about the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between the two groups did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was more than five times greater than that for supporting two or more species of colubrids (6.7 km²). This large difference in the minimum area required to support more than one species implies that, on islands in the Gulf of California, relative extinction risks are higher for the coexistence of multiple rattlesnake species and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
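
    A simplified sketch of the threshold estimation: a binary logistic regression of occupancy on log island area (the study used ordinal logistic regression with isolation and age as additional covariates) is inverted to find the area at which occupancy probability reaches 50%. The island data and coefficients below are synthetic.

      # Minimum area threshold from an inverted (binary) logistic regression.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      log_area = rng.uniform(-2, 4, size=83)        # log10 island area (km^2), synthetic
      p_true = 1.0 / (1.0 + np.exp(-2.0 * (log_area - 0.3)))
      occupied = (rng.uniform(size=83) < p_true).astype(int)

      model = LogisticRegression().fit(log_area.reshape(-1, 1), occupied)
      b0, b1 = model.intercept_[0], model.coef_[0, 0]
      log_area_50 = -b0 / b1                        # logit = 0  ->  occupancy probability = 0.5
      print(f"estimated minimum area threshold: {10 ** log_area_50:.2f} km^2")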

  17. Multisampling suprathreshold perimetry: a comparison with conventional suprathreshold and full-threshold strategies by computer simulation.

    PubMed

    Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David

    2003-06-01

    To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss), and for each condition the number of test locations classified as defective (suprathreshold strategies) or having a pattern deviation probability of less than 5% (full-threshold strategy) was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss; both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss, as did conventional suprathreshold perimetry. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
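
    One way to read the "three seen or three missed" criterion is that a location is classified as normal if three stimuli are seen before three are missed; under that reading, the pass probability for a per-presentation seeing probability p follows from a short negative-binomial sum, sketched below. The example probabilities are assumptions.

      # P(classified normal) = sum over k = 0..2 of C(2 + k, k) * p^3 * (1 - p)^k,
      # where k is the number of misses occurring before the third "seen" response.
      from math import comb

      def p_pass_multisampling(p, seen_needed=3, missed_allowed=3):
          return sum(
              comb(seen_needed - 1 + k, k) * p ** seen_needed * (1 - p) ** k
              for k in range(missed_allowed)
          )

      for p in (0.95, 0.5, 0.05):     # near-normal, borderline and defective locations
          print(f"p(seen)={p:.2f}  ->  p(classified normal)={p_pass_multisampling(p):.3f}")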

  18. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    NASA Astrophysics Data System (ADS)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
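
    A minimal sketch of the simpler of the two approaches: alarms from the rainfall thresholds are suppressed whenever the averaged soil moisture of a territorial unit falls below a value at which landslides are not expected. The soil moisture cutoff and the toy intensity-duration rule are illustrative assumptions, not the calibrated values of the operational system.

      # Soil-moisture gating of a rainfall-threshold landslide warning (illustrative).
      def landslide_warning(rain_intensity_mm_h, rain_duration_h,
                            soil_moisture, soil_moisture_min=0.25):
          if soil_moisture < soil_moisture_min:
              return False                        # too dry: rainfall thresholds not applied
          a, b = 10.0, -0.5                       # toy intensity-duration threshold I = a * D^b
          return rain_intensity_mm_h > a * rain_duration_h ** b

      print(landslide_warning(8.0, 6.0, soil_moisture=0.15))   # False: dry soil suppresses the alarm
      print(landslide_warning(8.0, 6.0, soil_moisture=0.40))   # True: 8 mm/h exceeds ~4.1 mm/h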

  19. Thermal sensitivity and cardiovascular reactivity to stress in healthy males.

    PubMed

    Conde-Guzón, Pablo Antonio; Bartolomé-Albistegui, María Teresa; Quirós, Pilar; Cabestrero, Raúl

    2011-11-01

    This paper examines the association of cardiovascular reactivity with thermal thresholds (detection and unpleasantness). Heart period (HP) and systolic (SBP) and diastolic (DBP) blood pressure of 42 healthy young males were recorded during a cardiovascular reactivity task (a videogame based on Sidman's avoidance paradigm). Thermal sensitivity, assessed as detection and unpleasantness thresholds to radiant heat applied to the forearm, was also estimated for each participant. Participants whose change scores in the cardiovascular variables from baseline to task were at or above the 65th percentile (P65) were classified as reactors, and those with change scores at or below the 35th percentile (P35) as non-reactors. Significant differences between groups in unpleasantness thresholds were observed for blood pressure (BP) but not for HP; reactors exhibited significantly higher unpleasantness thresholds than non-reactors. No significant differences in detection thresholds were found between groups.
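
    A short sketch of the percentile split used to define the groups: change scores at or above the 65th percentile are labelled reactors, those at or below the 35th percentile non-reactors, and the rest are excluded from the comparison. The synthetic change scores stand in for the recorded SBP data.

      # Percentile-based reactor / non-reactor classification (synthetic data).
      import numpy as np

      rng = np.random.default_rng(4)
      delta_sbp = rng.normal(10, 6, size=42)        # SBP change from baseline to task (mmHg)

      p35, p65 = np.percentile(delta_sbp, [35, 65])
      reactors = delta_sbp >= p65
      non_reactors = delta_sbp <= p35
      print(f"reactors: {reactors.sum()}, non-reactors: {non_reactors.sum()}")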

  20. Vitamin D supplementation increases calcium absorption without a threshold effect

    USDA-ARS?s Scientific Manuscript database

    The maximal calcium absorption in response to vitamin D has been proposed as a biomarker for vitamin D sufficiency. Our objective was to determine whether there is a threshold beyond which increasing doses of vitamin D, or concentrations of serum 25-hydroxyvitamin D [25(OH)D], no longer increase cal...
