Sample records for objective threshold estimation

  1. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performances of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results found in the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure, implemented in Matlab for the analysis of EMG signals, appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Estimating phonation threshold pressure.

    PubMed

    Fisher, K V; Swank, P R

    1997-10-01

    Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.

  3. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest impulse; 2) a model, consisting of a uniform probability density of impulse time…

  4. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in

  5. Global gray-level thresholding based on object size.

    PubMed

    Ranefall, Petter; Wählby, Carolina

    2016-04-01

    In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
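
The size-based selection rule can be illustrated with a brute-force Python sketch (not the authors' quasi-linear component-tree implementation; the function and parameter names here are hypothetical): for each candidate gray level, count the connected components whose pixel count falls inside the expected size interval, and keep the level maximizing that count.

```python
import numpy as np
from scipy import ndimage

def size_based_threshold(img, smin, smax, levels=None):
    """Pick the global threshold that maximizes the number of connected
    components whose size lies in [smin, smax]. Brute-force stand-in for
    the component-tree method described in the record above."""
    if levels is None:
        levels = np.unique(img)
    best_t, best_count = levels[0], -1
    for t in levels:
        lab, n = ndimage.label(img >= t)   # 4-connected components
        if n == 0:
            continue
        sizes = np.bincount(lab.ravel())[1:]  # skip background label 0
        count = int(np.sum((sizes >= smin) & (sizes <= smax)))
        if count > best_count:
            best_t, best_count = t, count
    return best_t, best_count

# Toy image: two bright 3x3 "nuclei" (size 9) on a dark background.
img = np.zeros((20, 20), dtype=int)
img[2:5, 2:5] = 100
img[10:13, 10:13] = 100
t, n = size_based_threshold(img, smin=5, smax=20)
```

On the toy image the rule selects the bright level (100), recovering both objects; the published method achieves the same selection far more efficiently via the component tree.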

  6. Improvements in estimating proportions of objects from multispectral data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Hyde, P. D.; Richardson, W.

    1974-01-01

    Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded little definitive result. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.
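
The core proportion-estimation step can be sketched as constrained least-squares unmixing (a generic sketch, not the authors' exact algorithm; here the sum-to-one constraint is imposed softly through a heavily weighted extra equation, and the signatures are invented).

```python
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(signatures, pixel, w=1e3):
    """Nonnegative least-squares mixture proportions that (softly) sum
    to one: append a heavily weighted all-ones row to enforce sum ~ 1."""
    S = np.vstack([signatures, w * np.ones(signatures.shape[1])])
    x = np.append(pixel, w)
    p, _ = nnls(S, x)
    return p

# Hypothetical 3-band signatures for two cover types (columns).
S = np.array([[0.1, 0.8],
              [0.2, 0.6],
              [0.9, 0.1]])
truth = np.array([0.3, 0.7])
pixel = S @ truth               # noise-free mixed pixel
p = estimate_proportions(S, pixel)
```

With a noise-free mixed pixel the true proportions are recovered exactly; with real multispectral data the estimate degrades as the signature set becomes ill-conditioned, which is the failure mode the abstract reports for the ERTS wheat experiment.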

  7. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
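
The selection loop can be sketched as follows: for each candidate threshold, fit a generalized Pareto distribution (GPD) to the exceedances and score the fit with the Anderson-Darling (AD) statistic. This is a simplified stand-in for the paper's procedure: it picks the candidate minimizing the AD statistic rather than evaluating the goodness-of-fit p-value, and the data are synthetic.

```python
import numpy as np
from scipy.stats import genpareto

def ad_statistic(u):
    """Anderson-Darling statistic for probabilities u ~ Uniform(0, 1)."""
    u = np.sort(u)
    n = len(u)
    i = np.arange(1, n + 1)
    eps = 1e-12  # guard against log(0)
    return -n - np.mean((2 * i - 1) * (np.log(u + eps) + np.log(1 - u[::-1] + eps)))

def select_threshold(x, candidates, min_exceed=30):
    """Return the candidate threshold whose GPD fit to the exceedances
    yields the smallest AD statistic (simplified POT threshold choice)."""
    best_t, best_a2 = None, np.inf
    for t in candidates:
        exc = x[x > t] - t
        if len(exc) < min_exceed:
            continue
        c, loc, scale = genpareto.fit(exc, floc=0)
        a2 = ad_statistic(genpareto.cdf(exc, c, loc=loc, scale=scale))
        if a2 < best_a2:
            best_t, best_a2 = t, a2
    return best_t, best_a2

rng = np.random.default_rng(0)
x = genpareto.rvs(0.1, loc=2.0, scale=1.0, size=2000, random_state=rng)
t, a2 = select_threshold(x, candidates=np.linspace(0.5, 4.0, 8))
```

Wrapping `select_threshold` in a bootstrap loop, as the paper proposes, would quantify the uncertainty of both the threshold and the resulting high-return-period quantiles.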

  8. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.

  9. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unresolved the reliability of those methods in setting TMS doses. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds equally reliably with MOCS or REPT, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection

    NASA Astrophysics Data System (ADS)

    Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei

    Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the characteristic features of this application: rail images are unimodal, and the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.

  11. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories = 2, incidence = 30%, number of quantitative trait loci = 50, h² = 0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work showed that the threshold model is well suited to predicting GEBVs of threshold traits, and BayesTCπ is suggested as the method of choice for GS of threshold traits. PMID:23149458

  12. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears

    USGS Publications Warehouse

    Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rate for adult females averaged over 5 years ([formula omitted]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [formula omitted], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  13. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears.

    PubMed

    Laufenberg, Jared S; Clark, Joseph D; Chandler, Richard B

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rates for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  14. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this
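
One simple way to objectify a threshold in the spirit described (maximize correct predictions while balancing Type I and Type II errors) is to pick the intensity that maximizes the true skill statistic, TSS = hit rate − false-alarm rate. This is an illustrative stand-in for the authors' criterion, and the storm data below are invented.

```python
import numpy as np

def objective_threshold(intensity, triggered):
    """Pick the intensity threshold maximizing TSS = hit rate - false-alarm
    rate, which jointly penalizes misses (Type II) and false alarms (Type I)."""
    intensity = np.asarray(intensity, dtype=float)
    triggered = np.asarray(triggered, dtype=bool)
    best_t, best_tss = None, -np.inf
    for t in np.unique(intensity):
        pred = intensity >= t
        hits = np.sum(pred & triggered)
        misses = np.sum(~pred & triggered)
        fa = np.sum(pred & ~triggered)
        cn = np.sum(~pred & ~triggered)
        if (hits + misses) == 0 or (fa + cn) == 0:
            continue
        tss = hits / (hits + misses) - fa / (fa + cn)
        if tss > best_tss:
            best_t, best_tss = t, tss
    return best_t, best_tss

# Hypothetical 15-min peak intensities (mm/h) and observed debris-flow response.
I = np.array([ 5, 8, 12, 15, 18, 22, 25, 30, 35, 40])
y = np.array([ 0, 0,  0,  0,  1,  0,  1,  1,  1,  1])
t, tss = objective_threshold(I, y)
```

On this toy record the optimum falls at 18 mm/h, which correctly flags all five triggering storms at the cost of one false alarm.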

  15. Colour thresholding and objective quantification in bioimaging

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.

  16. Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.

    PubMed

    Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth

    2018-03-26

    When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
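
The lapse-induced bias for a 1-up 2-down staircase can be worked out analytically: if a fraction λ of trials are attentional lapses answered at chance, the measured proportion correct is 0.5·λ + (1 − λ)·Ψ(x), so the staircase converges where the underlying performance Ψ(x) exceeds the nominal ~70.7% target, inflating the threshold. A small sketch, assuming a cumulative-Gaussian observer (μ, σ invented):

```python
import numpy as np
from scipy.stats import norm

def staircase_convergence_point(lapse, target=1 / np.sqrt(2), mu=0.0, sigma=1.0):
    """Stimulus level where a lapsing observer's measured proportion correct
    equals the 1-up 2-down target (~70.7%). Lapses are modelled as random
    guesses (50% correct) on a fraction `lapse` of trials:
        P(correct) = 0.5*lapse + (1 - lapse) * Phi((x - mu) / sigma)."""
    p_needed = (target - 0.5 * lapse) / (1.0 - lapse)
    return mu + sigma * norm.ppf(p_needed)

x0 = staircase_convergence_point(lapse=0.0)   # fully attentive observer
x1 = staircase_convergence_point(lapse=0.1)   # 10% attentional lapses
```

Even a 10% lapse rate shifts the convergence point upward (here from about 0.54σ to about 0.61σ above the mean), consistent with the abstract's finding that reversal-based estimates are biased by inattentiveness while post-hoc psychometric fits, which can absorb a lapse parameter, are more robust.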

  17. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. The test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection and thus lacked quantitative measurement. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
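
The threshold-hunting idea can be sketched in the spirit of Awiszus-style maximum-likelihood tracking (not the authors' exact modified algorithm; the sigmoid response model, relative spread, and intensity grid below are assumptions): each trial stimulates at the current maximum-likelihood threshold estimate and updates the likelihood with the binary evoked-response outcome.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def ml_threshold_hunt(true_thresh, n_trials=30, spread=0.07, grid=None):
    """Maximum-likelihood threshold hunting (sketch). Response model:
    P(response | intensity s, threshold t) = Phi((s - t) / (spread * t)).
    The simulated subject responds according to the same model at the
    true threshold; the next stimulus is placed at the current ML estimate."""
    if grid is None:
        grid = np.linspace(20, 80, 601)   # candidate thresholds (% stimulator output)
    loglik = np.zeros_like(grid)
    s = grid[len(grid) // 2]              # start mid-range
    for _ in range(n_trials):
        p_true = norm.cdf((s - true_thresh) / (spread * true_thresh))
        resp = rng.random() < p_true      # simulated evoked-response outcome
        p = norm.cdf((s - grid) / (spread * grid))
        loglik += np.log(np.where(resp, p, 1.0 - p) + 1e-12)
        s = grid[np.argmax(loglik)]       # next stimulus at the ML estimate
    return float(s)

est = ml_threshold_hunt(true_thresh=45.0)
```

In this simulation the estimate typically settles within a few percent of the true threshold after a few dozen trials, which is the kind of precision claim the abstract's computer simulation verifies for the modified algorithm.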

  18. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
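
A minimal sketch of the POET recipe: remove the leading-K principal components from the sample covariance, threshold the off-diagonal entries of the residual, and recompose. For simplicity this uses a fixed hard threshold `tau`, whereas the actual estimator uses an adaptive entry-wise threshold; the data-generating factor model below is invented.

```python
import numpy as np

def poet(X, K, tau):
    """POET sketch: covariance = leading-K principal components of the
    sample covariance + hard-thresholded residual (threshold tau on
    off-diagonal entries; the diagonal is kept intact)."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)                   # eigenvalues ascending
    idx = np.argsort(w)[::-1][:K]              # indices of the K largest
    low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T
    R = S - low_rank                           # residual covariance
    R_t = np.where(np.abs(R) >= tau, R, 0.0)   # sparsify small entries
    np.fill_diagonal(R_t, np.diag(R))          # never threshold variances
    return low_rank + R_t

rng = np.random.default_rng(0)
n, p = 500, 20
f = rng.standard_normal((n, 1))               # one common factor
B = rng.standard_normal((1, p))               # factor loadings
X = f @ B + rng.standard_normal((n, p))       # factor model + idiosyncratic noise
C = poet(X, K=1, tau=0.2)
```

The thresholding step is what exploits conditional sparsity: after the common factor is removed, the residual off-diagonal entries are mostly estimation noise and are zeroed out.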

  19. Dental age estimation: the role of probability estimates at the 10 year threshold.

    PubMed

    Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham

    2014-08-01

    The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age were compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below the threshold on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
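
The per-tooth calculation maps directly onto the normal CDF: P(age > 10) = 1 − NORMDIST(10, mean, SD, TRUE), averaged over the assessable teeth. A sketch with invented stage statistics (the means and SDs below are illustrative, not the study's reference data):

```python
from statistics import NormalDist

def prob_over_threshold(tooth_stats, threshold=10.0):
    """Average, over the assessable teeth, of P(age > threshold), given each
    tooth stage's mean age of attainment and its SD. Python analogue of
    Excel's 1 - NORMDIST(threshold, mean, sd, TRUE)."""
    probs = [1.0 - NormalDist(mu, sd).cdf(threshold) for mu, sd in tooth_stats]
    return sum(probs) / len(probs)

# Hypothetical stage data: (mean age, SD) for four developing teeth.
teeth = [(10.8, 0.9), (11.2, 1.1), (10.4, 0.8), (11.0, 1.0)]
p = prob_over_threshold(teeth)
```

Here the averaged probability is about 0.80, i.e. the subject would be assigned to the over-10 group; the study's validation reports 94% correct assignment with this kind of averaging.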

  20. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than the stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
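
For SIS-type dynamics the epidemic threshold is commonly estimated as the reciprocal of the largest adjacency eigenvalue, and using only the deterministic edges yields a conservative (upper) bound, since adding the stochastic edges can only raise the spectral radius. A toy sketch (not the paper's specific models; the ring-plus-random-layer network is invented):

```python
import numpy as np

def epidemic_threshold(adj):
    """SIS epidemic threshold estimate: tau = 1 / lambda_max(A)."""
    lam = np.linalg.eigvalsh(adj).max()
    return 1.0 / lam

# Deterministic backbone: a 4-node ring.
A_det = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
tau_det = epidemic_threshold(A_det)          # bound from deterministic edges only

# Stochastic layer: the remaining possible edges, each present with
# probability p, handled here in mean-field form as p * A_rand.
p = 0.3
A_rand = np.ones((4, 4)) - np.eye(4) - A_det
tau_full = epidemic_threshold(A_det + p * A_rand)
```

The deterministic-only estimate (0.5 for the ring) upper-bounds the full-network threshold (≈0.435 here), illustrating why thresholds can be bracketed from the easily observed deterministic connections alone.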

  21. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088

  2. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  3. Utilizing Objective Drought Thresholds to Improve Drought Monitoring with the SPI

    NASA Astrophysics Data System (ADS)

    Leasor, Z. T.; Quiring, S. M.

    2017-12-01

    Drought is a prominent climatic hazard in the south-central United States. Droughts are frequently monitored using the severity categories determined by the U.S. Drought Monitor (USDM). This study uses the Standardized Precipitation Index (SPI) to conduct a drought frequency analysis across Texas, Oklahoma, and Kansas using PRISM precipitation data from 1900-2015. The SPI is shown to be spatiotemporally variant across the south-central United States. In particular, utilizing the default USDM severity thresholds may underestimate drought severity in arid regions. Objective drought thresholds were implemented by fitting a CDF to each location's SPI distribution. This approach results in a more homogeneous distribution of drought frequencies across each severity category. Results also indicate that it may be beneficial to develop objective drought thresholds for each season and SPI timescale. This research serves as a proof-of-concept and demonstrates how drought thresholds should be objectively developed so that they are appropriate for each climatic region.
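    A minimal sketch of the objective-threshold idea, assuming thresholds are set at fixed climatological percentiles of each location's SPI sample (Python; the percentile targets below follow the nominal USDM category frequencies but are illustrative, and the function and variable names are hypothetical):

```python
def objective_thresholds(spi_values, target_pcts=(30, 20, 10, 5, 2)):
    """Location-specific drought thresholds: instead of fixed SPI
    cutoffs, take the SPI value at each target percentile of the
    local SPI sample, so every location experiences each severity
    category with the same climatological frequency."""
    xs = sorted(spi_values)
    n = len(xs)
    thresholds = []
    for pct in target_pcts:
        k = max(0, min(n - 1, round(pct / 100.0 * n) - 1))   # nearest rank
        thresholds.append(xs[k])
    return thresholds

spi_sample = [-2.0 + 0.04 * i for i in range(101)]   # toy SPI record
thresholds = objective_thresholds(spi_sample)        # D0..D4 cutoffs, decreasing
```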

  4. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355

  5. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.

  6. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. The test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail, and many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurement. A reliable and quantitative motor map is important for elucidating the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations in common marmosets, based on motor thresholds estimated stochastically from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays. ECS was applied through the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated with a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets, and a computer simulation confirmed that the algorithm estimates the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we achieved reliable motor mapping in common marmosets with the ECS system.
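    The general idea of stochastic, maximum-likelihood threshold hunting can be sketched as follows (Python; this models the response probability as a cumulative Gaussian with a fixed, assumed spread and grid-searches the likelihood, which is a simplification of the modified algorithm used in the study):

```python
import math

def ml_threshold(intensities, responses, spread=5.0):
    """Maximum-likelihood threshold estimate from binary responses.
    Response probability is modeled as a cumulative Gaussian in
    stimulus intensity with a fixed, assumed spread; the threshold
    is the grid point maximizing the Bernoulli log-likelihood."""
    lo, hi = min(intensities), max(intensities)
    grid = [lo + (hi - lo) * k / 200.0 for k in range(201)]

    def loglik(th):
        ll = 0.0
        for x, r in zip(intensities, responses):
            p = 0.5 * (1.0 + math.erf((x - th) / (spread * math.sqrt(2.0))))
            p = min(max(p, 1e-9), 1.0 - 1e-9)   # guard against log(0)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll

    return max(grid, key=loglik)

# simulated session (arbitrary stimulus units): MEPs appear above ~50
xs = [30, 35, 40, 45, 50, 55, 60, 65, 70]
rs = [0, 0, 0, 0, 1, 1, 1, 1, 1]
th = ml_threshold(xs, rs)
```

    In an adaptive threshold-hunting procedure, the next stimulation intensity would be chosen near the current estimate rather than from a fixed list, which is what makes the session-length short in practice.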

  7. On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?

    PubMed

    Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro

    2016-01-01

    Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Estimation of the geochemical threshold and its statistical significance

    USGS Publications Warehouse

    Miesch, A.T.

    1981-01-01

    A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
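    The adjusted-gap statistic can be sketched as follows (Python; it standardizes the data, weights each gap between adjacent ordered values by the unit-normal density at the gap midpoint, and returns the largest adjusted gap and its midpoint, the candidate threshold; the comparison against a critical value, and the original transformations, are omitted):

```python
import math

def adjusted_gaps(values):
    """Largest adjusted gap in an ordered, standardized array.
    Each gap between adjacent values is weighted by the unit-normal
    density at its midpoint, so wide gaps in the sparse tails are
    not over-counted. Returns the largest adjusted gap and its
    midpoint (the candidate threshold, in standardized units)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    z = sorted((v - mean) / math.sqrt(var) for v in values)
    best_gap, best_mid = -1.0, 0.0
    for a, b in zip(z, z[1:]):
        mid = 0.5 * (a + b)
        density = math.exp(-0.5 * mid * mid) / math.sqrt(2.0 * math.pi)
        gap = (b - a) * density          # gap adjusted for expected frequency
        if gap > best_gap:
            best_gap, best_mid = gap, mid
    return best_gap, best_mid

background = [0.05, 0.10, 0.12, 0.15, 0.18, 0.20]   # ordinary values
anomalous = [5.0, 5.2]                               # possibly mineralized
gap, mid = adjusted_gaps(background + anomalous)     # gap separates the groups
```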

  9. Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Pang; Yu, Yue

    2017-05-01

    This paper presents a position estimation model for the switched reluctance motor based on a single threshold angle. Given the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (a 7.5° threshold angle for a 12/8 SRM). The sensorless model is built in Matlab/Simulink, simulations are carried out under both steady-state and transient conditions, and the results verify the validity and feasibility of the method.

  10. Genetic parameters for hoof health traits estimated with linear and threshold models using alternative cohorts.

    PubMed

    Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F

    2017-04-01

    A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding

  11. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  12. Setting objective thresholds for rare event detection in flow cytometry

    PubMed Central

    Richards, Adam J.; Staats, Janet; Enzor, Jennifer; McKinnon, Katherine; Frelinger, Jacob; Denny, Thomas N.; Weinhold, Kent J.; Chan, Cliburn

    2014-01-01

    The accurate identification of rare antigen-specific cytokine-positive cells from peripheral blood mononuclear cells (PBMC) after antigenic stimulation in an intracellular staining (ICS) flow cytometry assay is challenging, as cytokine-positive events may be fairly diffusely distributed and lack an obvious separation from the negative population. Traditionally, the approach by flow operators has been to manually set a positivity threshold to partition events into cytokine-positive and cytokine-negative. This approach suffers from subjectivity and inconsistency across different flow operators. The use of statistical clustering methods does not remove the need for an objective threshold between positive and negative events, since consistent identification of rare event subsets is highly challenging for automated algorithms, especially when there is distributional overlap between the positive and negative events (“smear”). We present a new approach, based on the Fβ measure, that is similar to manual thresholding in providing a hard cutoff, but has the advantage of being determined objectively. The performance of this algorithm is compared with results obtained by expert visual gating. Several ICS data sets from the External Quality Assurance Program Oversight Laboratory (EQAPOL) proficiency program were used to make the comparisons. We first show that visually determined thresholds are difficult to reproduce and pose a problem when comparing results across operators or laboratories, and we illustrate the problems that arise with commonly employed clustering algorithms. In contrast, a single parameterization of the Fβ method performs consistently across different centers, samples, and instruments because it optimizes the precision/recall tradeoff by using both negative and positive controls. PMID:24727143
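    The core of an Fβ-based objective cutoff can be sketched as follows (Python; the scan assumes one-dimensional event intensities with labeled positive- and negative-control events, and does not reproduce the paper's exact parameterization):

```python
def f_beta_threshold(neg_control, pos_control, beta=1.0):
    """Scan candidate cutoffs and keep the one maximizing the F-beta
    score: negative-control events should fall at or below the
    cutoff, positive-control events above it."""
    best_t, best_f = None, -1.0
    for t in sorted(set(neg_control) | set(pos_control)):
        tp = sum(1 for x in pos_control if x > t)   # true positives
        fp = sum(1 for x in neg_control if x > t)   # false positives
        fn = len(pos_control) - tp                  # missed positives
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        b2 = beta * beta
        f = (1.0 + b2) * precision * recall / (b2 * precision + recall)
        if f > best_f:
            best_f, best_t = f, t
    return best_t, best_f

neg_ctrl = [0.1, 0.2, 0.3, 0.4]   # intensities of negative-control events
pos_ctrl = [0.9, 1.0, 1.1]        # intensities of positive-control events
cutoff, score = f_beta_threshold(neg_ctrl, pos_ctrl)
```

    Choosing beta < 1 or > 1 shifts the tradeoff toward precision or recall, which is how a single parameterization can be tuned once and then reused across centers.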

  13. Selecting the best tone-pip stimulus-envelope time for estimating an objective middle-latency response threshold for low- and middle-tone sensorineural hearing losses.

    PubMed

    Xu, Z M; De Vel, E; Vinck, B; Van Cauwenberge, P

    1995-01-01

    The effects of rise-fall and plateau times on the Pa component of the middle-latency response (MLR) were investigated in normally hearing subjects, and an objective MLR threshold was measured in patients with low- and middle-tone hearing losses, using a selected stimulus-envelope time. Our results showed that the stimulus-envelope time (the rise-fall time and plateau time groups) affected the Pa component of the MLR (quality was determined by the χ²-test and amplitude by the F-test). The 4-2-4 tone-pips produced good Pa quality by visual inspection. However, our data revealed no statistically significant Na-Pa amplitude differences between the two subgroups studied when comparing the 2- and 4-ms rise-fall times and the 0- and 2-ms plateau times. In contrast, Na-Pa became significantly smaller from the 4-ms to the 6-ms rise-fall time and from the 2-ms to the 4-ms plateau time (paired t-test). This result allowed us to select the 2- or 4-ms rise-fall time and the 0- or 2-ms plateau time without influencing amplitude. Analysis of the stimulus spectral characteristics demonstrated that a rise-fall time of at least 2 ms could prevent spectral splatter and indicated that a stimulus with a 5-ms rise-fall time had greater frequency-specificity than a stimulus with a 2-ms rise-fall time. When considering the synchronous discharge and frequency-specificity of the MLR, our findings show that a rise-fall time of four periods with a plateau of two periods is an acceptable compromise for estimating the objective MLR threshold. (ABSTRACT TRUNCATED AT 250 WORDS)

  14. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    DOE PAGES

    Burr, Tom; Hamada, Michael S.; Howell, John; ...

    2013-01-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
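    For a single well-behaved residual stream, alarm threshold estimation for a target false alarm rate can be sketched as follows (Python; the normal-assumption branch and the empirical-quantile branch illustrate two of the data-generating assumptions at issue, and the function name and target rate are illustrative):

```python
import math
import random
import statistics

def alarm_threshold(residuals, far=0.001, assume_normal=True):
    """Alarm threshold for a target false alarm rate (FAR).
    Under a normality assumption the threshold is mean + z * sd,
    with z found by bisection on the normal CDF; otherwise an
    empirical quantile is used, which needs far more data when
    the target FAR is small."""
    if assume_normal:
        mu = statistics.fmean(residuals)
        sd = statistics.stdev(residuals)
        lo, hi = 0.0, 10.0                 # bracket z solving Phi(z) = 1 - far
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < 1.0 - far:
                lo = mid
            else:
                hi = mid
        return mu + 0.5 * (lo + hi) * sd
    xs = sorted(residuals)
    k = min(len(xs) - 1, math.ceil((1.0 - far) * len(xs)) - 1)
    return xs[k]

random.seed(1)
res = [random.gauss(0.0, 1.0) for _ in range(5000)]   # toy PM residuals
t = alarm_threshold(res, far=0.001)                   # roughly mean + 3.09 sd
```

    When the residuals are actually a mixture of distributions, as in the solution-monitoring example, the normal-assumption threshold can badly miss the target false alarm rate, which is exactly why the choice of data-generating assumption matters.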

  15. The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease.

    PubMed

    Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly

    2016-12-01

    To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation could be derived to estimate the ventilatory threshold from the self-perceived dyspnoea threshold. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. Reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for HR at the ventilatory threshold (measured during a cardiopulmonary exercise test) estimated from HR at the self-perceived dyspnoea threshold, age, and forced expiratory volume in 1 s. Overall, 80% of patients could identify the self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducible when expressed in HR (coefficient of variation=2.8%). The stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age, and forced expiratory volume in 1 s (adjusted r=0.79, r²=0.63, relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. The HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.

  16. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method, the Generalized Linear Model (GLM), and the Nonparametric Bootstrap, all frequentist methods, as well as Markov Chain Monte Carlo posterior estimation, a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian posterior estimate is the most versatile technique examined because it allows the inclusion of prior information.
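    Of the techniques listed, the Nonparametric Bootstrap is the simplest to sketch (Python; the subject thresholds below are made-up dB values, and the percentile interval is one of several bootstrap variants):

```python
import random
import statistics

def bootstrap_ci(thresholds, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a group-average
    threshold: resample subjects with replacement, recompute the
    mean, and take the alpha/2 and 1 - alpha/2 percentiles."""
    rng = random.Random(seed)
    n = len(thresholds)
    means = sorted(
        statistics.fmean(rng.choices(thresholds, k=n)) for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1.0 - alpha / 2) * n_boot) - 1]
    return lo, hi

# individual thresholds from one group test (made-up dB values)
subject_thresholds = [52.1, 48.7, 55.3, 50.2, 49.8, 53.6]
lo, hi = bootstrap_ci(subject_thresholds)
```

    Resampling at the subject level, as here, captures between-subject variability; resampling individual trials instead would answer a different question.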

  17. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimating the orientation of objects that lack axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the viewpoints placed according to the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error less than 6°) in all case studies. Real-time performance of the algorithm was also demonstrated.

  18. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when using the wavelet transform to analyze signals and extract signal features. Because the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study proposes using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, taking both efficiency and accuracy into account. Building on the noise variance estimate, a novel improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well on the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, while favorably maintaining the dynamic characteristics of the signals.
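    A common way to combine the advantages of hard and soft thresholding is an exponential bridge between them; the sketch below (Python) shows one such function, not the specific function proposed in the paper:

```python
import math

def improved_threshold(w, t, alpha=2.0):
    """One common 'improved' wavelet threshold function: continuous
    at the threshold t like soft thresholding, but the shrinkage
    decays exponentially, so large coefficients are nearly unbiased,
    as in hard thresholding. alpha tunes how quickly the function
    approaches hard thresholding."""
    if abs(w) <= t:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    return sign * (abs(w) - t * math.exp(-alpha * (abs(w) - t)))
```

    At |w| = t the output is 0 (no discontinuity, unlike hard thresholding); for |w| much larger than t the output approaches w itself, avoiding the constant bias t that soft thresholding introduces.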

  19. Optimal threshold estimation for binary classifiers using game theory.

    PubMed

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1−FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and in classification costs.
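    The proposed operating point, the intersection of the ROC curve with the descending diagonal (TPR = 1 − FPR, i.e. sensitivity equals specificity), can be found with a simple scan over candidate thresholds (Python; names and scores are illustrative):

```python
def minimax_threshold(scores_pos, scores_neg):
    """Pick the score cutoff where the ROC curve crosses the
    descending diagonal, i.e. where sensitivity (TPR) is closest
    to specificity (1 - FPR)."""
    best_t, best_diff = None, float("inf")
    for t in sorted(set(scores_pos) | set(scores_neg)):
        tpr = sum(1 for s in scores_pos if s >= t) / len(scores_pos)
        fpr = sum(1 for s in scores_neg if s >= t) / len(scores_neg)
        diff = abs(tpr - (1.0 - fpr))   # distance from TPR = 1 - FPR
        if diff < best_diff:
            best_diff, best_t = diff, t
    return best_t

scores_pos = [0.6, 0.7, 0.8, 0.9]   # classifier scores of true positives
scores_neg = [0.1, 0.2, 0.3, 0.4]   # classifier scores of true negatives
t_star = minimax_threshold(scores_pos, scores_neg)
```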

  20. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
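    The sensitivity-threshold concept, read off a fixed set of measurements rather than estimated adaptively as in the qYN/qFC methods, can be sketched as follows (Python; the bisection probit, the interpolation, and the example numbers are all illustrative):

```python
import math

def probit(p):
    """Inverse standard-normal CDF by bisection (adequate for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d_prime(hit_rate, fa_rate):
    """SDT sensitivity: d' = z(hit rate) - z(false alarm rate)."""
    return probit(hit_rate) - probit(fa_rate)

def sensitivity_threshold(intensities, hit_rates, fa_rate, target=1.0):
    """Intensity at which d' reaches the target level (d' = 1), by
    linear interpolation between the bracketing measurements."""
    ds = [d_prime(h, fa_rate) for h in hit_rates]
    for (x0, d0), (x1, d1) in zip(zip(intensities, ds),
                                  zip(intensities[1:], ds[1:])):
        if d0 <= target <= d1:
            return x0 + (target - d0) * (x1 - x0) / (d1 - d0)
    return None

# Yes-No data: hit rates at four intensities, one false alarm rate
intensities = [1.0, 2.0, 3.0, 4.0]
hit_rates = [0.3, 0.5, 0.7, 0.9]
th = sensitivity_threshold(intensities, hit_rates, fa_rate=0.2)
```

    Because d' subtracts the false-alarm z-score, the resulting threshold is independent of the observer's response criterion, which is the decision confound the qYN and qFC methods are designed to remove.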

  1. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for calculating Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
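    A minimal sketch of the stage-wise scheme, with brute force over a candidate list standing in for each single-objective optimizer (Python; the tightening rule and the example objectives are illustrative, not the paper's formulation):

```python
def pareto_points(f1, f2, candidates, stages=5):
    """Stage-wise scheme: minimize f1 subject to a threshold-of-
    acceptability constraint on f2, then tighten the threshold to
    just below the f2 value achieved, and repeat. Each stage's
    solution is a candidate Pareto-optimal point."""
    t = max(f2(x) for x in candidates)        # initially non-binding
    points = []
    for _ in range(stages):
        feasible = [x for x in candidates if f2(x) <= t]
        if not feasible:
            break
        x_star = min(feasible, key=f1)        # single-objective solve
        points.append(x_star)
        t = f2(x_star) - 1e-9                 # demand strictly better f2
    return points

f1 = lambda x: x * x                          # objective 1
f2 = lambda x: (x - 2.0) ** 2                 # objective 2 (conflicts on [0, 2])
grid = [i * 0.1 for i in range(31)]           # candidate solutions 0.0 .. 3.0
pts = pareto_points(f1, f2, grid)             # walks along the Pareto front
```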

  2. Comparison of a field-based test to estimate functional threshold power and power output at lactate threshold.

    PubMed

    Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C

    2012-02-01

    It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in power output (PO) equivalent to PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using 2 methods: LT(Δ1), a rise in blood lactate of 1 mmol·L(-1) or greater in response to an increase in workload, and LT(4.0), a blood lactate concentration of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used, but is not equivalent for all methods of LT determination, including LT(Δ1).

  3. A new function for estimating local rainfall thresholds for landslide triggering

    NASA Astrophysics Data System (ADS)

    Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.

    2009-04-01

The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most up-to-date global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, antecedent precipitation must be included in the definition of thresholds to ensure optimum performance, especially when thresholds are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner, first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine's-function threshold only when that reference value is exceeded. Other authors have calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously. The expression in parentheses is
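Both threshold forms reduce to a one-line evaluation. The sketch below (Python) uses the commonly quoted global parameters of Caine's curve (α ≈ 14.82, β ≈ −0.39) as defaults; the coefficients of the generalized form are placeholders that would have to be calibrated to local data, not values from this study.

```python
def caine_threshold(duration_h, alpha=14.82, beta=-0.39):
    """Caine's (1980) intensity-duration threshold I = alpha * D**beta.
    Returns the minimum mean rainfall intensity (mm/h) associated with
    landslide triggering for a storm lasting `duration_h` hours; the
    defaults are the commonly quoted global parameter estimates."""
    return alpha * duration_h ** beta


def generalized_threshold(duration_h, antecedent_mm, a1, a2, beta):
    """Generalized form I = (a1 * An**a2) * D**beta from the abstract,
    where An is an antecedent-precipitation measure; a1, a2 and beta
    are hypothetical parameters to be fitted to local observations."""
    return (a1 * antecedent_mm ** a2) * duration_h ** beta


def exceeds_threshold(intensity_mm_h, duration_h):
    """True if an observed storm plots above Caine's threshold curve."""
    return intensity_mm_h >= caine_threshold(duration_h)


# A 10 h storm at 5 mm/h sits below Caine's global curve (about 6 mm/h):
print(exceeds_threshold(5.0, 10.0))
```

In an early-warning setting the same exceedance check would simply swap in `generalized_threshold` once its parameters have been estimated from the local rainfall record.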

  4. Lowering thresholds for speed limit enforcement impairs peripheral object detection and increases driver subjective workload.

    PubMed

    Bowden, Vanessa K; Loft, Shayne; Tatasciore, Monica; Visser, Troy A W

    2017-01-01

Speed enforcement reduces incidences of speeding, thus reducing traffic accidents. Accordingly, it has been argued that stricter speed enforcement thresholds could further improve road safety. Effective speed monitoring, however, requires driver attention and effort, and human information-processing capacity is limited. Emphasizing speed monitoring may therefore reduce resource availability for other aspects of safe vehicle operation. We investigated whether lowering enforcement thresholds in a simulator setting would introduce further competition for limited cognitive and visual resources. Eighty-four young adult participants drove under conditions where they could be fined for travelling 1, 6, or 11 km/h over a 50 km/h speed limit. Stricter speed enforcement led to greater subjective workload and significant decrements in peripheral object detection. These data indicate that the benefits of reduced speeding with stricter enforcement may be at least partially offset by greater mental demands on drivers, reducing their responses to safety-critical stimuli on the road. It is likely these results underestimate the impact of stricter speed enforcement on real-world drivers, who experience significantly greater pressure to drive at or above the speed limit. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting, when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
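The empirical, distribution-free variant of this idea can be sketched in a few lines: scan candidate cutoffs drawn from the pooled sample and keep the one minimizing the cost-weighted misclassification rates. This is an illustrative sketch (Python with NumPy), not the authors' estimator, and the cost weights and simulated marker distributions are invented for the example.

```python
import numpy as np

def optimal_threshold(healthy, diseased, cost_fp=1.0, cost_fn=1.0):
    """Empirical (distribution-free) threshold estimate: scan candidate
    cutoffs and pick the one minimizing the expected decision cost.
    Subjects with marker values above the cutoff are called diseased;
    cost_fp and cost_fn weight false positives and false negatives."""
    healthy = np.asarray(healthy)
    diseased = np.asarray(diseased)
    candidates = np.unique(np.concatenate([healthy, diseased]))
    best_c, best_cost = None, np.inf
    for cand in candidates:
        fp_rate = np.mean(healthy > cand)    # healthy called diseased
        fn_rate = np.mean(diseased <= cand)  # diseased missed
        cost = cost_fp * fp_rate + cost_fn * fn_rate
        if cost < best_cost:
            best_c, best_cost = cand, cost
    return best_c, best_cost

# Simulated normal markers: non-diseased ~ N(0, 1), diseased ~ N(2, 1)
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 500)
diseased = rng.normal(2.0, 1.0, 500)
c, cost = optimal_threshold(healthy, diseased)
print(c, cost)  # with equal costs the cutoff falls between the two means
```

Bootstrap standard errors for the estimate would follow by resampling both groups and re-running `optimal_threshold` on each replicate.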

  6. A novel approach to estimation of the time to biomarker threshold: applications to HIV.

    PubMed

    Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc

    2016-11-01

In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
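A drastically simplified sketch of the idea can be written down directly (Python). This is not the authors' estimator, which incorporates fixed effects, random effects, residual variance and the two-consecutive-measurement persistence rule; the sketch just solves for the crossing time of a patient-specific linear trajectory.

```python
def time_to_threshold(intercept, slope, threshold):
    """Simplified sketch: for a patient-specific linear trajectory
    CD4(t) = intercept + slope * t (slope < 0 for declining counts),
    return the time at which the trajectory first reaches `threshold`.
    The published method additionally accounts for measurement error
    and the persistence criterion (two consecutive sub-threshold
    counts); this deterministic version ignores both."""
    if slope >= 0:
        return float("inf")  # a non-declining trajectory never crosses
    return (threshold - intercept) / slope

# Patient starting at 600 cells/mm^3, losing 50 cells/mm^3 per year:
print(time_to_threshold(600.0, -50.0, 350.0))  # -> 5.0 years
```

In a mixed-model setting the intercept and slope would be the sum of the estimated fixed effects and the patient's predicted random effects, which is why very slow decliners (slope near zero) yield unstable estimated times.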

  7. Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold.

    PubMed

    Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark

    2015-02-01

    Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability

  8. Influence of drug load on dissolution behavior of tablets containing a poorly water-soluble drug: estimation of the percolation threshold.

    PubMed

    Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó

    2017-08-01

Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to its dissolution behavior from immediate-release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model-dependent and a model-independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granule and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v was determined for both API particle sizes, above which an abrupt decrease in the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on the dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps such as roll compaction and tableting. Both methods applied for estimating the percolation threshold provided comparable values.

  9. Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST.

    PubMed

    Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger

    2018-05-10

Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data, which results in poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., [Formula: see text] 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent collection of biased data. This could have important implications in assessment practice by shortening experiments and improving estimates, especially in clinical settings.

  10. Estimation of Effect Thresholds for the Development of Water Quality Criteria

    EPA Science Inventory

    Biological and ecological effect thresholds can be used for determining safe levels of nontraditional stressors. The U.S. EPA Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (WQC) [36] uses a risk assessment approach to estimate effect thre...

  11. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by 2/π (-1.96 dB), compared with an ideal infinite-resolution converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy that can be obtained with receive data processed by a hard-limiter with unknown quantization level, using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.

  12. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  13. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition

  14. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. 
This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and

  15. Comparability of children's sedentary time estimates derived from wrist worn GENEActiv and hip worn ActiGraph accelerometer thresholds.

    PubMed

    Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J

    2018-03-28

To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data with ST estimated using the waist-mounted ActiGraph 100 counts·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data by applying the widely used 100 counts·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 counts·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds, and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 counts·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to the ActiGraph 100 counts·min⁻¹ threshold. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to those from the ActiGraph 100 counts·min⁻¹ threshold. The newly generated and empirical GENEActiv wrist thresholds do not provide estimates of ST equivalent to the ActiGraph 100 counts·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  16. Reverse lactate threshold: a novel single-session approach to reliable high-resolution estimation of the anaerobic threshold.

    PubMed

    Dotan, Raffy

    2012-06-01

The multisession maximal lactate steady-state (MLSS) test is the gold standard for anaerobic threshold (AnT) estimation. However, it is highly impractical, requires a high fitness level, and suffers from additional shortcomings. Existing single-session AnT-estimating tests are of compromised validity, reliability, and resolution. The presented reverse lactate threshold test (RLT) is a single-session, AnT-estimating test aimed at avoiding the pitfalls of existing tests. It is based on the novel concept of identifying blood lactate's maximal appearance-disappearance equilibrium by approaching the AnT from higher, rather than lower, exercise intensities. Rowing, cycling, and running case data (4 recreational and competitive athletes, male and female, aged 17-39 y) are presented. Subjects performed the RLT test and, in a separate session, a single 30-min MLSS-type verification test at the RLT-determined intensity. The RLT and its MLSS verification exhibited exceptional agreement, at 0.5% discrepancy or better. The RLT's training sensitivity was demonstrated by a case of a 2.5-mo training regimen, following which the RLT's 15-W improvement was fully MLSS-verified. The RLT's test-retest reliability was examined in 10 trained and untrained subjects. Test 2 differed from test 1 by only 0.3%, with an intraclass correlation of 0.997. The data suggest that the RLT accurately and reliably estimates AnT (as represented by MLSS verification) with high resolution, in distinctly different sports, and that it is sensitive to training adaptations. Compared with MLSS, the single-session RLT is highly practical, and its lower fitness requirements make it applicable to athletes and untrained individuals alike. Further research is needed to establish RLT's validity and accuracy in larger samples.

  17. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
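The empirical side of this optimization is easy to reproduce in simulation. The sketch below (Python with NumPy; the distribution parameters are invented for illustration, not GATE values) finds the threshold whose exceedance coverage correlates best with the area-average rain rate across synthetic lognormal snapshots.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each "snapshot" is a field of lognormally distributed rain rates (mm/h);
# the log-mean varies from snapshot to snapshot to mimic sampling
# variation of a parent rain-rate distribution.
n_snapshots, n_pixels = 200, 1000
snapshots = [rng.lognormal(mean=rng.normal(0.0, 0.3), sigma=1.0, size=n_pixels)
             for _ in range(n_snapshots)]
area_means = np.array([s.mean() for s in snapshots])

def coverage_correlation(tau):
    """Correlation between the area-average rain rate and the fractional
    area with rain rate exceeding the threshold tau."""
    coverage = np.array([np.mean(s > tau) for s in snapshots])
    return np.corrcoef(area_means, coverage)[0, 1]

taus = np.linspace(0.5, 20.0, 40)
corrs = [coverage_correlation(tau) for tau in taus]
best = float(taus[int(np.argmax(corrs))])
print(best)  # threshold maximizing correlation with the first moment
```

Optimizing for the second moment would only change `area_means` to the mean of squared rain rates, which is why the optimal threshold shifts upward for higher moments.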

  18. Empirical estimation of genome-wide significance thresholds based on the 1000 Genomes Project data set.

    PubMed

    Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori

    2016-10-01

To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P = 5.0 × 10⁻⁸, the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were P_sig = 3.24 × 10⁻⁸ (AFR), 9.26 × 10⁻⁸ (EUR), 1.83 × 10⁻⁷ (AMR), 1.61 × 10⁻⁷ (EAS) and 9.46 × 10⁻⁸ (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded P_sig = 3.25 × 10⁻⁸ (ALL) and 4.20 × 10⁻⁸ (ΔAFR). Our results indicate that the current threshold (P = 5.0 × 10⁻⁸) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples.
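The simulation logic can be illustrated with a toy version that ignores linkage disequilibrium (the study used 1000 Genomes haplotypes, which encode realistic correlation between variants and therefore yield less stringent thresholds). Under the null each variant's p-value is uniform, so the family-wise threshold is the lower 5% quantile of the scan-wise minimum p-value:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_gw_threshold(n_sims=3000, n_tests=10_000, alpha=0.05):
    """Empirical family-wise significance threshold by simulation.
    For each simulated null scan of n_tests independent variants,
    keep the minimum p-value; the alpha-quantile of those minima is
    the genome-wide threshold. Independence is assumed here, so this
    toy version is more conservative than an LD-aware simulation."""
    min_p = np.array([rng.uniform(size=n_tests).min()
                      for _ in range(n_sims)])
    return float(np.quantile(min_p, alpha))

t = empirical_gw_threshold()
print(t)  # close to the Sidak value 1 - (1 - 0.05)**(1 / 10000)
```

With correlated variants the effective number of tests drops below `n_tests`, which is the mechanism behind the population-specific thresholds reported above.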

  19. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.

  20. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimation of these parameters is essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and greater height threshold were required to obtain accurate corn LAI estimates when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  1. Pose estimation of industrial objects towards robot operation

    NASA Astrophysics Data System (ADS)

    Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu

    2017-10-01

With the advantages of wide range, non-contact operation and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance and other engineering practice. However, due to the influence of complicated industrial environments, outside interference factors, lack of object characteristics, camera restrictions and other limitations, visual estimation of target pose still faces many challenges. Focusing on the above problems, a pose estimation method for industrial objects is developed based on 3D models of targets. By matching the extracted shape characteristics of objects with a priori 3D model database of targets, the method realizes recognition of the target. The pose of an object can then be determined based on the monocular vision measuring model. The experimental results show that this method can be used to estimate the position of rigid objects from poor image information, and it provides a guiding basis for the operation of industrial robots.

  2. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the level of sound that is just audible to a listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was then used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.

  3. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, treating the dissipation rate as an unknown parameter. We show that while control improves the precision of parameter estimation, it usually introduces significant deformation of the system state. We therefore propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information to improve the parameter estimation precision, and (2) minimizing the deformation of the system state to maintain its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.

  4. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a challenging benchmark.
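    The dynamics underlying the benchmark can be sketched with a commonly used form of the lake phosphorus balance: anthropogenic loading plus stochastic natural inflow plus a sigmoidal recycling term X^q/(1+X^q), minus natural removal bX. All parameter values below are illustrative assumptions, not the study's calibration.

```python
import math
import random

random.seed(1)

b, q = 0.42, 2.0            # natural removal rate, recycling steepness
a = 0.01                    # constant anthropogenic loading (the policy lever)
X = 0.0                     # lake phosphorus concentration
eutrophic = False

for t in range(100):
    # Stochastic natural inflow: lognormal with mean ~0.01 (illustrative),
    # abstracting non-point source pollution.
    inflow = random.lognormvariate(math.log(0.01) - 0.005, 0.1)
    X = X + a + inflow + X ** q / (1 + X ** q) - b * X
    if X > 0.5:             # illustrative eutrophication threshold
        eutrophic = True

print(eutrophic, round(X, 3))
```

With this low loading the state settles near a low (oligotrophic) equilibrium; raising `a` past the unstable equilibrium tips the lake irreversibly, which is the nonlinear threshold the benchmark confronts optimizers with.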

  5. Estimation of object motion parameters from noisy images.

    PubMed

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
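    A simplified linear analogue of the recursive estimator can be sketched as a Kalman filter tracking one-dimensional position and velocity from noisy image coordinates. The paper's central-projection model is nonlinear and uses an iterated extended Kalman filter; this sketch keeps only the predict/update skeleton, and all values are illustrative.

```python
import random

random.seed(2)

dt, q, r = 1.0, 1e-4, 0.05     # time step, process noise, measurement noise var
x = [0.0, 0.0]                 # state estimate: [position, velocity]
P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance

truth_pos, truth_vel = 0.0, 0.1
for _ in range(50):
    truth_pos += truth_vel * dt
    z = truth_pos + random.gauss(0.0, r ** 0.5)   # noisy image coordinate

    # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]].
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]

    # Update with H = [1, 0] (only position is observed).
    S = P[0][0] + r
    K = [P[0][0] / S, P[1][0] / S]
    innov = z - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

print(round(x[0], 2), round(x[1], 2))
```

As in the paper, lengthening the measurement sequence smooths the estimate: the unobserved velocity is recovered from the correlations the filter accumulates across frames.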

  6. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  7. Perceived object stability depends on multisensory estimates of gravity.

    PubMed

    Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H

    2011-04-27

    How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

  8. Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes.

    PubMed

    Sales, Marcelo M; Campbell, Carmen Sílvia G; Morais, Pâmella K; Ernesto, Carlos; Soares-Caldeira, Lúcio F; Russo, Paulo; Motta, Daisy F; Moreira, Sérgio R; Nakamura, Fábio Y; Simões, Herbert G

    2011-01-12

    While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) in individuals with type 2 diabetes (T2D) has not yet been investigated. The aim was to analyze the possibility of identifying the heart rate variability threshold (HRVT) from the responses of parasympathetic indicators during an incremental exercise test in subjects with type 2 diabetes (T2D) and non-diabetic individuals (ND). Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg·m-2) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg·m-2) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. The HRVT was identified from the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1), considering the last 60 s of each incremental stage (HRVT-RMSSD and HRVT-SD1, respectively). No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. The HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid for estimating the LT and VT in both T2D and ND.
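    The two autonomic indicators can be computed directly from a window of R-R intervals. A minimal sketch with illustrative data; SD1 is obtained here through its analytic relation to the successive differences rather than from a fitted Poincaré ellipse.

```python
import math

# R-R intervals (ms) from the last 60 s of a stage; values are illustrative.
rr = [812, 805, 798, 790, 801, 795, 788, 780, 785, 778]

diffs = [b - a for a, b in zip(rr, rr[1:])]

# RMSSD: root mean square of successive R-R differences.
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))

# SD1: short-axis dispersion of the Poincare plot; analytically
# SD1^2 = var(successive differences) / 2, i.e. SD1 ~ RMSSD / sqrt(2)
# (exact when the mean successive difference is treated as zero).
sd1 = rmssd / math.sqrt(2)

print(round(rmssd, 2), round(sd1, 2))
```

The HRVT is then located as the stage at which these indices stop falling and plateau as workload increases, reflecting parasympathetic withdrawal.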

  9. Fitting psychometric functions using a fixed-slope parameter: an advanced alternative for estimating odor thresholds with data generated by ASTM E679.

    PubMed

    Peng, Mei; Jaeger, Sara R; Hautus, Michael J

    2014-03-01

    Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
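    The fixed-slope idea can be sketched as follows: fit slope and intercept to the pooled group data, then refit each judge varying only the intercept. A crude grid search stands in for a proper maximum-likelihood fit, and the logistic form and response counts below are illustrative assumptions.

```python
import math

def logistic(x, slope, intercept):
    return 1.0 / (1.0 + math.exp(-(slope * x + intercept)))

def neg_log_lik(data, slope, intercept):
    # Binomial log-likelihood over (stimulus level, n trials, k correct) rows.
    eps = 1e-9
    return -sum(k * math.log(logistic(x, slope, intercept) + eps)
                + (n - k) * math.log(1 - logistic(x, slope, intercept) + eps)
                for x, n, k in data)

# Few replications per level, as is typical of ASTM E679 olfactory data.
judges = [
    [(-2, 3, 0), (-1, 3, 1), (0, 3, 2), (1, 3, 3), (2, 3, 3)],
    [(-2, 3, 0), (-1, 3, 0), (0, 3, 1), (1, 3, 2), (2, 3, 3)],
]
group = [row for j in judges for row in j]

# Group fit: crude grid over slope and intercept.
slopes = [0.25 * i for i in range(1, 21)]
intercepts = [0.1 * i for i in range(-40, 41)]
g_slope, g_int = min(((s, c) for s in slopes for c in intercepts),
                     key=lambda p: neg_log_lik(group, *p))

# Per-judge fit: slope fixed at the group value, only the intercept varies.
judge_ints = [min(intercepts, key=lambda c: neg_log_lik(j, g_slope, c))
              for j in judges]

# Individual thresholds: the 50% point, where slope*x + intercept = 0.
thresholds = [-c / g_slope for c in judge_ints]
print(round(g_slope, 2), [round(t, 2) for t in thresholds])
```

Because the shape is shared, each judge's sparse data only has to pin down a horizontal shift, which is why the fit remains stable with fewer than four replications.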

  10. Application of a Threshold Method to the TRMM Radar for the Estimation of Space-Time Rain Rate Statistics

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    1997-01-01

    One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates, followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so that the negative bias of the high-resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
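    The fractional-area idea can be sketched as follows: calibrate the proportionality constant S(tau) between area-average rain rate and the fraction of the area exceeding the threshold on one field, then apply it to another. The synthetic rain fields below are illustrative assumptions.

```python
import random

random.seed(3)

TAU = 5.0  # mm/h, near the threshold the text cites as well correlated

def rain_field(n_wet, n_total=1000):
    # Mostly dry pixels plus lognormal-ish convective cells (illustrative).
    wet = [random.lognormvariate(1.0, 1.0) for _ in range(n_wet)]
    return [0.0] * (n_total - n_wet) + wet

def frac_above(field, tau=TAU):
    return sum(r > tau for r in field) / len(field)

# Calibration step (in practice: against gauges or unattenuated radar).
calib = rain_field(200)
s_tau = (sum(calib) / len(calib)) / frac_above(calib)

# Application step: estimate the area average from the fractional area alone.
target = rain_field(300)
estimated = s_tau * frac_above(target)
actual = sum(target) / len(target)
print(round(estimated, 2), round(actual, 2))
```

The estimate tracks the true area average because both scale with the number of raining cells; attenuation bias, as the abstract notes, would corrupt the fraction itself at 13.8 GHz.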

  11. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives

  12. A decision model to estimate a risk threshold for venous thromboembolism prophylaxis in hospitalized medical patients.

    PubMed

    Le, P; Martinez, K A; Pappas, M A; Rothberg, M B

    2017-06-01

    Essentials Low risk patients don't require venous thromboembolism (VTE) prophylaxis; low risk is unquantified. We used a Markov model to estimate the risk threshold for VTE prophylaxis in medical inpatients. Prophylaxis was cost-effective for an average medical patient with a VTE risk of ≥ 1.0%. VTE prophylaxis can be personalized based on patient risk and age/life expectancy. Background Venous thromboembolism (VTE) is a common preventable condition in medical inpatients. Thromboprophylaxis is recommended for inpatients who are not at low risk of VTE, but no specific risk threshold for prophylaxis has been defined. Objective To determine a threshold for prophylaxis based on risk of VTE. Patients/Methods We constructed a decision model with a decision tree following patients for 3 months after hospitalization, and a lifetime Markov model with 3-month cycles. The model tracked symptomatic deep vein thromboses and pulmonary emboli, bleeding events and heparin-induced thrombocytopenia. Long-term complications included recurrent VTE, post-thrombotic syndrome and pulmonary hypertension. For the base case, we considered medical inpatients aged 66 years, having a life expectancy of 13.5 years, a VTE risk of 1.4% and a bleeding risk of 2.7%. Patients received enoxaparin 40 mg day-1 for prophylaxis. Results Assuming a willingness-to-pay (WTP) threshold of $100 000 per quality-adjusted life-year (QALY), prophylaxis was indicated for an average medical inpatient with a VTE risk of ≥ 1.0% up to 3 months after hospitalization. For the average patient, prophylaxis was not indicated when the bleeding risk was > 8.1%, the patient's age was > 73.4 years or the cost of enoxaparin exceeded $60/dose. If VTE risk was < 0.26% or bleeding risk was > 19%, the risks of prophylaxis outweighed the benefits. The prophylaxis threshold was relatively insensitive to low-molecular-weight heparin cost and bleeding risk, but very sensitive to patient age and life expectancy. Conclusions The decision to provide prophylaxis can be personalized based on patient risk and age/life expectancy.

  13. Comparison of DNA fragmentation and color thresholding for objective quantitation of apoptotic cells

    NASA Technical Reports Server (NTRS)

    Plymale, D. R.; Ng Tang, D. S.; Fermin, C. D.; Lewis, D. E.; Martin, D. S.; Garry, R. F.

    1995-01-01

    Apoptosis is a process of cell death characterized by distinctive morphological changes and fragmentation of cellular DNA. Using video imaging and color thresholding techniques, we objectively quantitated the number of cultured CD4+ T-lymphoblastoid cells (HUT78 cells, RH9 subclone) displaying morphological signs of apoptosis before and after exposure to gamma-irradiation. The numbers of apoptotic cells measured by objective video imaging techniques were compared to numbers of apoptotic cells measured in the same samples by sensitive apoptotic assays that quantitate DNA fragmentation. DNA fragmentation assays gave consistently higher values compared with the video imaging assays that measured morphological changes associated with apoptosis. These results suggest that substantial DNA fragmentation can precede or occur in the absence of the morphological changes which are associated with apoptosis in gamma-irradiated RH9 cells.

  14. Estimating economic thresholds for pest control: an alternative procedure.

    PubMed

    Ramirez, O A; Saunders, J L

    1999-04-01

    An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between gross income and pest control cost functions.
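    The reduced-form model can be sketched as a one-dimensional search: gross income and control cost are both written as functions of the economic threshold ET, and the profit-maximizing ET is the grid point maximizing their difference. The functional forms and coefficients below are illustrative, not the paper's estimates.

```python
price = 0.5             # $/kg of product (illustrative)

def gross_income(et):   # looser thresholds -> more pest damage -> lower yield
    return price * (8000 - 120 * et)        # $/ha

def control_cost(et):   # stricter thresholds -> more sprays -> higher cost
    return 900 / (1 + et)                   # $/ha

# Candidate economic thresholds (e.g. pests per plant).
ets = [0.5 * i for i in range(1, 21)]
profits = {et: gross_income(et) - control_cost(et) for et in ets}
best = max(profits, key=profits.get)
print(best, round(profits[best], 1))
```

With these forms the income loss is linear while the cost saving flattens out, so the maximum-difference point sits at an interior threshold rather than at either extreme.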

  15. An Objective Estimation of Air-Bone-Gap in Cochlear Implant Recipients with Residual Hearing Using Electrocochleography.

    PubMed

    Koka, Kanthaiah; Saoji, Aniket A; Attias, Joseph; Litvak, Leonid M

    2017-01-01

    Although cochlear implants (CIs) traditionally have been used to treat individuals with bilateral profound sensorineural hearing loss, a recent trend is to implant individuals with residual low-frequency hearing. Notably, many of these individuals demonstrate an air-bone gap (ABG) in low-frequency, pure-tone thresholds following implantation. An ABG is the difference between audiometric thresholds measured using air conduction (AC) and bone conduction (BC) stimulation. Although behavioral AC thresholds are straightforward to assess, BC thresholds can be difficult to measure in individuals with severe-to-profound hearing loss because of vibrotactile responses to high-level, low-frequency stimulation and the potential contribution of hearing in the contralateral ear. Because of these technical barriers to measuring behavioral BC thresholds in implanted patients with residual hearing, it would be helpful to have an objective method for determining the ABG. This study evaluated an innovative technique for measuring electrocochleographic (ECochG) responses, using the cochlear microphonic (CM) response to assess AC and BC thresholds in implanted patients with residual hearing. Results showed high correlations between CM thresholds and behavioral audiograms for the AC and BC conditions, demonstrating the feasibility of using ECochG as an objective tool for quantifying the ABG in CI recipients.

  16. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.

  17. Non-invasive indices for the estimation of the anaerobic threshold of oarsmen.

    PubMed

    Erdogan, A; Cetin, C; Karatosun, H; Baydar, M L

    2010-01-01

    This study compared four common non-invasive indices with an invasive index for determining the anaerobic threshold (AT) in 22 adult male rowers using a Concept2 rowing ergometer. A criterion-standard progressive incremental test (invasive method) measured blood lactate concentrations to determine the 4 mmol/l threshold (La4-AT) and the Dmax AT (Dm-AT). This was compared with three indices obtained by analysis of respiratory gases and one based on the heart rate (HR) deflection point (HRDP), all of which used the Conconi test (non-invasive methods). In the Conconi test, the HRDP was determined whilst continuously increasing the power output (PO) by 25 W/min and measuring respiratory gases and HR. The La4-AT and Dm-AT values differed slightly with respect to oxygen uptake, PO and HR; however, the AT values significantly correlated with each other and with the four non-invasive methods. In conclusion, the non-invasive indices were comparable with the invasive index and could, therefore, be used in the assessment of AT during rowing ergometer use. In this population of elite rowers, the Conconi threshold (Con-AT), based on the measurement of the HRDP, tended to be the most adequate way of estimating AT for training regulation purposes.
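    The Dmax determination referenced above is concrete enough to sketch: the AT is taken at the point on the lactate curve with the maximum perpendicular distance from the straight line joining its first and last points. The workload and lactate values below are illustrative.

```python
import math

power = [100, 125, 150, 175, 200, 225, 250, 275]       # W, incremental stages
lactate = [1.1, 1.2, 1.4, 1.8, 2.5, 3.6, 5.4, 8.0]     # mmol/l at each stage

x1, y1, x2, y2 = power[0], lactate[0], power[-1], lactate[-1]
norm = math.hypot(x2 - x1, y2 - y1)

def dist(x, y):
    # Perpendicular distance from (x, y) to the line through the end points.
    return abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / norm

dmax_idx = max(range(len(power)), key=lambda i: dist(power[i], lactate[i]))
print(power[dmax_idx], lactate[dmax_idx])
```

In published protocols the distance is usually taken to a polynomial fitted through the lactate points rather than to the raw samples; the raw-sample version above keeps the geometry visible.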

  18. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties of all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis, in the absence of a threshold, and is thus computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite-sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
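    The grid-search fit can be sketched for the mean expectile (tau = 0.5), where the asymmetric squared loss reduces to ordinary least squares; other expectiles would reweight squared residuals above and below the fit. The model is y = b0 + b1*x + b2*(x - t)+, continuous at the threshold t, with t scanned over a grid. Data and grid are illustrative.

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def sse_at(t, xs, ys):
    # Least-squares fit of y ~ 1 + x + (x - t)_+ via the normal equations.
    cols = [[1.0, x, max(x - t, 0.0)] for x in xs]
    A = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
    b = [sum(c[i] * y for c, y in zip(cols, ys)) for i in range(3)]
    beta = solve3(A, b)
    sse = sum((y - sum(bi * ci for bi, ci in zip(beta, c))) ** 2
              for c, y in zip(cols, ys))
    return sse, beta

# Data with a true kink at x = 4: slope 1 below, slope 3 above.
xs = [0.5 * i for i in range(17)]                      # 0.0 .. 8.0
ys = [2 + x + (2 * (x - 4) if x > 4 else 0) for x in xs]

grid = [0.5 * i for i in range(2, 15)]                 # candidate thresholds
best_t = min(grid, key=lambda t: sse_at(t, xs, ys)[0])
print(best_t)
```

The (x - t)+ basis keeps the fitted line continuous at the threshold, which is the "continuous way" the abstract specifies, unlike a free two-segment fit that could jump.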

  19. Approach for estimating the dynamic physical thresholds of phytoplankton production and biomass in the tropical-subtropical Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo

    2017-06-01

    A threshold is defined as the point at which small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contributions of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that the estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.

  20. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.

  1. Model for Estimating the Threshold Mechanical Stability of Structural Cartilage Grafts Used in Rhinoplasty

    PubMed Central

    Zemek, Allison; Garg, Rohit; Wong, Brian J. F.

    2014-01-01

    Objectives/Hypothesis Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design Descriptive, focus group survey. Methods Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.360 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) was asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions There was little inter- and intra-rater variation in the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022

  2. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  3. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to in vivo image cancer. Different quantitative paramters can be extracted from PET images and used as in vivo cancer biomarkers. Between PET biomarkers Metabolic Tumor Volume (MTV) has gained an important role in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different imaging processing methods have been developed to define MTV. The different proposed PET segmentation strategies were validated in ideal condition (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions doesn't fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, miming realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted into the combined use of standard anthropomorphic phantoms commercially and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
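The combination of a k-means background estimate with an adaptive threshold can be sketched as follows. This is a minimal illustration, not the calibrated algorithm from the record: the two-class 1-D k-means, the thresholding rule, and the 42% fraction are assumptions for demonstration only.

```python
import numpy as np

def estimate_background(intensities, iters=50):
    """Crude 1-D two-class k-means on voxel intensities: returns the mean of
    the lower cluster, used here as the background estimate."""
    lo, hi = float(intensities.min()), float(intensities.max())
    for _ in range(iters):
        mid = (lo + hi) / 2.0                      # nearest-centroid boundary
        lo_new = intensities[intensities <= mid].mean()
        hi_new = intensities[intensities > mid].mean()
        if lo_new == lo and hi_new == hi:          # converged
            break
        lo, hi = lo_new, hi_new
    return lo

def segment_mtv(pet, frac=0.42):
    """Threshold-based MTV mask: voxels at or above
    background + frac * (peak - background). The fraction is illustrative,
    not the value calibrated on the NEMA IQ Phantom."""
    bg = estimate_background(pet.ravel())
    thr = bg + frac * (pet.max() - bg)
    return pet >= thr
```

A real implementation would operate on 3-D voxel data and calibrate `frac` against phantoms of known volume.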

  4. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

    In recent years, Convolutional Neural Networks (CNNs) have been widely used in the computer vision field and have made great progress in tasks such as object detection and classification. Moreover, combining CNNs, that is, making multiple CNN frameworks work synchronously and share their output information, can yield useful information that none of them can provide singly. Here we introduce a method to estimate the speed of an object in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides the object size, object location, and object type, while FlowNet provides the optical flow of the whole image. On one hand, the object size and location help to select the object's part of the optical flow image, allowing the average optical flow of every object to be calculated. On the other hand, the object type and size help to establish the relationship between optical flow and true speed by means of optics theory and prior knowledge. With these two key pieces of information, the speed of the object can be estimated. This method manages to estimate multiple objects at real-time speed using only an ordinary camera, even in moving status, with an error that is acceptable in most application fields such as driverless vehicles or robot vision.
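The final conversion from pixel flow to physical speed can be sketched as below. This is a simplified reconstruction under a flat-scaling assumption (metres per pixel derived from a known class width); the function name and arguments are hypothetical, and the record's actual optics model may differ.

```python
import numpy as np

def object_speed(flow, box, fps, real_width_m):
    """Estimate an object's speed from dense optical flow.

    flow: (H, W, 2) array of per-pixel displacement in pixels/frame
    box: (x, y, w, h) detection box from the object detector
    fps: camera frame rate
    real_width_m: assumed physical width of the detected class (prior knowledge)
    """
    x, y, w, h = box
    region = flow[y:y + h, x:x + w]                # flow vectors inside the box
    mean_flow = np.linalg.norm(region.reshape(-1, 2), axis=1).mean()  # px/frame
    metres_per_pixel = real_width_m / w            # scale from the size prior
    return mean_flow * fps * metres_per_pixel      # m/s
```

Averaging the flow over the detection box suppresses per-pixel noise; a moving camera would additionally require subtracting the background flow.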

  5. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
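The idea of fitting a lognormal to a doubly truncated rain-rate sample can be sketched with a grid-search least-squares fit. This is a simplification of the record's nonlinear least squares procedure; the function, the parameter grids, and the histogram-based objective are assumptions for illustration.

```python
import numpy as np
from math import erf, log, sqrt, pi

def _phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def fit_lognormal_truncated(rates, lo, hi, mus, sigmas, nbins=20):
    """Fit (mu, sigma) of a lognormal rain-rate distribution from samples
    truncated to [lo, hi]: least squares between the empirical density of the
    truncated log-rates and the renormalized normal density, over a grid."""
    x = np.log(rates[(rates >= lo) & (rates <= hi)])
    hist, edges = np.histogram(x, bins=nbins, range=(log(lo), log(hi)), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best = None
    for mu in mus:
        for s in sigmas:
            pdf = np.exp(-0.5 * ((centers - mu) / s) ** 2) / (s * sqrt(2 * pi))
            # probability mass of the untruncated model inside [lo, hi]
            mass = _phi((log(hi) - mu) / s) - _phi((log(lo) - mu) / s)
            sse = float(np.sum((hist - pdf / mass) ** 2))
            if best is None or sse < best[0]:
                best = (sse, mu, s)
    return best[1], best[2]
```

Once (mu, sigma) are in hand, the area-average rain rate follows from the untruncated lognormal mean exp(mu + sigma^2 / 2).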

  6. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has a profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
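The iterative-shrinkage-thresholding core that PISTA and APISTA build on can be illustrated with plain ISTA for the lasso. This is not the paper's algorithm (no path-following stages, no nonconvex penalty, no coordinate descent subroutine); it only shows the gradient-step-plus-soft-threshold iteration they descend from.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, step=None, iters=500):
    """Plain ISTA for the lasso: min_b 0.5/n ||y - X b||^2 + lam ||b||_1."""
    n, p = X.shape
    if step is None:
        # 1 / Lipschitz constant of the smooth part's gradient
        step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b
```

Path-following methods run this inner loop along a decreasing sequence of `lam` values, warm-starting each stage from the previous solution.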

  7. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken to avoid computationally costly procedures, in order to ensure real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed allow it to be used also in critical situations, like search and rescue, defence and disaster monitoring.
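A background-estimate-and-reject pipeline with a noise-adaptive double threshold can be sketched as follows. This is a generic reconstruction, not the published algorithm: the running mean/variance background, the hysteresis growth, and all constants are assumptions.

```python
import numpy as np

def _dilate(mask):
    """4-connected binary dilation via shifted ORs (no scipy needed)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def detect_dim_targets(frames, alpha=0.05, k_hi=5.0, k_lo=2.5):
    """Per-pixel running mean/variance background with a double (hysteresis)
    threshold: detections are seeded where |residual| > k_hi * sigma and grown
    through 4-connected pixels above k_lo * sigma. The background is updated
    only where nothing is detected, so transiting objects do not bias it."""
    frames = np.asarray(frames, dtype=float)
    mean = frames[0].copy()
    var = np.full_like(mean, 1.0)          # assumed initial noise variance
    masks = []
    for f in frames[1:]:
        resid = np.abs(f - mean)
        sigma = np.sqrt(var)
        hi = resid > k_hi * sigma          # confident seeds
        lo = resid > k_lo * sigma          # permissive support
        det = hi.copy()
        while True:                        # hysteresis: grow seeds inside lo
            grown = _dilate(det) & lo
            if (grown == det).all():
                break
            det = grown
        upd = ~det                         # update background off-target only
        mean[upd] += alpha * (f[upd] - mean[upd])
        var[upd] += alpha * ((f[upd] - mean[upd]) ** 2 - var[upd])
        masks.append(det)
    return masks
```

Because both thresholds scale with the local noise estimate, the same constants work for hot and cold targets and across sensor noise levels.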

  8. Three Dimensional Constraint Effects on the Estimated (Delta)CTOD during the Numerical Simulation of Different Fatigue Threshold Testing Techniques

    NASA Technical Reports Server (NTRS)

    Seshadri, Banavara R.; Smith, Stephen W.

    2007-01-01

    Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations, and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (K(sub max)) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial K(sub max) values to study the effect on estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant K(sub max) load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting K(sub max) values and different load reduction rates have indicated ΔCTOD is

  9. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
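The hidden-process/observation-process setup can be made concrete by simulating a two-state continuous-time Markov chain and sampling it at discrete instants. This sketch covers only the data-generating side of the record's simulation experiment; the Bayesian estimation itself is omitted, and all rates and names are assumptions.

```python
import numpy as np

def simulate_two_state_ctmc(rate_in, rate_out, t_end, rng):
    """Alternating-renewal simulation of a two-state chain.
    rate_in: rate of entering violation (from compliant); rate_out: rate of
    leaving it. Returns transition times and the state after each transition
    (starting compliant = 0). Holding times are exponential."""
    t, state = 0.0, 0
    times, states = [0.0], [0]
    while t < t_end:
        rate = rate_in if state == 0 else rate_out
        t += rng.exponential(1.0 / rate)
        state = 1 - state
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

def observe(times, states, obs_times):
    """State at each observation instant: what a monitoring program with
    infrequent sampling actually sees of the hidden chain."""
    idx = np.searchsorted(times, obs_times, side="right") - 1
    return states[idx]
```

The long-run violation probability of this chain is rate_in / (rate_in + rate_out), and the empirical fraction of observations in violation estimates it without bias, consistent with the record's finding for sub-sampled series.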

  10. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search

    PubMed Central

    Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.

    2017-01-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532

  11. Improved Shear Wave Group Velocity Estimation Method Based on Spatiotemporal Peak and Thresholding Motion Search.

    PubMed

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W

    2017-04-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
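The contrast between conventional TTP and the thresholding variant can be sketched on a motion matrix (time × lateral position). This is an illustrative reconstruction of the two estimators' core ideas; the 80% amplitude fraction and the simple line fits are assumptions, not the paper's exact settings.

```python
import numpy as np

def ttp_velocity(motion, dx, dt):
    """Conventional time-to-peak: at each lateral position take the argmax
    over time, then regress position on peak time; the slope is the shear
    wave group velocity."""
    n_x = motion.shape[1]
    t_peak = motion.argmax(axis=0) * dt
    x = np.arange(n_x) * dx
    return np.polyfit(t_peak, x, 1)[0]

def stth_velocity(motion, dx, dt, frac=0.8):
    """Spatiotemporal thresholding sketch: keep every (time, position) sample
    whose amplitude exceeds frac * global max, and fit one line through all of
    them, so no single noisy peak dominates the estimate."""
    ti, xi = np.where(motion >= frac * motion.max())
    return np.polyfit(ti * dt, xi * dx, 1)[0]
```

TTP commits to one (possibly noise-corrupted) sample per position, while the thresholded fit pools many high-amplitude samples, which is where the precision gain in noisy data comes from.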

  12. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.

  13. The Impact of Clinical History on the Threshold Estimation of Auditory Brainstem Response Results for Infants

    ERIC Educational Resources Information Center

    Zaitoun, Maha; Cumming, Steven; Purcell, Alison; O'Brien, Katie

    2017-01-01

    Purpose: This study assesses the impact of patient clinical history on audiologists' performance when interpreting auditory brainstem response (ABR) results. Method: Fourteen audiologists' accuracy in estimating hearing threshold for 16 infants through interpretation of ABR traces was compared on 2 occasions at least 5 months apart. On the 1st…

  14. Effects of self-generated noise on estimates of detection threshold in quiet for school-age children and adults

    PubMed Central

    Buss, Emily; Porter, Heather L.; Leibold, Lori J.; Grose, John H.; Hall, Joseph W.

    2016-01-01

    Objectives Detection thresholds in quiet become adult-like earlier in childhood for high than low frequencies. When adults listen for sounds near threshold, they tend to engage in behaviors that reduce physiologic noise (e.g., quiet breathing), which is predominantly low frequency. Children may not suppress self-generated noise to the same extent as adults, such that low-frequency self-generated noise elevates thresholds in the associated frequency regions. This possibility was evaluated by measuring noise levels in the ear canal simultaneous with adaptive threshold estimation. Design Listeners were normal-hearing children (4.3-16.0 yrs) and adults. Detection thresholds were measured adaptively for 250-, 1000- and 4000-Hz pure tones using a three-alternative forced-choice procedure. Recordings of noise in the ear canal were made while the listeners performed this task, with the earphone and microphone routed through a single foam insert. Levels of self-generated noise were computed in octave-wide bands. Age effects were evaluated for four groups: 4- to 6-year-olds, 7- to 10-year-olds, 11- to 16-year-olds, and adults. Results Consistent with previous data, the effect of child age on thresholds was robust at 250 Hz and fell off at higher frequencies; thresholds of even the youngest listeners were similar to adults’ at 4000 Hz. Self-generated noise had a similar low-pass spectral shape for all age groups, although the magnitude of self-generated noise was higher in younger listeners. If self-generated noise impairs detection, then noise levels should be higher for trials associated with the wrong answer than the right answer. This association was observed for all listener groups at the 250-Hz signal frequency. For adults and older children, this association was limited to the noise band centered on the 250-Hz signal. For the two younger groups of children, this association was strongest at the signal frequency, but extended to bands spectrally remote from the 250-Hz

  15. Deactivating stimulation sites based on low-rate thresholds improves spectral ripple and speech reception thresholds in cochlear implant users.

    PubMed

    Zhou, Ning

    2017-03-01

    The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.

  16. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
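As an example of the non-Euclidean metrics compared in the record, the Log-Euclidean distance between symmetric positive-definite matrices (such as orientation/scatter tensors) can be computed with a plain eigendecomposition. This is a generic sketch of that one metric, not the paper's evaluation pipeline.

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition: V diag(log w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean metric between SPD matrices: the Frobenius norm of the
    difference of their matrix logarithms. Unlike the plain Euclidean
    distance, it respects the curved geometry of the SPD manifold."""
    return np.linalg.norm(spd_log(A) - spd_log(B), "fro")
```

Because the metric works in log-space, it penalizes relative (multiplicative) differences in the tensors, which is why it behaves better than Euclidean distance for orientation comparisons.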

  17. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers, such as disease-specific antibodies, which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate, where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine, which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results
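The profile-likelihood fit of the a:b model can be sketched directly: for each candidate threshold, the maximum-likelihood infection probabilities below and above it are just the group infection rates, so the profile reduces to a one-dimensional search. The function and grids are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fit_ab_model(titers, infected, candidates):
    """Profile likelihood for the a:b threshold model: below a candidate
    threshold, infections occur with probability a; above it, with probability
    b. For fixed threshold the MLEs of a and b are the group infection rates;
    the candidate maximizing the binomial log-likelihood is returned."""
    def loglik(p, k, n):
        if n == 0:
            return 0.0
        p = min(max(p, 1e-9), 1 - 1e-9)    # guard the log at 0 or 1
        return k * np.log(p) + (n - k) * np.log(1 - p)

    best = None
    for thr in candidates:
        lo = titers < thr
        k_lo, n_lo = infected[lo].sum(), lo.sum()
        k_hi, n_hi = infected[~lo].sum(), (~lo).sum()
        a = k_lo / n_lo if n_lo else 0.0
        b = k_hi / n_hi if n_hi else 0.0
        ll = loglik(a, k_lo, n_lo) + loglik(b, k_hi, n_hi)
        if best is None or ll > best[1]:
            best = (thr, ll, a, b)
    return best  # (threshold, log-likelihood, a, b)
```

Bootstrapping this fit over resampled subjects would give the confidence interval for the threshold described in the record.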

  18. Object tracking algorithm based on the color histogram probability distribution

    NASA Astrophysics Data System (ADS)

    Li, Ning; Lu, Tongwei; Zhang, Yanduo

    2018-04-01

    To address tracking failures caused by target occlusion and by distractor objects in the background that resemble the target, and to reduce the influence of light intensity, this paper uses the HSV and YCbCr color channels to correct the updated center of the target and continuously updates the image threshold for self-adaptive target detection. Clustering the initial obstacles roughly delimits their extent, which narrows the threshold range so that the target can be detected as fully as possible. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is added to achieve target state estimation under background color interference and to enhance the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.

  19. Temperature Thresholds and Thermal Requirements for the Development of the Rice Leaf Folder, Cnaphalocrocis medinalis

    PubMed Central

    Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.

    2013-01-01

    The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae) is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35° C) and to estimate the temperature thresholds and thermal constants needed for forecasting models based on heat accumulation units. The developmental periods of the different stages of the rice leaf folder decreased with increasing temperature from 18 to 34° C. Lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1° C, and thermal constants of 69, 270, 106, and 455 degree-days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic non-linear optimum SSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1 and 23.7° C, respectively. The upper and lower threshold temperatures were estimated as 36.4° C and 11.2° C for total development, indicating that the rate-controlling enzyme is half active and half inactive at these temperatures. These estimated thermal thresholds and degree-days could be used to predict leaf folder activity in the field for effective management. PMID:24205891
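The linear-regression step behind the lower threshold and thermal constant is standard degree-day arithmetic: development rate 1/D is regressed on temperature, the threshold is where the fitted rate hits zero, and the thermal constant is the reciprocal of the slope. The sketch below uses the record's reported constants for total development (T0 = 11.1° C, K = 455 degree-days) only to build a synthetic consistency check, not as new data.

```python
import numpy as np

def thermal_constants(temps_c, days):
    """Linear degree-day model: fit 1/D = b0 + b1 * T by least squares.
    Lower developmental threshold T0 = -b0 / b1 (rate extrapolates to zero);
    thermal constant K = 1 / b1, in degree-days."""
    rate = 1.0 / np.asarray(days, dtype=float)
    b1, b0 = np.polyfit(temps_c, rate, 1)
    return -b0 / b1, 1.0 / b1
```

With T0 and K in hand, forecasting reduces to accumulating max(T - T0, 0) over days until the sum reaches K.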

  20. Estimating background and threshold nitrate concentrations using probability graphs

    USGS Publications Warehouse

    Panno, S.V.; Kelly, W.R.; Martinsek, A.T.; Hackley, Keith C.

    2006-01-01

    Because of the ubiquitous nature of anthropogenic nitrate (NO3-) in many parts of the world, determining background concentrations of NO3- in shallow ground water from natural sources is probably impossible in most environments. Present-day background must now include diffuse sources of NO3- such as disruption of soils and oxidation of organic matter, and atmospheric inputs from products of combustion and evaporation of ammonia from fertilizer and livestock waste. Anomalies can be defined as NO3- derived from nitrogen (N) inputs to the environment from anthropogenic activities, including synthetic fertilizers, livestock waste, and septic effluent. Cumulative probability graphs were used to identify threshold concentrations separating background and anomalous NO3-N concentrations and to assist in the determination of sources of N contamination for 232 spring water samples and 200 well water samples from karst aquifers. Thresholds were 0.4, 2.5, and 6.7 mg/L for spring water samples, and 0.1, 2.1, and 17 mg/L for well water samples. The 0.4 and 0.1 mg/L values are assumed to represent thresholds for present-day precipitation. Thresholds at 2.5 and 2.1 mg/L are interpreted to represent present-day background concentrations of NO3-N. The population of spring water samples with concentrations between 2.5 and 6.7 mg/L represents an amalgam of all sources of NO3- in the ground water basins that feed each spring; concentrations >6.7 mg/L were typically samples collected soon after springtime application of synthetic fertilizer. The 17 mg/L threshold (adjusted to 15 mg/L) for well water samples is interpreted as the level above which livestock wastes dominate the N sources. Copyright © 2006 The Author(s).
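A cumulative probability graph locates thresholds as inflections where the plotted population changes slope; one automated way to find a single inflection is a two-segment least-squares fit on the probability plot. This is a sketch of that idea under stated assumptions (log-normal limbs, one breakpoint, midpoint plotting positions), not the authors' graphical procedure.

```python
import numpy as np
from statistics import NormalDist

def threshold_from_probability_graph(concs):
    """Plot log10(concentration) against the normal quantile of each sample's
    plotting position, then find the breakpoint whose two-segment straight-line
    fit has the smallest total squared error. The concentration at the
    breakpoint is returned as the threshold between populations."""
    x = np.sort(np.asarray(concs, dtype=float))
    n = len(x)
    pp = (np.arange(1, n + 1) - 0.5) / n                 # plotting positions
    q = np.array([NormalDist().inv_cdf(p) for p in pp])  # normal quantiles
    logx = np.log10(x)
    best = None
    for i in range(3, n - 3):                            # >= 3 points/segment
        sse = 0.0
        for qs, ys in ((q[:i], logx[:i]), (q[i:], logx[i:])):
            coef = np.polyfit(qs, ys, 1)
            sse += float(np.sum((ys - np.polyval(coef, qs)) ** 2))
        if best is None or sse < best[0]:
            best = (sse, x[i])
    return best[1]
```

Datasets with several thresholds, as in the record, would require repeating the split within each limb or fitting more segments.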

  1. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  2. Perceptual thresholds for non-ideal diffuse field reverberation.

    PubMed

    Romblom, David; Guastavino, Catherine; Depalle, Philippe

    2016-11-01

    The objective of this study is to understand listeners' sensitivity to directional variations in non-ideal diffuse field reverberation. An ABX discrimination test was conducted using a semi-spherical 28-loudspeaker array; perceptual thresholds were estimated by systematically varying the level of a segment of loudspeakers for lateral, height, and frontal conditions. The overall energy was held constant using a gain compensation scheme. When compared to an ideal diffuse field, the perceptual threshold for detection is -2.5 dB for the lateral condition, -6.8 dB for the height condition, and -3.2 dB for the frontal condition. Measurements of the experimental stimuli were analyzed using a Head and Torso Simulator as well as with opposing cardioid microphones aligned on the three Cartesian axes. Additionally, opposing cardioid measurements made in an acoustic space demonstrate that level differences corresponding to the perceptual thresholds can be found in practice. These results suggest that non-ideal diffuse field reverberation may be a previously unrecognized component of spatial impression.
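
    The gain-compensation scheme mentioned in this record (holding overall energy constant while a segment of loudspeakers is offset in level) can be sketched as follows. The segment size and the assumption of equal, incoherently summing power per loudspeaker are illustrative assumptions, not details from the paper.

```python
import math

def compensation_gain_db(n_total, n_segment, segment_gain_db):
    """Gain (dB) to apply to the remaining loudspeakers so that total
    radiated power is unchanged when a segment of n_segment speakers is
    offset by segment_gain_db (equal incoherent power per speaker assumed)."""
    g = 10 ** (segment_gain_db / 10)  # power ratio of the offset segment
    rest = (n_total - n_segment * g) / (n_total - n_segment)
    if rest <= 0:
        raise ValueError("segment boost exceeds the total power budget")
    return 10 * math.log10(rest)
```

    For example, attenuating a hypothetical 7-speaker segment of a 28-speaker array by 2.5 dB requires a small positive gain on the other 21 speakers to keep the total power constant.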

  3. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on the national scale, and then to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, a fractional polynomial (FP) was adopted to establish a fixed effect model and determine the best FP structure; restricted maximum likelihood (REML) was then adopted to estimate the between-study variance, and the best random effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water can explain only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature condition and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water in our country is determined to be 0.8 mg/L.

  4. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    USDA-ARS?s Scientific Manuscript database

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  5. Estimation of signal coherence threshold and concealed spectral lines applied to detection of turbofan engine combustion noise.

    PubMed

    Miles, Jeffrey Hilton

    2011-05-01

    Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet are reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics based coherence threshold. Examples are presented showing that the underlying tonal and coherent broad band structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise.
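
    The statistics-based coherence threshold this record compares against is commonly computed from the number of averaged records: for magnitude-squared coherence estimated from n independent segments, estimates below 1 - alpha^(1/(n-1)) are consistent with zero true coherence at significance alpha. A minimal sketch, with a Fisher z-transform helper of the kind the paper applies; the alpha and record counts below are illustrative.

```python
import math

def coherence_threshold(n_records, alpha=0.05):
    """Significance threshold for magnitude-squared coherence averaged
    over n_records independent records: values below this level are
    consistent with zero true coherence at level alpha."""
    return 1.0 - alpha ** (1.0 / (n_records - 1))

def fisher_z(gamma2):
    """Fisher z-transform of the coherence magnitude, which approximately
    normalizes the distribution of the coherence estimate."""
    g = math.sqrt(gamma2)
    return 0.5 * math.log((1 + g) / (1 - g))
```

    Note the threshold drops as more records are averaged, which is why the choice between independent and overlapped record counts (the issue the paper addresses) changes the threshold materially.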

  6. A geographic analysis of population density thresholds in the influenza pandemic of 1918-19.

    PubMed

    Chandra, Siddharth; Kassens-Noor, Eva; Kuljanin, Goran; Vertalka, Joshua

    2013-02-20

    Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918-19 in India, where over 15 million people died in the short span of less than one year. Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918-19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold.
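
    The sample-splitting idea in this record (searching candidate thresholds for the split that best separates two regimes) can be illustrated with a toy version. The paper's population-growth model is replaced here by a simple least-squares split on hypothetical district data.

```python
def split_threshold(density, loss):
    """Sample-splitting sketch: choose the density threshold minimizing
    the pooled squared error of population loss around each side's mean."""
    best_t, best_sse = None, float("inf")
    for t in sorted(set(density))[:-1]:  # candidate split points
        low = [l for d, l in zip(density, loss) if d <= t]
        high = [l for d, l in zip(density, loss) if d > t]
        sse = sum((l - sum(low) / len(low)) ** 2 for l in low) \
            + sum((l - sum(high) / len(high)) ** 2 for l in high)
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

# Hypothetical districts: density (people/sq. mile) and % population loss
densities = [100, 120, 150, 200, 220, 250]
losses = [3.7, 3.6, 3.8, 4.7, 4.6, 4.8]
```

    On this toy data the estimated threshold lands between the low-loss and high-loss groups, mirroring the paper's finding of distinct loss rates on either side of the split.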

  7. Variable-Threshold Threshold Elements,

    DTIC Science & Technology

    A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of... threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied: variable-threshold threshold elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the

  8. Games With Estimation of Non-Damage Objectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    1998-09-14

    Games against nature illustrate the role of non-damage objectives in producing conflict with uncertain rewards and the role of probing and estimation in reducing that uncertainty and restoring optimal strategies. These models can also illustrate how such games can generate and deepen crises, and the optimal strategies that might be used to end them. This note discusses two essential elements of the analysis of crisis stability that are omitted from current treatments based on first strike stability: a non-damage objective that motivates conflicts sufficiently serious to lead to conflict, and the process of sequential tests that could cause those conflicts to deepen. The model used is a game against nature, simplified sufficiently to make the role of each of these elements obvious.

  9. Automated cortical auditory evoked potentials threshold estimation in neonates.

    PubMed

    Oliveira, Lilian Sanches; Didoné, Dayane Domeneghini; Durante, Alessandra Spada

    2018-02-02

    The evaluation of Cortical Auditory Evoked Potentials has been the focus of scientific studies in infants. Some authors have reported that automated response detection is effective in exploring these potentials in infants, but few have reported its efficacy in the search for thresholds. To analyze the latency, amplitude and thresholds of Cortical Auditory Evoked Potentials using an automatic response detection device in a neonatal population. This is a cross-sectional, observational study. Cortical Auditory Evoked Potentials were recorded in response to pure-tone stimuli at the frequencies 500, 1000, 2000 and 4000 Hz, presented in an intensity range between 0 and 80 dB HL using a single-channel recording. P1 detection was performed in an exclusively automated fashion, using Hotelling's T² statistical test. The latency and amplitude were obtained manually by three examiners. The study comprised 39 neonates up to 28 days old of both sexes, with presence of otoacoustic emissions and no risk factors for hearing loss. With the protocol used, Cortical Auditory Evoked Potential responses were detected in all subjects at high intensities and at threshold. The mean thresholds were 24.8 ± 10.4 dB HL, 25 ± 9.0 dB HL, 28 ± 7.8 dB HL and 29.4 ± 6.6 dB HL for 500, 1000, 2000 and 4000 Hz, respectively. Reliable responses were obtained in the assessment of cortical auditory potentials in the neonates assessed with a device for automatic response detection. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  10. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.

  11. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
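
    A one-level, one-dimensional toy version of the synthesis-with-attenuated-details idea: reconstructing with scaled-down Haar detail coefficients yields a high-frequency-reduced signal that can serve as a locally adaptive threshold function. The Haar filter and the attenuation value are illustrative simplifications of the multiscale method the record describes.

```python
def haar_threshold_function(signal, attenuation=0.2):
    """One-level Haar analysis/synthesis with attenuated detail
    coefficients (even-length signal assumed). With attenuation=1 the
    signal is reconstructed exactly; smaller values smooth it, producing
    a high-frequency-reduced curve usable as a local threshold."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    out = []
    for a, d in zip(approx, detail):
        out += [a + attenuation * d, a - attenuation * d]
    return out
```

    Segmentation would then flag samples (pixels, in the 2-D case) that exceed this spatially varying threshold function rather than a single global level.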

  12. Is "No-Threshold" a "Non-Concept"?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
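
    A common statistical method for estimating a threshold from exposure-response data is a broken-stick ("hockey-stick") fit: the response is zero below the threshold and rises linearly above it. The sketch below, on hypothetical data, grid-searches the threshold that minimizes squared error; it is an illustration of the general idea, not the specific method of this paper.

```python
def hockey_stick_threshold(exposure, response):
    """Grid-search the threshold t of the model r = max(0, b*(x - t)),
    fitting the slope b by least squares at each candidate t and keeping
    the t with the smallest total squared error."""
    best_t, best_sse = None, float("inf")
    for t in exposure:
        excess = [max(0.0, x - t) for x in exposure]
        denom = sum(e * e for e in excess)
        b = sum(r * e for r, e in zip(response, excess)) / denom if denom else 0.0
        sse = sum((r - b * e) ** 2 for r, e in zip(response, excess))
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

# Hypothetical exposure-response data with no response below exposure 3
exposure = [0, 1, 2, 3, 4, 5, 6]
response = [0, 0, 0, 0, 1, 2, 3]
```

    In practice the candidate grid would be finer than the observed doses, and a confidence interval for the threshold would accompany the point estimate.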

  13. Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects

    DTIC Science & Technology

    2017-02-22

    AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella... Grant number FA9550-14-1-0183.

  14. Is it valid to calculate the 3-kilohertz threshold by averaging 2 and 4 kilohertz?

    PubMed

    Gurgel, Richard K; Popelka, Gerald R; Oghalai, John S; Blevins, Nikolas H; Chang, Kay W; Jackler, Robert K

    2012-07-01

    Many guidelines for reporting hearing results use the threshold at 3 kilohertz (kHz), a frequency not measured routinely. This study assessed the validity of estimating the missing 3-kHz threshold by averaging the measured thresholds at 2 and 4 kHz. The estimated threshold was compared to the measured threshold at 3 kHz individually and when used in the pure-tone average (PTA) of 0.5, 1, 2, and 3 kHz in audiometric data from 2170 patients. The difference between the estimated and measured thresholds for 3 kHz was within ± 5 dB in 72% of audiograms, ± 10 dB in 91%, and within ± 20 dB in 99% (correlation coefficient r = 0.965). The difference between the PTA threshold using the estimated threshold compared with using the measured threshold at 3 kHz was within ± 5 dB in 99% of audiograms (r = 0.997). The estimated threshold accurately approximates the measured threshold at 3 kHz, especially when incorporated into the PTA.
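
    The estimation rule studied in this record is simple arithmetic; a sketch of the 3-kHz estimate and the four-frequency pure-tone average it feeds into (threshold values below are illustrative):

```python
def estimate_3k(t2k, t4k):
    """Estimate the unmeasured 3-kHz threshold (dB HL) as the mean of the
    measured 2-kHz and 4-kHz thresholds."""
    return (t2k + t4k) / 2

def pta(t500, t1k, t2k, t3k):
    """Four-frequency pure-tone average (0.5, 1, 2, 3 kHz) used in
    hearing-result reporting guidelines."""
    return (t500 + t1k + t2k + t3k) / 4
```

    For example, measured thresholds of 20 and 30 dB HL at 2 and 4 kHz give an estimated 3-kHz threshold of 25 dB HL, which can then be substituted into the PTA.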

  15. Monopolar Detection Thresholds Predict Spatial Selectivity of Neural Excitation in Cochlear Implants: Implications for Speech Recognition

    PubMed Central

    2016-01-01

    The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech

  16. Compositional threshold for Nuclear Waste Glass Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-04-24

    Within the composition space of glasses, a distinct threshold appears to exist that separates "good" glasses, i.e., those which are sufficiently durable, from "bad" glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region.

  17. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  18. Estimation of Signal Coherence Threshold and Concealed Spectral Lines Applied to Detection of Turbofan Engine Combustion Noise

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2010-01-01

    Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet are reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics based coherence threshold. Examples are presented showing that the underlying tonal and coherent broad band structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise. Copyright 2011 Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.

  19. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
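
    The ANOVA route to broad-sense heritability used in this record can be sketched for a balanced isofemale-line design: between-line and within-line mean squares give the genetic and environmental variance components, and H² is the genetic fraction. The data shapes below are hypothetical; the paper's maximum-likelihood variant and the quantal-response threshold machinery are not reproduced here.

```python
def broad_sense_heritability(lines):
    """One-way ANOVA variance components for a balanced isofemale-line
    design (each inner list = one line's tolerance measurements):
    H^2 = Vg / (Vg + Ve), with Vg = (MSB - MSW) / n and Ve = MSW."""
    k = len(lines)          # number of lines
    n = len(lines[0])       # replicates per line (balanced design assumed)
    grand = sum(sum(g) for g in lines) / (k * n)
    msb = n * sum((sum(g) / n - grand) ** 2 for g in lines) / (k - 1)
    msw = sum(sum((x - sum(g) / n) ** 2 for x in g) for g in lines) / (k * (n - 1))
    vg = max((msb - msw) / n, 0.0)  # negative component estimates clipped to 0
    return 0.0 if vg + msw == 0 else vg / (vg + msw)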

  20. Estimates of Storage Capacity of Multilayer Perceptron with Threshold Logic Hidden Units.

    PubMed

    Kowalczyk, Adam

    1997-11-01

    We estimate the storage capacity of a multilayer perceptron with n inputs, h(1) threshold logic units in the first hidden layer, and 1 output. We show that if the network can memorize 50% of all dichotomies of a randomly selected N-tuple of points of R(n) with probability 1, then N is bounded above. Comparing this bound with estimates of the VC-dimension, we find that, in contrast to the single-neuron case, the VC-dimension exceeds the capacity for sufficiently large n and h(1). The results are based on the derivation of an explicit expression for the number of dichotomies which can be implemented by such a network for a special class of N-tuples of input patterns which has a positive probability of being randomly chosen.

  1. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.

  2. Event-related potential measures of gap detection threshold during natural sleep.

    PubMed

    Muller-Gass, Alexandra; Campbell, Kenneth

    2014-08-01

    The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. These subjects were presented with 2, 4, 8 or 16 ms gaps occurring in 1.5 duration white noise. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. A large P2 was, however, elicited and was significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and a P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  3. User guide for HCR Estimator 2.0: software to calculate cost and revenue thresholds for harvesting small-diameter ponderosa pine.

    Treesearch

    Dennis R. Becker; Debra Larson; Eini C. Lowell; Robert B. Rummer

    2008-01-01

    The HCR (Harvest Cost-Revenue) Estimator is engineering and financial analysis software used to evaluate stand-level financial thresholds for harvesting small-diameter ponderosa pine (Pinus ponderosa Dougl. ex Laws.) in the Southwest United States. The Windows-based program helps contractors and planners to identify costs associated with tree...

  4. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.

  5. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT denoising method for chaotic signals is proposed. First, a new SWT threshold function, which is twice continuously differentiable, is constructed based on Stein's unbiased risk estimate. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.

  6. MRI-leukoaraiosis thresholds and the phenotypic expression of dementia

    PubMed Central

    Mitchell, Sandra M.; Brumback, Babette; Tanner, Jared J.; Schmalfuss, Ilona; Lamar, Melissa; Giovannetti, Tania; Heilman, Kenneth M.; Libon, David J.

    2012-01-01

    Objective: To examine the concept of leukoaraiosis thresholds on working memory, visuoconstruction, memory, and language in dementia. Methods: A consecutive series of 83 individuals with insidious onset/progressive dementia clinically diagnosed with Alzheimer disease (AD) or small vessel vascular dementia (VaD) completed neuropsychological measures assessing working memory, visuoconstruction, episodic memory, and language. A clinical MRI scan was used to quantify leukoaraiosis, total white matter, hippocampus, lacune, and intracranial volume. We performed analyses to detect the lowest level of leukoaraiosis associated with impairment on the neuropsychological measures. Results: Leukoaraiosis ranged from 0.63% to 23.74% of participants' white matter. Leukoaraiosis explained a significant amount of variance in working memory performance when it involved 3% or more of the white matter with curve estimations showing the relationship to be nonlinear in nature. Greater leukoaraiosis (13%) was implicated for impairment in visuoconstruction. Relationships between leukoaraiosis, episodic memory, and language measures were linear or flat. Conclusions: Leukoaraiosis involves specific threshold points for working memory and visuoconstructional tests in AD/VaD spectrum dementia. These data underscore the need to better understand the threshold at which leukoaraiosis affects and alters the phenotypic expression in insidious onset dementia syndromes. PMID:22843264

  7. Using the product threshold model for estimating separately the effect of temperature on male and female fertility.

    PubMed

    Tusell, L; David, I; Bodin, L; Legarra, A; Rafel, O; López-Bejar, M; Piles, M

    2011-12-01

    Animals under environmental thermal stress conditions have reduced fertility due to impairment of some mechanisms involved in their reproductive performance that are different in males and females. As a consequence, the most sensitive periods of time and the magnitude of the effect of temperature on fertility can differ between sexes. The objective of this study was to estimate separately the effect of temperature in different periods around the insemination time on male and on female fertility by using the product threshold model. This model assumes that an observed reproduction outcome is the result of the product of 2 unobserved variables corresponding to the unobserved fertilities of the 2 individuals involved in the mating. A total of 7,625 AI records from rabbits belonging to a line selected for growth rate and indoor daily temperature records were used. The average maximum daily temperature and the proportion of days in which the maximum temperature was greater than 25°C were used as temperature descriptors. These descriptors were calculated for several periods around the day of AI. In the case of males, 4 periods of time covered different stages of spermatogenesis, the transit of the sperm through the epididymis, and the day of AI. For females, 5 periods of time covered the phases of preovulatory follicular maturation including day of AI and ovulation, fertilization and peri-implantational stage of the embryos, embryonic and early fetal periods of gestation, and finally, late gestation until birth. The effect of the different temperature descriptors was estimated in the corresponding male and female liabilities in a set of threshold product models. The temperature on the day of AI seems to be the most relevant temperature descriptor affecting male fertility, because greater temperatures on the day of AI caused a decrease in male fertility (-6% in male fertility rate with respect to thermoneutrality). Departures from the thermal zone in temperature
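The core assumption of the product threshold model can be sketched as follows, assuming probit links on the two latent liabilities (a simplified illustration, not the authors' full fitted model):

```python
import math

def probit(x):
    """Standard normal CDF, the probit link applied to a latent liability."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mating_success_prob(male_liability, female_liability):
    """Product threshold model: the observed AI outcome is the product of
    two unobserved sex-specific fertility variables, so the success
    probability factorizes into male and female probit terms."""
    return probit(male_liability) * probit(female_liability)
```

Because the probability factorizes, a temperature descriptor entering only the male liability (e.g. temperature on the day of AI) depresses success through the first factor alone, which is what lets the model separate the two effects.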

  8. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.

  9. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system for measuring their motion states. To this end, we build a vision system that detects unknown fast-moving objects within a given space and calculates their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which general-purpose interest point detectors can hardly detect. 3D points reconstructed from these interest points are then grouped and maintained for each detected object according to a careful schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the sparse case, reducing the risk of greatly biased estimates. Experiments conducted on real scenes demonstrate the system's ability to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  10. Acoustic Reflexes in Normal-Hearing Adults, Typically Developing Children, and Children with Suspected Auditory Processing Disorder: Thresholds, Real-Ear Corrections, and the Role of Static Compliance on Estimates.

    PubMed

    Saxena, Udit; Allan, Chris; Allen, Prudence

    2017-06-01

    Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear such as ear canal volume and static compliance of the middle ear could possibly affect the measurements of reflex thresholds and thus impact its interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies were conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to that of typically developing children and adults to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured in 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and Pearson correlation coefficient, respectively. Study 1 replicated previous

  11. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  12. Space Object Classification and Characterization Via Multiple Model Adaptive Estimation

    DTIC Science & Technology

    2014-07-14

    The bidirectional reflectance distribution function (BRDF) models the light distribution scattered from the surface due to the incident light. The BRDF at any point on the surface is a function of two directions. [Garbled figure residue removed; Fig. 2: Reflection Geometry and Space Object Shape Model.] Richard Linares, Director's Postdoctoral Fellow, Space Science.

  13. A geographic analysis of population density thresholds in the influenza pandemic of 1918–19

    PubMed Central

    2013-01-01

    Background Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918–19 in India, where over 15 million people died in the short span of less than one year. Methods Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918–19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. Results The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). Conclusions This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold. PMID:23425498

  14. Viral Load Criteria and Threshold Optimization to Improve HIV Incidence Assay Characteristics - A CEPHIA Analysis

    PubMed Central

    Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex

    2016-01-01

    Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561

  15. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
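A toy version of adaptive-threshold pulse picking on a cleaned pulsatile component might look like this (the decay rate, reset factor, and acceptance rule are illustrative assumptions, not the paper's physiologically derived rule set, and the EEMD step is omitted):

```python
def detect_pulses(signal, decay=0.97, reset=0.6):
    """Adaptive-threshold peak picking: the acceptance threshold decays
    geometrically between beats and is reset to a fraction of each
    accepted peak's amplitude. A sample is taken as a pulse peak when it
    is a local maximum above the current threshold."""
    peaks, thresh = [], 0.0
    for i in range(1, len(signal) - 1):
        thresh *= decay
        if signal[i - 1] < signal[i] >= signal[i + 1] and signal[i] > thresh:
            peaks.append(i)
            thresh = reset * signal[i]
    return peaks

def pulse_rate_bpm(peaks, fs):
    """Mean pulse rate (beats per minute) from peak indices at fs Hz."""
    if len(peaks) < 2:
        return 0.0
    mean_interval = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
    return 60.0 / mean_interval
```

The decaying threshold is what makes the detector robust to slow amplitude drift: a quiet beat following a large one is still accepted once the threshold has decayed below it.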

  16. Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.

    PubMed

    Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew

    2018-01-01

    Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  17. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
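The "optimal threshold computed from a histogram" step can be illustrated with Otsu's between-class-variance criterion over the samples in the shell (a common stand-in; the paper's exact criterion may differ):

```python
def otsu_threshold(values, bins=32):
    """Histogram-based optimal threshold: pick the bin edge maximizing
    the between-class variance of the two resulting groups."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1

    centers = [lo + (i + 0.5) * width for i in range(bins)]
    total = len(values)
    sum_all = sum(h * c for h, c in zip(hist, centers))

    best_t, best_var = lo, -1.0
    w0, sum0 = 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t
```

In the DT level set, this threshold is recomputed from the propagating shell's histogram at each step, so it adapts as the shell approaches the tumor boundary.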

  18. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.

  19. Perceptual color difference metric including a CSF based on the perception threshold

    NASA Astrophysics Data System (ADS)

    Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    The study of the Human Visual System (HVS) is very useful for quantifying the quality of a picture, for predicting which information will be perceived in it, and for applying suitably adapted tools ... The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts this behavior for the three color channels. CSFs have commonly been constructed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach to spatio-chromatic CSF construction based on matching experiments to estimate the perception threshold: the contrast of a test stimulus is matched to that of a reference stimulus. The results obtained differ considerably from the standard approaches, as the chromatic CSFs show band-pass rather than low-pass behavior. The resulting model has been integrated into a perceptual color difference metric inspired by s-CIELAB, and the metric is evaluated with both objective and subjective procedures.

  20. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach is proposed for estimating the orientation of objects lacking axial symmetry. The algorithm estimates the orientation of a specific known 3D object based on its 3D model and consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by rendering the model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches an observed image descriptor against the training image descriptors. Experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
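Viewpoint sampling that is roughly even over a sphere can be sketched with a Fibonacci spiral (a simple substitute for the geosphere construction named in the abstract):

```python
import math

def sphere_viewpoints(n):
    """Distribute n viewpoints roughly evenly on the unit sphere using
    the Fibonacci spiral: latitudes are spaced uniformly in z, and
    longitudes advance by the golden angle."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # uniform in [-1, 1]
        r = math.sqrt(max(0.0, 1.0 - z * z))   # radius of that latitude ring
        theta = golden * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts
```

Each returned point can serve as a camera position looking at the model's centroid, giving one training image per viewpoint.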

  1. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
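The position half of the scheme, Störmer-Verlet integration, can be sketched in one dimension with constant acceleration (an illustration of the integrator itself, not NASA's full drag-estimation pipeline):

```python
def verlet_positions(p0, p1, accel, dt, steps):
    """Störmer-Verlet position integration: each new position is computed
    from the two previous samples plus the acceleration term,
    x[n+1] = 2 x[n] - x[n-1] + a dt^2, a scheme that avoids explicitly
    differentiating noisy position data. 1-D, constant acceleration."""
    xs = [p0, p1]
    for _ in range(steps):
        xs.append(2.0 * xs[-1] - xs[-2] + accel * dt * dt)
    return xs
```

For a purely quadratic trajectory (constant acceleration) the recurrence is exact, which is why it tracks a free-fall path to rounding error in the test below.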

  2. Generalised form of a power law threshold function for rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders

    2010-05-01

    The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1 · An^α2 · D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hour or n-day antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied to estimate thresholds for the triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322 · A7d^(-1) · D^(-0.43) and I = 28534 · A150d^(-1) · D^(-0.43) for debris flows having volumes greater than 3000 m³. Thresholds are also derived for debris flows greater than 200,000 m³ and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m³ thresholds and from 6 to 0 false alarms for the 200,000 m³ thresholds.
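Evaluating whether a storm exceeds a threshold of the proposed form I = α1 · An^α2 · D^β is straightforward (the parameter values in the test reuse the 3000 m³ debris-flow threshold quoted in the abstract; the combined two-function model would simply require both checks to pass):

```python
def exceeds_threshold(intensity, duration, antecedent, a1, a2, beta):
    """Evaluate the generalised power-law threshold
    I_thr = a1 * An**a2 * D**beta and report whether an observed storm
    (intensity in mm/h, duration in h, antecedent precipitation in mm)
    lies at or above it."""
    return intensity >= a1 * antecedent ** a2 * duration ** beta
```

Note how the negative exponent on antecedent precipitation lowers the intensity threshold after wet periods, which is the physical intuition behind including An.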

  3. Threshold altitude resulting in decompression sickness

    NASA Technical Reports Server (NTRS)

    Kumar, K. V.; Waligora, James M.; Calkins, Dick S.

    1990-01-01

    A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis on the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a noprebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m; while under a noprebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.
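The logistic-regression reading of a DCS threshold can be sketched, the fitting itself aside, by inverting a logistic curve for the altitude at which predicted incidence reaches a target level (the coefficients used in the test are hypothetical placeholders, not the paper's fitted values):

```python
import math

def dcs_probability(altitude_m, b0, b1):
    """Logistic model of DCS incidence as a function of altitude.
    b0, b1 are fitted regression coefficients (hypothetical here)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * altitude_m)))

def threshold_altitude(p_target, b0, b1):
    """Invert the logistic curve: the altitude at which predicted
    incidence reaches p_target."""
    logit = math.log(p_target / (1.0 - p_target))
    return (logit - b0) / b1
```

Separate coefficient sets for each exposure profile (prebreathe, exercise, duration) would yield the profile-specific thresholds reported in the abstract.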

  4. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  5. Threshold groundwater ages and young water fractions estimated from 3H, 3He, and 14C

    NASA Astrophysics Data System (ADS)

    Kirchner, James; Jasechko, Scott

    2016-04-01

    It is widely recognized that a water sample taken from a running stream is not described by a single age, but rather by a distribution of ages. It is less widely recognized that the same principle holds true for groundwaters, as indicated by the commonly observed discordances between model ages obtained from different tracers (e.g., 3H vs 14C) in the same sample. Water age distributions are often characterized by their mean residence times (MRTs). However, MRT estimates are highly uncertain because they depend on the shape of the assumed residence time distribution (in particular on the thickness of the long-time tail), which is difficult or impossible to constrain with data. Furthermore, because MRTs are typically nonlinear functions of age tracer concentrations, they are subject to aggregation bias. That is, MRT estimates derived from a mixture of waters with different ages (and thus different tracer concentrations) will systematically underestimate the mixture's true mean age. Here, building on recent work with stable isotope tracers in surface waters [1-3], we present a new framework for using 3H, 3He and 14C to characterize groundwater age distributions. Rather than describing groundwater age distributions by their MRT, we characterize them by the fraction of the distribution that is younger or older than a threshold age. The threshold age that separates "young" from "old" water depends on the characteristics of the specific tracer, including its history of atmospheric inputs. Our approach depends only on whether a given slice of the age distribution is younger or older than the threshold age, but not on how much younger or older it is. Thus our approach is insensitive to the tails of the age distribution, and is therefore relatively unaffected by uncertainty in the distribution's shape. Here we show that concentrations of 3H, 3He, and 14C are almost linearly related to the fractions of water that are younger or older than specified threshold ages. These

  6. Objective definition of rainfall intensity-duration thresholds for post-fire flash floods and debris flows in the area burned by the Waldo Canyon fire, Colorado, USA

    USGS Publications Warehouse

    Staley, Dennis M.; Gartner, Joseph E.; Kean, Jason W.

    2015-01-01

    We present an objectively defined rainfall intensity-duration (I-D) threshold for the initiation of flash floods and debris flows for basins recently burned in the 2012 Waldo Canyon fire near Colorado Springs, Colorado, USA. Our results are based on 453 rainfall records which include 8 instances of hazardous flooding and debris flow from 10 July 2012 to 14 August 2013. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow or flood occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. The equation I = 11.6 · D^(-0.7) represents the I-D threshold (I, in mm/h) for durations (D, in hours) ranging from 0.083 h (5 min) to 1 h for basins burned by the 2012 Waldo Canyon fire. As periods of high-intensity rainfall over short durations (less than 1 h) produced all of the debris flow and flood events, real-time monitoring of rainfall conditions will result in very short lead times for early-warning. Our results highlight the need for improved forecasting of the rainfall rates during short-duration, high-intensity convective rainfall events.
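The objective threshold definition, maximizing correct predictions while penalizing false alarms, can be sketched as a grid search over the coefficient of an I-D power law with a fixed exponent (the scoring rule and the toy data are illustrative, not the study's records):

```python
def best_id_threshold(storms, candidates_a, d_exp=-0.7):
    """Grid-search the coefficient a in I_thr = a * D**d_exp, scoring
    each candidate by true positives minus false positives on labelled
    storm records (intensity mm/h, duration h, triggered flag)."""
    def score(a):
        tp = fp = 0
        for intensity, duration, triggered in storms:
            predicted = intensity >= a * duration ** d_exp
            if predicted and triggered:
                tp += 1
            elif predicted and not triggered:
                fp += 1
        return tp - fp
    return max(candidates_a, key=score)
```

A richer objective would also penalize missed events (Type II errors); the structure of the search is the same.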

  7. Relationship between parental estimate and an objective measure of child television watching

    PubMed Central

    Robinson, Jodie L; Winiewicz, Dana D; Fuerch, Janene H; Roemmich, James N; Epstein, Leonard H

    2006-01-01

    Many young children have televisions in their bedrooms, which may influence the relationship between parental estimate and objective measures of child television usage/week. Parental estimates of child television time of eighty 4–7 year old children (6.0 ± 1.2 years) at the 75th BMI percentile or greater (90.8 ± 6.8 BMI percentile) were compared to an objective measure of television time obtained from TV Allowance™ devices attached to every television in the home over a three week period. Results showed that parents overestimate their child's television time compared to an objective measure when no television is present in the bedroom by 4 hours/week (25.4 ± 11.5 vs. 21.4 ± 9.1) in comparison to underestimating television time by over 3 hours/week (26.5 ± 17.2 vs. 29.8 ± 14.4) when the child has a television in their bedroom (p = 0.02). Children with a television in their bedroom spend more objectively measured hours in television time than children without a television in their bedroom (29.8 ± 14.2 versus 21.4 ± 9.1, p = 0.003). Research on child television watching should take into account television watching in bedrooms, since it may not be adequately assessed by parental estimates. PMID:17129381

  8. Estimating effective data density in a satellite retrieval or an objective analysis

    NASA Technical Reports Server (NTRS)

    Purser, R. J.; Huang, H.-L.

    1993-01-01

    An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.

  9. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    PubMed

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.

  10. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    NASA Astrophysics Data System (ADS)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  11. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  12. Using thresholds based on risk of cardiovascular disease to target treatment for hypertension: modelling events averted and number treated

    PubMed Central

    Baker, Simon; Priest, Patricia; Jackson, Rod

    2000-01-01

    Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90-169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15% the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to

  13. The impact of cochlear fine structure on hearing thresholds and DPOAE levels

    NASA Astrophysics Data System (ADS)

    Lee, Jungmee; Long, Glenis; Talmadge, Carrick L.

    2004-05-01

    Although otoacoustic emissions (OAEs) are used as clinical and research tools, the correlation between OAEs and behavioral estimates of hearing status is not large. In normal-hearing individuals, the level of OAEs can vary as much as 30 dB when the frequency is changed by less than 5%. These pseudoperiodic variations of OAE level with frequency are known as fine structure. Hearing thresholds measured with high-frequency resolution reveal a similar (up to 15 dB) fine structure. We examine the impact of OAE and threshold fine structures on the prediction of auditory thresholds from OAE levels. Distortion product otoacoustic emissions (DPOAEs) were measured with sweeping primary tones. Psychoacoustic detection thresholds were measured using pure tones, sweep tones, FM tones, and narrow-band noise. Sweep DPOAE and narrow-band threshold measures provide estimates that are less influenced by cochlear fine structure and should lead to a higher correlation between OAE levels and psychoacoustic thresholds. [Research supported by PSC CUNY, NIDCD, the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education, and the Ministry of Education in Korea.]

  14. Threshold selection for classification of MR brain images by clustering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation on the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, will allow the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (or the distance between classes) has been established using the clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.

  15. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation on the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, will allow the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (or the distance between classes) has been established using the clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.
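
    The binarization and white-pixel count described above can be sketched in a few lines of NumPy (the helper names are ours; the thresholds T = 80 and T = 30 are the values reported in the record):

```python
import numpy as np

def binarize(image: np.ndarray, threshold: int) -> np.ndarray:
    """Binarize a grey-level image: pixels above the threshold become white objects."""
    return (image > threshold).astype(np.uint8)

def white_area(image: np.ndarray, threshold: int) -> int:
    """Number of white pixels in the binary image (the feature fed to clustering)."""
    return int(binarize(image, threshold).sum())
```

In the study's workflow, `white_area(pd_image, 80)` or `white_area(t2_image, 30)` would yield one scalar per scan, and those scalars are what the dendrogram-based clustering operates on.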

  16. A comparison of signal detection theory to the objective threshold/strategic model of unconscious perception.

    PubMed

    Haase, Steven J; Fisk, Gary D

    2011-08-01

    A key problem in unconscious perception research is ruling out the possibility that weak conscious awareness of stimuli might explain the results. In the present study, signal detection theory was compared with the objective threshold/strategic model as explanations of results for detection and identification sensitivity in a commonly used unconscious perception task. In the task, 64 undergraduate participants detected and identified one of four briefly displayed, visually masked letters. Identification was significantly above baseline (i.e., proportion correct > .25) at the highest detection confidence rating. This result is most consistent with signal detection theory's continuum of sensory states and serves as a possible index of conscious perception. However, there was limited support for the other model in the form of a predicted "looker's inhibition" effect, which produced identification performance that was significantly below baseline. One additional result, an interaction between the target stimulus and type of mask, raised concerns for the generality of unconscious perception effects.

  17. STIMULUS AND TRANSDUCER EFFECTS ON THRESHOLD

    PubMed Central

    Flamme, Gregory A.; Geda, Kyle; McGregor, Kara; Wyllys, Krista; Deiters, Kristy K.; Murphy, William J.; Stephenson, Mark R.

    2015-01-01

    Objective This study examined differences in thresholds obtained under Sennheiser HDA200 circumaural earphones using pure tone, equivalent rectangular noise bands, and 1/3 octave noise bands relative to thresholds obtained using Telephonics TDH-39P supra-aural earphones. Design Thresholds were obtained via each transducer and stimulus condition six times within a 10-day period. Study Sample Forty-nine adults were selected from a prior study to represent low, moderate, and high threshold reliability. Results The results suggested that (1) only small adjustments were needed to reach equivalent TDH-39P thresholds, (2) pure-tone thresholds obtained with HDA200 circumaural earphones had reliability equal to or better than those obtained using TDH-39P earphones, (3) the reliability of noise-band thresholds improved with broader stimulus bandwidth and was either equal to or better than pure-tone thresholds, and (4) frequency-specificity declined with stimulus bandwidths greater than one Equivalent Rectangular Band, which could complicate early detection of hearing changes that occur within a narrow frequency range. Conclusions These data suggest that circumaural earphones such as the HDA200 headphones provide better reliability for audiometric testing as compared to the TDH-39P earphones. These data support the use of noise bands, preferably ERB noises, as stimuli for audiometric monitoring. PMID:25549164

  18. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
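
    The abstract does not reproduce the (4, 6, 12) or (3^4, 6) polynomials, so as a stand-in the sketch below extracts a bond threshold from a classic critical polynomial of the same kind: for the triangular lattice, p_c is the root of p^3 - 3p + 1 in [0, 1], which is known exactly as p_c = 2 sin(pi/18) ≈ 0.347296:

```python
import math

def bisect_root(f, lo=0.0, hi=1.0, tol=1e-10):
    """Find the root of f in [lo, hi] by bisection (f must change sign there)."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)   # root lies in the upper half
        else:
            hi = mid                # root lies in the lower half
    return 0.5 * (lo + hi)

# Triangular-lattice bond critical polynomial (illustrative, not from this paper)
p_c = bisect_root(lambda p: p**3 - 3*p + 1)
exact = 2 * math.sin(math.pi / 18)
```

The paper's method would produce analogous integer-coefficient polynomials for the (4, 6, 12) and (3^4, 6) lattices, whose roots in [0, 1] give the quoted threshold estimates.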

  19. Improved Estimation of Orbits and Physical Properties of Objects in GEO

    NASA Astrophysics Data System (ADS)

    Bradley, B.; Axelrad, P.

    2013-09-01

    Orbital debris is a major concern for satellite operators, both commercial and military. Debris in the geosynchronous (GEO) belt is of particular concern because this unique region is such a valuable, limited resource, and, from the ground we cannot reliably track and characterize GEO objects smaller than 1 meter in diameter. Space-based space surveillance (SBSS) is required to observe GEO objects without weather restriction and with improved viewing geometry. SBSS satellites have thus far been placed in Sun-synchronous orbits. This paper investigates the benefits to GEO orbit determination (including the estimation of mass, area, and shape) that arises from placing observing satellites in geosynchronous transfer orbit (GTO) and a sub-GEO orbit. Recently, several papers have reported on simulation studies to estimate orbits and physical properties; however, these studies use simulated objects and ground-based measurements, often with dense and long data arcs. While this type of simulation provides valuable insight into what is possible, as far as state estimation goes, it is not a very realistic observing scenario and thus may not yield meaningful accuracies. Our research improves upon simulations published to date by utilizing publicly available ephemerides for the WAAS satellites (Anik F1R and Galaxy 15), accurate at the meter level. By simulating and deliberately degrading right ascension and declination observations, consistent with these ephemerides, a realistic assessment of the achievable orbit determination accuracy using GTO and sub-GEO SBSS platforms is performed. Our results show that orbit accuracy is significantly improved as compared to a Sun-synchronous platform. Physical property estimation is also performed using simulated astrometric and photometric data taken from GTO and sub-GEO sensors. Simulations of SBSS-only as well as combined SBSS and ground-based observation tracks are used to study the improvement in area, mass, and shape estimation

  20. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented

  1. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse the error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.

  2. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

    Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
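
    The feed-forward decision rule described above can be sketched in a few lines (the names are hypothetical; in the studies cited, the per-class thresholds would come from the iterative selection algorithm and the posteriors from a calibrated base classifier):

```python
import numpy as np

def open_set_predict(posteriors: np.ndarray, thresholds: np.ndarray) -> int:
    """Feed-forward open-set decision: the base classifier proposes the most
    likely class; the example is rejected as unknown (-1) when the posterior
    for that candidate class falls below its per-class threshold."""
    candidate = int(np.argmax(posteriors))
    if posteriors[candidate] < thresholds[candidate]:
        return -1  # not a confident member of any known class
    return candidate
```

A confident example keeps its candidate label, while a flat posterior vector, typical of an example from a class unseen at training time, is mapped to the unknown label.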

  3. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
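
    For contrast with fastKDE's objective kernel-shape and bandwidth selection, the baseline it improves on, a fixed-bandwidth Gaussian KDE with Silverman's rule-of-thumb bandwidth, can be sketched in a few lines (this is the conventional approach, not the Bernacchia-Pigolotti estimator):

```python
import numpy as np

def gaussian_kde_1d(samples: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Fixed-bandwidth Gaussian KDE evaluated on a grid. The bandwidth comes
    from Silverman's rule of thumb; fastKDE instead chooses the kernel shape
    and bandwidth objectively from the data."""
    n = samples.size
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)  # Silverman bandwidth
    z = (grid[:, None] - samples[None, :]) / h
    # Sum one Gaussian bump per sample, normalized to unit total mass
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
```

Applied to draws from a standard normal, the estimate integrates to one and peaks near zero; the rule-of-thumb bandwidth is what the objective methods discussed in the record seek to replace.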

  4. Experimental evidence of a pathogen invasion threshold

    PubMed Central

    Krkošek, Martin

    2018-01-01

    Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts l−1) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876

  5. Threshold concepts: implications for the management of natural resources

    USGS Publications Warehouse

    Guntenspergen, Glenn R.; Gross, John

    2014-01-01

    Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers have a need for new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Recognition of additional threshold concepts include: utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system) as well as ecological thresholds. All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.

  6. Inclusion of Exercise Intensities Above the Lactate Threshold in VO2/Running Speed Regression Does not Improve the Precision of Accumulated Oxygen Deficit Estimation in Endurance-Trained Runners

    PubMed Central

    Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.

    2005-01-01

    The present study intended to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of the accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg^-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg^-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg^-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg^-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared to the use of CR. Key Points It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However, data on the precision of those AOD measurements is rarely provided. We have
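
    The TED/AOD bookkeeping behind this design can be illustrated with a linear fit and an extrapolation. All numbers below (speeds, VO2 values, supramaximal duration and accumulated VO2) are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical submaximal data: running speed (km/h) vs steady-state VO2 (ml/kg/min)
speeds = np.array([10.0, 12.0, 14.0, 16.0])
vo2 = np.array([35.0, 41.0, 47.5, 53.5])

# VO2/running-speed regression (RSR): ordinary least-squares line
slope, intercept = np.polyfit(speeds, vo2, 1)

# Extrapolate the regression to a supramaximal speed to get total energy demand
supra_speed = 20.0
ted = slope * supra_speed + intercept        # ml/kg/min

# AOD = predicted demand over the run minus the oxygen actually consumed
duration_min = 3.0
measured_vo2_total = 150.0                   # hypothetical accumulated VO2, ml/kg
aod = ted * duration_min - measured_vo2_total
```

Fitting the regression on all submaximal points (CR) versus only the sub-LT points (STR) changes the slope and intercept, and hence the extrapolated demand; the study's finding is that this makes little practical difference to AOD precision.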

  7. Fuzzy connected object definition in images with respect to co-objects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Saha, Punam K.; Lotufo, Roberto A.

    1999-05-01

    Tangible solutions to practical image segmentation are vital to ensure progress in many applications of medical imaging. Toward this goal, we previously proposed a theory and algorithms for fuzzy connected object definition in n- dimensional images. Their effectiveness has been demonstrated in several applications including multiple sclerosis lesion detection/delineation, MR Angiography, and craniofacial imaging. The purpose of this work is to extend the earlier theory and algorithms to fuzzy connected object definition that considers all relevant objects in the image simultaneously. In the previous theory, delineation of the final object from the fuzzy connectivity scene required the selection of a threshold that specifies the weakest `hanging-togetherness' of image elements relative to each other in the object. Selection of such a threshold was not trivial and has been an active research area. In the proposed method of relative fuzzy connectivity, instead of defining an object on its own based on the strength of connectedness, all co-objects of importance that are present in the image are also considered and the objects are let to compete among themselves in having image elements as their members. In this competition, every pair of elements in the image will have a strength of connectedness in each object. The object in which this strength is highest will claim membership of the elements. This approach to fuzzy object definition using a relative strength of connectedness eliminates the need for a threshold of strength of connectedness that was part of the previous definition. It seems to be more natural since it relies on the fact that an object gets defined in an image by the presence of other objects that coexist in the image. All specified objects are defined simultaneously in this approach. The concept of iterative relative fuzzy connectivity has also been introduced. 
Robustness of relative fuzzy objects with respect to selection of reference image elements
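
    The competition rule described above (each element joins the object in which its strength of connectedness is highest) can be sketched on a toy 1-D image. The affinity function and seeds below are illustrative assumptions, not the paper's actual formulation:

    ```python
    from heapq import heappush, heappop

    def connectedness_map(affinity, seed, n):
        # strength of connectedness: best path from the seed, where a path is
        # as strong as its weakest link (Dijkstra-style max-min propagation)
        strength = [0.0] * n
        strength[seed] = 1.0
        heap = [(-1.0, seed)]
        while heap:
            s, i = heappop(heap)
            s = -s
            if s < strength[i]:
                continue  # stale heap entry
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    cand = min(s, affinity(i, j))
                    if cand > strength[j]:
                        strength[j] = cand
                        heappush(heap, (-cand, j))
        return strength

    def relative_fuzzy_segment(image, seeds):
        # each element joins the object in which its connectedness is highest
        n = len(image)
        aff = lambda i, j: 1.0 - abs(image[i] - image[j])   # toy affinity
        maps = [connectedness_map(aff, s, n) for s in seeds]
        return [max(range(len(seeds)), key=lambda k: maps[k][i]) for i in range(n)]

    image = [0.10, 0.15, 0.20, 0.80, 0.85, 0.90]   # two homogeneous regions
    labels = relative_fuzzy_segment(image, seeds=[0, 5])
    print(labels)   # [0, 0, 0, 1, 1, 1] -- no connectedness threshold needed
    ```

    Note how no threshold on the strength of connectedness appears anywhere: the boundary between the two objects falls where the other object's seed wins the competition.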

  8. Uncertainties in extreme surge level estimates from observational records.

    PubMed

    van den Brink, H W; Können, G P; Opsteegh, J D

    2005-06-15

    Ensemble simulations with a total length of 7540 years are generated with a climate model and coupled to a simple surge model to transform the wind field over the North Sea into the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10⁴ years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak-over-threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold and its tendency towards too-low estimates, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the exceedance rate, lambda, should not be chosen larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10⁴-year wind speed from such records must always be interpreted as a lower limit.
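
    The recommended GEV fit to storm-season maxima can be illustrated with the Gumbel special case (GEV shape parameter zero), fitted here by the method of moments on synthetic maxima. The record length mirrors the paper's 116-year records, but the parameter values are arbitrary stand-ins:

    ```python
    import math, random

    def gumbel_fit(maxima):
        # method-of-moments fit of the Gumbel (GEV shape = 0) distribution
        n = len(maxima)
        mean = sum(maxima) / n
        var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
        beta = math.sqrt(6.0 * var) / math.pi       # scale
        mu = mean - 0.5772 * beta                   # location (Euler-Mascheroni)
        return mu, beta

    def return_level(mu, beta, T):
        # surge level exceeded on average once every T storm seasons
        return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

    random.seed(1)
    # synthetic 116-season record of skew-surge maxima (cm); location 250 and
    # scale 40 are invented, sampled by inverting the Gumbel CDF
    maxima = [250.0 - 40.0 * math.log(-math.log(random.random()))
              for _ in range(116)]
    mu, beta = gumbel_fit(maxima)
    print(round(return_level(mu, beta, 1e4), 1))
    ```

    The wide gap between the 116-year record length and the 10⁴-year target return period is exactly why the paper stresses sample uncertainty.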

  9. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold-velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold-velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  10. Simulation analysis of photometric data for attitude estimation of unresolved space objects

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang

    2017-10-01

    Acquiring attitude information for unresolved space objects, such as micro-nano satellites and GEO objects, from ground-based optical observations is a challenge for space surveillance. In this paper, a method is proposed to estimate the attitude state of a space object (SO) from a simulation analysis of photometric data in different attitude states. The object shape model was established and the parameters of the BRDF model were determined; the space object photometric model was then established. Furthermore, the photometric data of space objects in different states were analyzed by simulation and the regular characteristics of the photometric curves were summarized. The simulation results show that the photometric characteristics are useful for attitude inversion. Thus, a new idea is provided for space object identification.

  11. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises its unit weight, and reduces its shear strength in the landslide colluvium. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslide, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. 
Finally
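
    The definition of the critical threshold (the accumulated rainfall at which the safety factor reaches 1.0) suggests a simple root-finding sketch. The factor-of-safety curve below is a hypothetical stand-in for the calibrated slope model, assumed to decrease monotonically with rainfall:

    ```python
    def critical_rainfall(factor_of_safety, lo=0.0, hi=1000.0, tol=1e-6):
        # bisect for the accumulated rainfall (mm) at which the factor of
        # safety crosses 1.0, assuming FS decreases monotonically with rainfall
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if factor_of_safety(mid) > 1.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # hypothetical linear FS curve standing in for the numerical slope model
    fs = lambda rain: 1.8 - 0.002 * rain
    print(round(critical_rainfall(fs), 1))    # FS = 1.0 at 400.0 mm
    ```

    In practice each evaluation of the safety factor is a full coupled seepage/stability run, so the monotonicity assumption is what keeps a bracketing search affordable.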

  12. Bone-anchored Hearing Aids: correlation between pure-tone thresholds and outcome in three user groups.

    PubMed

    Pfiffner, Flurin; Kompis, Martin; Stieger, Christof

    2009-10-01

    To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review at a tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds, as well as speech understanding using German 2-digit numbers and monosyllabic words, were measured and analyzed. Outcome measures were the correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds, as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with the BC threshold of the better ear (correlation coefficients, r2 = 0.237 to 0.419; p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.

  13. Determination of Cost-Effectiveness Threshold for Health Care Interventions in Malaysia.

    PubMed

    Lim, Yen Wei; Shafie, Asrul Akmal; Chua, Gin Nie; Ahmad Hassali, Mohammed Azmi

    2017-09-01

    One major challenge in prioritizing health care using cost-effectiveness (CE) information is when alternatives are more expensive but more effective than existing technology. In such a situation, an external criterion in the form of a CE threshold that reflects the willingness to pay (WTP) per quality-adjusted life-year is necessary. To determine a CE threshold for health care interventions in Malaysia. A cross-sectional, contingent valuation study was conducted using a stratified multistage cluster random sampling technique in four states in Malaysia. One thousand thirteen respondents were interviewed in person for their socioeconomic background, quality of life, and WTP for a hypothetical scenario. The CE thresholds established using the nonparametric Turnbull method ranged from MYR12,810 to MYR22,840 (~US $4,000-US $7,000), whereas those estimated with the parametric interval regression model were between MYR19,929 and MYR28,470 (~US $6,200-US $8,900). Key factors that affected the CE thresholds were education level, estimated monthly household income, and the description of health state scenarios. These findings suggest that there is no single WTP value for a quality-adjusted life-year. The CE threshold estimated for Malaysia was found to be lower than the threshold value recommended by the World Health Organization. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

    Fall prevention and detection systems must overcome many challenges to be efficient. Among the difficult problems in vision-based systems are obtrusion, occlusion, and overlay. Other associated issues are privacy, cost, noise, computational complexity, and the definition of threshold values. Vision-based estimation of human motion usually involves partial overlay, caused by the viewing direction between objects, or between body parts and the camera, and these issues have to be taken into consideration. This paper proposes a human posture analysis and fall detection method based on a dynamic threshold and bounding box posture analysis, using a multiple-Kinect camera setup. The proposed work uses only two Kinect cameras to acquire the distributed values and to differentiate falls from normal activities. If the peak head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm that a fall has occurred. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton-overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons with dynamic-threshold and bounding box posture analysis, which to our knowledge has not been reported before.
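
    The two-stage decision rule (peak head velocity against a dynamic threshold, then bounding-box confirmation) might be sketched as follows. The threshold formula and box test are illustrative assumptions, since the abstract does not give the exact definitions:

    ```python
    def dynamic_threshold(velocities, k=3.0):
        # illustrative adaptive threshold: mean + k std devs of recent head velocity
        n = len(velocities)
        mean = sum(velocities) / n
        std = (sum((v - mean) ** 2 for v in velocities) / n) ** 0.5
        return mean + k * std

    def bounding_box_confirms_fall(width, height):
        # a lying posture makes the 3D bounding box wider than it is tall
        return width > height

    def detect_fall(history, peak_velocity, box_w, box_h):
        # stage 1: velocity peak against the dynamic threshold;
        # stage 2: bounding-box posture analysis to confirm the fall
        if peak_velocity > dynamic_threshold(history):
            return bounding_box_confirms_fall(box_w, box_h)
        return False

    normal = [0.30, 0.40, 0.35, 0.50, 0.45]   # m/s, routine head movement
    print(detect_fall(normal, peak_velocity=2.8, box_w=1.6, box_h=0.5))  # True
    print(detect_fall(normal, peak_velocity=0.6, box_w=0.5, box_h=1.7))  # False
    ```

    The second stage is what keeps fast-but-harmless head motions (sitting down abruptly, bending over) from being reported as falls.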

  15. The correlation dimension: a useful objective measure of the transient visual evoked potential?

    PubMed

    Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J

    2008-01-14

    Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
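
    A common way to estimate the correlation dimension is the Grassberger-Procaccia slope of log C(r) versus log r. The minimal sketch below skips the delay-embedding step a real VEP analysis would use and applies the estimator directly to a 1-D sequence, where the expected dimension is 1:

    ```python
    import math, random

    def correlation_integral(points, r):
        # fraction of point pairs closer than r
        n = len(points)
        close = sum(1 for i in range(n) for j in range(i + 1, n)
                    if abs(points[i] - points[j]) < r)
        return 2.0 * close / (n * (n - 1))

    def correlation_dimension(points, r1=0.05, r2=0.2):
        # Grassberger-Procaccia estimate: slope of log C(r) vs log r
        # over an assumed scaling region [r1, r2]
        c1 = correlation_integral(points, r1)
        c2 = correlation_integral(points, r2)
        return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

    random.seed(0)
    signal = [random.random() for _ in range(500)]   # stand-in for a VEP trace
    print(round(correlation_dimension(signal), 2))   # close to 1 for a 1-D set
    ```

    Choosing the scaling region is itself a judgment call; surrogate testing, as in the study, guards against reading structure into what is effectively noise.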

  16. Regional rainfall thresholds for landslide occurrence using a centenary database

    NASA Astrophysics Data System (ADS)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia

    2017-04-01

    Rainfall is one of the most important triggering factors for landslides worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on identifying rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower-limit and upper-limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides occurring in Portugal from 1865 to 2010 that caused fatalities, injuries, missing people, or evacuated or homeless people. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the database was recently updated to include landslides that did not cause any human damage, which were also considered for this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of its rainfall data, with records that start in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity-duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity-duration) with the highest return period was
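
    The two computational steps described (cumulative antecedent rainfall over windows of 1 to 90 days, then a Gumbel return period for each quantity-duration pair) can be sketched as follows; the rainfall record and Gumbel parameters are hypothetical:

    ```python
    import math

    def antecedent_rainfall(daily, event_day, duration):
        # cumulative rainfall over `duration` days ending on the event day
        start = max(0, event_day - duration + 1)
        return sum(daily[start:event_day + 1])

    def gumbel_return_period(x, mu, beta):
        # return period of quantity x under a fitted Gumbel(mu, beta) distribution
        p_not_exceeded = math.exp(-math.exp(-(x - mu) / beta))
        return 1.0 / (1.0 - p_not_exceeded)

    daily = [0, 5, 30, 80, 60, 10]   # mm/day, hypothetical record; event on day 4
    for duration in (1, 2, 3):
        q = antecedent_rainfall(daily, event_day=4, duration=duration)
        print(duration, q, round(gumbel_return_period(q, mu=50.0, beta=20.0), 1))
    ```

    Scanning all durations and keeping the pair with the highest return period, as the study does, identifies the rainfall combination most anomalous for that event.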

  17. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before the edge is segregated from the background; usually, two static values are chosen as the thresholds based on developer experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
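
    The paper's cross-zero interpolation function is not reproduced here; for comparison, a widely used alternative for picking the two Canny hysteresis thresholds automatically is the median heuristic, sketched below:

    ```python
    def auto_canny_thresholds(pixels, sigma=0.33):
        # median heuristic: bracket the median intensity to get low/high
        # hysteresis thresholds (an alternative to hand-picked static values;
        # NOT the cross-zero method proposed in the paper)
        s = sorted(pixels)
        n = len(s)
        median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
        low = max(0, int((1.0 - sigma) * median))
        high = min(255, int((1.0 + sigma) * median))
        return low, high

    low, high = auto_canny_thresholds([10, 50, 120, 128, 130, 200, 250])
    print(low, high)   # 85 170
    ```

    Because both thresholds track the image's median brightness, they adapt to illumination changes in the same spirit as the proposed method.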

  18. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also incorporates the recovery coefficients (RC) of the imaging system, and has therefore been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process is repeated until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥5 ml and <5 ml, respectively). Also for the RC data, the comparison showed
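
    The RIThM loop (volume estimate → RC-corrected SBR → calibrated threshold → new volume, until convergence) can be sketched abstractly. All four callables below are toy stand-ins for the system's measurements and calibration curves, chosen so the iteration visibly converges:

    ```python
    def rithm(measure_volume, measure_sbr, recovery_correct, threshold_lookup,
              v0, tol=1e-3, max_iter=50):
        # iterate: volume estimate -> RC-corrected SBR -> calibrated threshold
        # -> re-thresholded volume, until the volume estimate stabilises
        v = v0
        thr = None
        for _ in range(max_iter):
            sbr = recovery_correct(measure_sbr(v), v)
            thr = threshold_lookup(sbr, v)
            v_new = measure_volume(thr)
            if abs(v_new - v) < tol:
                return v_new, thr
            v = v_new
        return v, thr

    # toy stand-ins: volume falls linearly with threshold; the calibration
    # nudges the threshold according to the current volume estimate
    mv = lambda thr: 20.0 - 20.0 * thr
    msbr = lambda v: 5.0                      # constant SBR (toy)
    rc = lambda sbr, v: sbr                   # identity recovery correction (toy)
    tl = lambda sbr, v: 0.5 + 0.01 * (v - 10.0)
    v, thr = rithm(mv, msbr, rc, tl, v0=15.0)
    print(round(v, 2), round(thr, 2))         # settles at volume 10.0, threshold 0.5
    ```

    Convergence here relies on the threshold-volume feedback being a contraction; a real implementation would use the measured threshold-volume calibration and RC curve in place of the lambdas.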

  19. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides the component flow data, such as airflows, temperatures, and pressures, required for sizing the components and computing weights. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class), and the two versions produce essentially identical results, as should be the case.

  20. Threshold corrections to the bottom quark mass revisited

    DOE PAGES

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart

    2015-05-19

    Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made in estimating the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass, but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.

  1. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, estimating the pose, motion, and inertia properties of an uncooperative object is a challenging task because of the lack of a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude, and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities, and across several simulations they show improvements over similar works in the literature that address the same problem. In addition, a video-processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia-reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia-ratio estimator to provide a complete tool for mass-property recognition.

  2. Estimating the critical immunity threshold for preventing hepatitis A outbreaks in men who have sex with men.

    PubMed

    Regan, D G; Wood, J G; Benevent, C; Ali, H; Smith, L Watchirs; Robertson, P W; Ferson, M J; Fairley, C K; Donovan, B; Law, M G

    2016-05-01

    Several outbreaks of hepatitis A in men who have sex with men (MSM) were reported in the 1980s and 1990s in Australia and other countries. An effective hepatitis A virus (HAV) vaccine has been available in Australia since 1994 and is recommended for high-risk groups including MSM. No outbreaks of hepatitis A in Australian MSM have been reported since 1996. In this study, we aimed to estimate HAV transmissibility in MSM populations in order to inform targets for vaccine coverage in such populations. We used mathematical models of HAV transmission in a MSM population to estimate the basic reproduction number (R0) and the probability of an HAV epidemic occurring as a function of the immune proportion. We estimated a plausible range for R0 of 1·71-3·67 for HAV in MSM and that sustained epidemics cannot occur once the proportion immune to HAV is greater than ~70%. To our knowledge this is the first estimate of R0 and the critical population immunity threshold for HAV transmission in MSM. As HAV is no longer endemic in Australia or in most other developed countries, vaccination is the only means of maintaining population immunity >70%. Our findings provide impetus to promote HAV vaccination in high-risk groups such as MSM.
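
    The ~70% figure follows from the standard herd-immunity threshold 1 - 1/R0 applied to the upper end of the estimated R0 range:

    ```python
    def critical_immunity(r0):
        # herd-immunity threshold: sustained epidemics cannot occur once the
        # immune fraction exceeds 1 - 1/R0
        return 1.0 - 1.0 / r0

    # plausible R0 range estimated for HAV transmission in MSM (1.71-3.67)
    for r0 in (1.71, 3.67):
        print(round(critical_immunity(r0), 2))    # 0.42, then 0.73 (~70%)
    ```

    Planning coverage for the upper end of the R0 range is the conservative choice, which is why the study's target sits near 70% rather than 42%.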

  3. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347

  4. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
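
    The contrast between weight tuning and threshold tuning can be illustrated with a one-sided quadratic penalty. The update rule below is a toy illustration of driving thresholds toward reference-DVH dose goals, not the actual TORA sub-problem:

    ```python
    def overdose_penalty(dose, thresholds, weight=1.0):
        # one-sided quadratic penalty: only voxels dosed above their threshold
        # contribute, so the thresholds (not the weight) encode the planning goal
        return weight * sum(max(0.0, d - t) ** 2 for d, t in zip(dose, thresholds))

    def tighten_thresholds(thresholds, dose, reference, step=0.5):
        # toy update: move each voxel's threshold toward its reference-DVH goal
        return [t + step * (r - d) for t, d, r in zip(thresholds, dose, reference)]

    dose      = [62.0, 58.0, 71.0]   # current plan (Gy), hypothetical voxels
    reference = [60.0, 60.0, 65.0]   # dose goals read off a reference DVH
    thr       = [60.0, 60.0, 65.0]
    print(overdose_penalty(dose, thr))           # 2.0**2 + 0 + 6.0**2 = 40.0
    thr = tighten_thresholds(thr, dose, reference)
    print([round(t, 1) for t in thr])            # [59.0, 61.0, 62.0]
    ```

    Updating thresholds keeps the objective in dose units, which is what makes threshold-driven tuning more intuitive than re-weighting penalties.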

  5. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System... Purchases at or under the simplified acquisition threshold. Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold, but not the...

  6. Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.

    PubMed

    Gilmour, Heather

    2016-11-16

    Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear if GAD is uniquely associated with the risk of suicide, or if psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older. As well, the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. For people with threshold GAD, past 12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2%, respectively). In multivariate models that controlled for sociodemographic factors, the odds of past 12-month suicide ideation among people with either past 12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled for, associations between threshold and subthreshold GAD and suicidal ideation were attenuated, but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.

  7. [Correlation between the thresholds acquired with ASSR and subjective thresholds in cochlear implants in prelingually deaf children].

    PubMed

    Wang, Z; Gu, J; Jiang, X J

    2017-04-20

    Objective: To examine the relationship between the auditory steady-state response (ASSR) threshold and the C-level and behavioral T-level in cochlear implants in prelingually deaf children. Method: One hundred and twelve children with Nucleus CI24R(CA) cochlear implants were divided into a residual-hearing group and a no-residual-hearing group on the basis of preoperative ASSR results. The two groups were compared on C-level and behavioral T-level one year after the operation. Result: C-level and behavioral T-level differed between the residual-hearing and no-residual-hearing groups (P<0.05 or P<0.01). Conclusion: Preoperative ASSR results can be used to estimate the effect of cochlear implantation, providing a reference for selecting the ear to operate on and a reasonable expectation for physicians and the patients' parents. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  8. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is however questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at the two sites are uncorrelated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used as well as another locally optimized literature threshold value (0.7). Large uncertainties in the predicted SCA of up to 24.1% exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50% in the evaluation period and also significantly improves the results at RCZ. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at a pixel size of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
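    The NDSI classification reduces to a band ratio plus a cutoff. The sketch below, with made-up reflectance values, shows how the choice between the standard 0.4 threshold and a locally calibrated value changes the snow mask:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized-difference snow index from green and SWIR reflectance."""
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    """Classify pixels as snow where NDSI exceeds the threshold."""
    return ndsi(green, swir) > threshold

# Toy reflectances: snow is bright in the green band and dark in SWIR.
green = np.array([0.8, 0.6, 0.3])
swir = np.array([0.1, 0.2, 0.25])

mask = snow_mask(green, swir)             # standard threshold 0.4
mask_local = snow_mask(green, swir, 0.7)  # locally calibrated threshold
```

    The middle pixel (NDSI = 0.5) flips between snow and snow free depending on the threshold, which is exactly the sensitivity the calibration in the paper targets.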

  9. Cost and threshold analysis of an HIV/STI/hepatitis prevention intervention for young men leaving prison: Project START.

    PubMed

    Johnson, A P; Macgowan, R J; Eldridge, G D; Morrow, K M; Sosman, J; Zack, B; Margolis, A

    2013-10-01

    The objectives of this study were to: (a) estimate the costs of providing a single-session HIV prevention intervention and a multi-session intervention, and (b) estimate the number of HIV transmissions that would need to be prevented for the intervention to be cost-saving or cost-effective (threshold analysis). Project START was evaluated with 522 young men aged 18-29 years released from eight prisons located in California, Mississippi, Rhode Island, and Wisconsin. Cost data were collected prospectively. Costs per participant were $689 for the single-session comparison intervention, and ranged from $1,823 to $1,836 for the Project START multi-session intervention. From the incremental threshold analysis, the multi-session intervention would be cost-effective if it prevented one HIV transmission for every 753 participants compared to the single-session intervention. Costs are comparable to those of other HIV prevention programs. Program managers can use these data to gauge the costs of initiating these HIV prevention programs in correctional facilities.
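    The incremental threshold arithmetic is simple division. The averted-cost figure below is back-computed from the paper's reported 753-participant threshold purely for illustration, not taken from the paper:

```python
# Incremental threshold analysis: how many participants must the
# multi-session intervention cover per HIV transmission prevented
# for its extra cost to equal the cost of one averted transmission?
single_session_cost = 689.0     # cost per participant, comparison arm
multi_session_cost = 1823.0     # lower bound, multi-session arm
cost_per_transmission = 853_902.0  # hypothetical averted cost, chosen for illustration

incremental = multi_session_cost - single_session_cost
threshold_n = cost_per_transmission / incremental  # participants per prevented case
```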

  10. Computational analysis of thresholds for magnetophosphenes

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-10-01

    In international guidelines, basic restriction limits on the exposure of humans to low-frequency magnetic and electric fields are set with the objective of preventing the generation of phosphenes, visual sensations of flashing light not caused by light. Measured data on magnetophosphenes, i.e. phosphenes caused by a magnetically induced electric field on the retina, are available from volunteer studies. However, there is no simple way for determining the retinal threshold electric field or current density from the measured threshold magnetic flux density. In this study, the experimental field configuration of a previous study, in which phosphenes were generated in volunteers by exposing their heads to a magnetic field between the poles of an electromagnet, is computationally reproduced. The finite-element method is used for determining the induced electric field and current in five different MRI-based anatomical models of the head. The direction of the induced current density on the retina is dominantly radial to the eyeball, and the maximum induced current density is observed at the superior and inferior sides of the retina, which agrees with literature data on the location of magnetophosphenes at the periphery of the visual field. On the basis of computed data, the macroscopic retinal threshold current density for phosphenes at 20 Hz can be estimated as 10 mA m⁻² (−20% to +30%, depending on the anatomical model); this current density corresponds to an induced eddy current of 14 μA (−20% to +10%), and about 20% of this eddy current flows through each eye. The ICNIRP basic restriction limit for the induced electric field in the case of occupational exposure is not exceeded until the magnetic flux density is about two to three times the measured threshold for magnetophosphenes, so the basic restriction limit does not seem to be conservative. However, the reasons for the non-conservativeness are purely technical: removal of the highest 1% of electric
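    The order of magnitude of such retinal quantities can be checked with a crude uniform-loop induction model from Faraday's law. The flux density, loop radius, and tissue conductivity below are rough illustrative assumptions, not values from the FEM study:

```python
import math

def induced_e_field(f_hz, b_tesla, radius_m):
    """Peak electric field (V/m) induced along a circular loop of radius r
    by a uniform sinusoidal flux density B: E = pi * f * B * r."""
    return math.pi * f_hz * b_tesla * radius_m

f = 20.0   # Hz, near the frequency of maximum phosphene sensitivity
b = 0.01   # T, rough scale of measured magnetophosphene thresholds (assumed)
r = 0.09   # m, effective head-loop radius (assumed)

e = induced_e_field(f, b, r)  # V/m
j = 0.1 * e * 1e3             # mA/m^2, with an assumed conductivity of 0.1 S/m
```

    With these assumptions the loop model lands within the same order of magnitude as the paper's estimated retinal threshold current density, despite ignoring all anatomical detail.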

  11. Estimating proportions of objects from multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Lewis, J. T.; Pentland, A. P.

    1975-01-01

    Progress is reported in developing and testing methods of estimating, from multispectral scanner data, proportions of target classes in a scene when there are a significant number of boundary pixels. Procedures were developed to exploit: (1) prior information concerning the number of object classes normally occurring in a pixel, and (2) spectral information extracted from signals of adjoining pixels. Two algorithms, LIMMIX and nine-point mixtures, are described along with supporting processing techniques. An important by-product of the procedures, in contrast to the previous method, is that they are often appropriate when the number of spectral bands is small. Preliminary tests on LANDSAT data sets, where target classes were (1) lakes and ponds, and (2) agricultural crops, were encouraging.

  12. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    NASA Astrophysics Data System (ADS)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the question of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value is a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings indicate that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate over time to reflect their assessment of the health of the general economy.
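    The regime-switching rule has a simple closed form. In the sketch below the coefficients are illustrative placeholders, not the paper's estimates, and the threshold is held fixed rather than treated as a latent AR process:

```python
def taylor_rate(inflation, output_gap, unemployment, threshold,
                coef_recession=(0.8, 1.0), coef_expansion=(1.5, 0.5),
                r_star=2.0, pi_star=2.0):
    """Threshold Taylor rule sketch: the responses to the inflation gap and
    output gap switch when unemployment crosses the threshold.
    All coefficient values are illustrative, not estimated."""
    a_pi, a_y = coef_recession if unemployment > threshold else coef_expansion
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

# Expansion regime: stronger inflation response, weaker output response.
r1 = taylor_rate(inflation=3.0, output_gap=1.0, unemployment=4.5, threshold=6.0)
# Recession regime: weaker inflation response, stronger output response.
r2 = taylor_rate(inflation=3.0, output_gap=-2.0, unemployment=8.0, threshold=6.0)
```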

  13. The hockey-stick method to estimate evening dim light melatonin onset (DLMO) in humans.

    PubMed

    Danilenko, Konstantin V; Verevkin, Evgeniy G; Antyufeev, Viktor S; Wirz-Justice, Anna; Cajochen, Christian

    2014-04-01

    The onset of melatonin secretion in the evening is the most reliable and most widely used index of circadian timing in humans. Saliva (or plasma) is usually sampled every 0.5-1 hours under dim-light conditions in the evening 5-6 hours before usual bedtime to assess the dim-light melatonin onset (DLMO). For many years, attempts have been made to find a reliable objective determination of melatonin onset time by either fixed or dynamic threshold approaches. The hockey-stick algorithm developed here, implemented as an interactive computer-based tool, fits the evening melatonin profile by a piecewise linear-parabolic function represented as a straight line switching to the branch of a parabola. The switch point is considered to reliably estimate melatonin rise time. We applied the hockey-stick method to 109 half-hourly melatonin profiles to assess the DLMOs and compared these estimates to visual ratings from three experts in the field. The DLMOs of 103 profiles were considered to be clearly quantifiable. The hockey-stick DLMO estimates were on average 4 minutes earlier than the experts' estimates, with a range of -27 to +13 minutes; in 47% of the cases the difference fell within ±5 minutes, and in 98% within -20 to +13 minutes. The raters' and hockey-stick estimates showed poor accordance with DLMOs defined by threshold methods. Thus, the hockey-stick algorithm is a reliable objective method to estimate melatonin rise time, which does not depend on a threshold value and is free from errors arising from differences in subjective circadian phase estimates. The method is available as a computerized program that can be easily used in research settings and clinical practice for either salivary or plasma melatonin values.
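    A minimal version of the piecewise linear-parabolic fit can grid-search the switch point and solve the remaining coefficients by least squares. This is a simplified sketch of the published algorithm, run on a synthetic noise-free profile:

```python
import numpy as np

def hockey_stick(t, y):
    """Fit a straight line that switches to a parabolic branch at t0,
    continuous at t0: y = a + b*t + c*max(t - t0, 0)^2.
    Grid-search t0 over interior sample times; least squares for a, b, c.
    Returns (t0, sse)."""
    best_t0, best_sse = None, np.inf
    for t0 in t[1:-1]:
        u = np.clip(t - t0, 0, None) ** 2  # parabolic-branch regressor
        X = np.column_stack([np.ones_like(t), t, u])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if sse < best_sse:
            best_t0, best_sse = t0, sse
    return best_t0, best_sse

# Synthetic evening profile: flat baseline, parabolic rise after 21:00.
t = np.arange(18.0, 24.5, 0.5)
y = 1.0 + np.where(t > 21, 4 * (t - 21) ** 2, 0.0)
t0, _ = hockey_stick(t, y)  # estimated melatonin rise time
```

    On this noise-free profile the grid search recovers the true switch point exactly; the published method additionally handles noisy profiles and interactive inspection.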

  14. Translucency thresholds for dental materials.

    PubMed

    Salas, Marianne; Lucena, Cristina; Herrera, Luis Javier; Yebra, Ana; Della Bona, Alvaro; Pérez, María M

    2018-05-12

    To determine the translucency acceptability and perceptibility thresholds for dental resin composites using the CIEDE2000 and CIELAB color difference formulas. A 30-observer panel performed perceptibility and acceptability judgments on 50 pairs of resin composite discs (diameter: 10 mm; thickness: 1 mm). Disc pair differences for the Translucency Parameter (ΔTP) were calculated using both color difference formulas (ΔTP00 ranged from 0.11 to 7.98, and ΔTPab ranged from 0.01 to 12.79). A Takagi-Sugeno-Kang (TSK) fuzzy approximation was used as the fitting procedure. From the resultant fitting curves, the 95% confidence intervals were estimated and the 50:50% translucency perceptibility and acceptability thresholds (TPT and TAT) were calculated. Differences between thresholds were statistically analyzed using Student's t tests (α=0.05). The CIEDE2000 50:50% TPT was 0.62 and the TAT was 2.62. Corresponding CIELAB values were 1.33 and 4.43, respectively. Translucency perceptibility and acceptability thresholds were significantly different using both color difference formulas (p=0.01 for TPT and p=0.005 for TAT). The CIEDE2000 color difference formula provided a better data fit than the CIELAB formula. The visual translucency difference thresholds determined with the CIEDE2000 color difference formula can serve as reference values in the selection of resin composites and the evaluation of their clinical performance. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
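    A 50:50 threshold is read off a psychometric curve fitted to binary judgments. The sketch below replaces the paper's TSK fuzzy approximation with simple binning and linear interpolation, applied to a hypothetical observer model (all data synthetic):

```python
import numpy as np

def threshold_5050(dtp, accept, n_bins=8):
    """50:50 acceptability threshold: bin the translucency differences,
    take the fraction judged acceptable per bin, and interpolate where
    that fraction crosses 0.5. A simple stand-in for the TSK fuzzy fit."""
    edges = np.linspace(dtp.min(), dtp.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    frac = np.array([accept[(dtp >= lo) & (dtp < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    for i in range(n_bins - 1):
        if frac[i] >= 0.5 > frac[i + 1]:  # first downward crossing of 0.5
            s = (frac[i] - 0.5) / (frac[i] - frac[i + 1])
            return float(centers[i] + s * (centers[i + 1] - centers[i]))
    return None

rng = np.random.default_rng(4)
dtp = rng.uniform(0, 8, 20_000)                  # hypothetical ΔTP00 values
p_accept = 1 / (1 + np.exp((dtp - 2.6) / 0.8))   # assumed observer model
accept = rng.random(20_000) < p_accept
tat = threshold_5050(dtp, accept)                # 50:50 acceptability threshold
```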

  15. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
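    The objective being maximized is Otsu's between-class variance evaluated at a candidate threshold vector. On a tiny histogram, exhaustive search can stand in for the flower pollination metaheuristic (the histogram below is a made-up tri-modal example):

```python
import numpy as np
from itertools import combinations

def otsu_objective(hist, thresholds):
    """Between-class variance of a gray-level histogram split at the given
    thresholds; this is the objective the metaheuristic maximizes."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    bounds = [0, *sorted(thresholds), len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# Tri-modal toy histogram over 16 gray levels (modes near 1, 6, and 12).
hist = np.array([9, 30, 9, 1, 1, 8, 25, 8, 1, 1, 1, 9, 28, 9, 1, 1], float)
best = max(combinations(range(1, 16), 2),
           key=lambda th: otsu_objective(hist, th))
```

    Exhaustive search is feasible only for tiny problems like this; the combinatorial growth with more thresholds and 256 gray levels is exactly what motivates the metaheuristic in the paper.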

  16. Trazodone Increases the Respiratory Arousal Threshold in Patients with Obstructive Sleep Apnea and a Low Arousal Threshold

    PubMed Central

    Eckert, Danny J.; Malhotra, Atul; Wellman, Andrew; White, David P.

    2014-01-01

    Study Objectives: The effect of common sedatives on upper airway physiology and breathing during sleep in obstructive sleep apnea (OSA) has been minimally studied. Conceptually, certain sedatives may worsen OSA in some patients. However, sleep and breathing could improve with certain sedatives in patients with OSA with a low respiratory arousal threshold. This study aimed to test the hypothesis that trazodone increases the respiratory arousal threshold in patients with OSA and a low arousal threshold. Secondary aims were to examine the effects of trazodone on upper airway dilator muscle activity, upper airway collapsibility, and breathing during sleep. Design: Patients were studied on 4 separate nights according to a within-subjects cross-over design. Setting: Sleep physiology laboratory. Patients: Seven patients with OSA and a low respiratory arousal threshold. Interventions: In-laboratory polysomnograms were obtained at baseline and after 100 mg of trazodone was administered, followed by detailed overnight physiology experiments under the same conditions. During physiology studies, continuous positive airway pressure was transiently lowered to measure arousal threshold (negative epiglottic pressure prior to arousal), dilator muscle activity (genioglossus and tensor palatini), and upper airway collapsibility (Pcrit). Measurements and Results: Trazodone increased the respiratory arousal threshold by 32 ± 6% (-11.5 ± 1.4 versus -15.3 ± 2.2 cmH2O, P < 0.01) but did not alter the apnea-hypopnea index (39 ± 12 versus 39 ± 11 events/h sleep, P = 0.94). Dilator muscle activity and Pcrit also did not systematically change with trazodone. Conclusions: Trazodone increases the respiratory arousal threshold in patients with obstructive sleep apnea and a low arousal threshold without major impairment in dilator muscle activity or upper airway collapsibility. However, the magnitude of change in arousal threshold was insufficient to overcome the compromised upper airway

  17. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method

    PubMed Central

    Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.

    2016-01-01

    Objective/Background Circadian rhythm sleep-wake disorders often manifest during the adolescent years. Measurement of circadian phase such as the Dim Light Melatonin Onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. Patients/Methods A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (7 samples). Three standard thresholds (the mean of the first 3 melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling rate DLMOs was defined as ± 1 h. Results and Conclusions Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227
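    With an absolute threshold, the DLMO computation reduces to finding the first upward threshold crossing and interpolating between samples. The melatonin profile below is hypothetical:

```python
import numpy as np

def dlmo(times, melatonin, threshold=4.0):
    """Dim light melatonin onset: time of the first upward crossing of the
    threshold, linearly interpolated between consecutive samples."""
    for i in range(1, len(times)):
        if melatonin[i - 1] < threshold <= melatonin[i]:
            frac = (threshold - melatonin[i - 1]) / (melatonin[i] - melatonin[i - 1])
            return times[i - 1] + frac * (times[i] - times[i - 1])
    return None  # no crossing within the sampling window

# Hourly samples, 5 h before to 1 h after a 22:00 bedtime (pg/mL, hypothetical).
times = np.array([17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0])
mel = np.array([0.5, 0.8, 1.5, 3.0, 6.0, 12.0, 18.0])

onset_3 = dlmo(times, mel, 3.0)  # absolute 3 pg/mL threshold
onset_4 = dlmo(times, mel, 4.0)  # absolute 4 pg/mL threshold
```

    Note that the two absolute thresholds already shift the estimated onset by about 20 minutes on this profile, which is why the choice of threshold method matters for the agreement analysis.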

  18. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  19. A study of the threshold method utilizing raingage data

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David

    1993-01-01

    The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
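    The threshold method itself is compact: estimate the coefficient S(τ) = E[R] / P(R > τ) from one regime and multiply it by the fractional coverage observed in another. The lognormal parameters below are illustrative choices sharing a shape parameter, not the Darwin estimates:

```python
import numpy as np

def threshold_coef(rates, tau):
    """Threshold-method coefficient S(tau): mean rain rate divided by the
    fraction of samples whose rate exceeds tau."""
    return rates.mean() / (rates > tau).mean()

rng = np.random.default_rng(1)
sigma = 1.2  # shared shape parameter (illustrative)
# Two regimes with different scale parameters (mm/h rain rates, synthetic):
premonsoon = rng.lognormal(mean=2.0, sigma=sigma, size=200_000)  # intense, isolated
monsoon = rng.lognormal(mean=1.2, sigma=sigma, size=200_000)     # widespread, lighter

tau = 10.0  # mm/h threshold
s_pre = threshold_coef(premonsoon, tau)
s_mon = threshold_coef(monsoon, tau)

# Area-average rain estimated from fractional coverage alone, using the
# coefficient calibrated on the other regime:
est = s_pre * (monsoon > tau).mean()
```

    For these particular scale parameters the two coefficients nearly coincide at τ = 10 mm/h, so the cross-regime estimate stays close to the true mean, mirroring the regime-insensitivity reported in the abstract.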

  20. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
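    Of the twelve algorithms, Ridler's clustering (isodata) method is easy to state: iterate the threshold to the midpoint of the foreground and background means until it stabilizes. A sketch on a synthetic two-class uptake image (intensity values are made up):

```python
import numpy as np

def ridler_threshold(image, tol=0.5, max_iter=100):
    """Ridler-Calvard (isodata) threshold: repeatedly set the threshold to
    the midpoint of the mean intensities above and below it."""
    t = image.mean()  # standard initialization
    for _ in range(max_iter):
        fg = image[image > t]
        bg = image[image <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Synthetic "hot sphere in background" intensity sample.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(20, 5, 5000),    # background activity
                      rng.normal(100, 10, 500)])  # hot object
t = ridler_threshold(img)
```

    Unlike the fixed 42%-of-maximum rule, the converged threshold depends only on the image's own intensity statistics, which is what makes the approach calibration-free.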

  2. Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold

    NASA Technical Reports Server (NTRS)

    Srivastava, M. K.; Temkin, A.

    1991-01-01

    The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered to a form that contrasts with that of the previous angle-independent model. A second energy regime is also identified, wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) = about 0.0003 and E(b) = about 0.25 eV.

  3. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
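    For a circular orbit over a spherical Earth, the time-in-latitude-band calculation mentioned above has a closed form: with sin λ = sin i · sin u and the argument of latitude u uniform in time, the fraction of an orbit spent between northern latitudes λ1 and λ2 is (asin(sin λ2 / sin i) − asin(sin λ1 / sin i)) / π. A sketch (the inclination value is illustrative):

```python
import math

def time_fraction_in_band(lat1_deg, lat2_deg, incl_deg):
    """Fraction of a circular orbit spent between two northern latitudes
    (0 <= lat1 < lat2), for inclination incl, assuming a spherical Earth."""
    si = math.sin(math.radians(incl_deg))
    clamp = lambda x: max(-1.0, min(1.0, x))
    s1 = clamp(math.sin(math.radians(lat1_deg)) / si)
    s2 = clamp(math.sin(math.radians(lat2_deg)) / si)
    return (math.asin(s2) - math.asin(s1)) / math.pi

# An orbit at 51.6 deg inclination lingers near its extreme latitudes:
f_low = time_fraction_in_band(0.0, 10.0, 51.6)    # near the equator
f_high = time_fraction_in_band(41.6, 51.6, 51.6)  # near the inclination limit
```

    This latitude weighting, combined with the population-per-latitude distribution, drives the casualty expectation; the paper's numerical method then adds the non-spherical-Earth corrections this sketch omits.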

  5. Estimating Drought Thresholds for Wheat in the Canadian Prairies Using Remote Sensing Products

    NASA Astrophysics Data System (ADS)

    Munoz Hernandez, A.

    2013-12-01

    Droughts affect millions of people around the world, and depending on their duration and intensity, crops, cattle, and ecosystems can be decimated. One of the most susceptible economic sectors to drought is agriculture. Planners in the agricultural sector understand that drought conditions translate into lower yields, and subsequently reduced profits, but the relationship between drought thresholds and economic impacts remains unclear. This project focuses on estimating the Standardized Precipitation Index (SPI) for the Palliser Triangle to develop an understanding of the relationship between droughts and economic impacts on the production of wheat. The Palliser Triangle is a semi-arid region that experiences severe episodic droughts and is located primarily within two provinces: Alberta and Saskatchewan. The region supports a variety of crops including grains, oilseed, and forage crops, but predominantly wheat. The SPI is a probability index based entirely on precipitation deficits that identifies drought conditions with negative values and wet conditions with positive values. For this project, the SPI was estimated on a monthly basis for a period of thirty-four years utilizing precipitation data from the North American Land Data Assimilation System (NLDAS) with a resolution of 1/8 degree. Agricultural data were collected from Statistics Canada, Agriculture Division on a yearly basis for each agricultural district located within the study area. The estimated SPI values were compared with the yield reduction of wheat for a period of thirty years using linear regression. The combination of highest r-squared and lowest standard error was selected. The use of remote sensing products in Canada is optimal since the in-situ measurement networks are very sparse. However, selecting the appropriate satellite products is challenging. The Tropical Rainfall Measuring Mission (TRMM) has been successfully used to improve the understanding of precipitation within
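    The SPI maps a precipitation total's cumulative probability to a standard normal quantile; operationally a gamma distribution is fitted first, but a rank-based version conveys the idea without a distribution fit. All data below are synthetic:

```python
from statistics import NormalDist
import numpy as np

def spi(precip):
    """Simplified SPI: empirical (rank-based) CDF mapped to standard normal
    quantiles. The operational SPI fits a gamma distribution instead; the
    rank-based version shown here needs no fit."""
    n = len(precip)
    ranks = np.argsort(np.argsort(precip)) + 1  # 1..n
    cdf = (ranks - 0.5) / n                     # plotting-position probabilities
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in cdf])

rng = np.random.default_rng(3)
monthly = rng.gamma(shape=2.0, scale=30.0, size=408)  # 34 years of monthly totals
z = spi(monthly)
drought_months = (z < -1.0).sum()  # months in at least moderate drought
```

    Negative SPI values flag drier-than-median months, so a yield-reduction regression like the one in the abstract can be run directly against z.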

  6. Extended high-frequency thresholds in college students: effects of music player use and other recreational noise.

    PubMed

    Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K

    2013-09-01

    Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use in monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 dB HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American

  7. Assessing the Electrode-Neuron Interface with the Electrically Evoked Compound Action Potential, Electrode Position, and Behavioral Thresholds.

    PubMed

    DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg

    2016-06-01

    Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.

  8. Threshold analysis of reimbursing physicians for the application of fluoride varnish in young children.

    PubMed

    Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L

    2013-01-01

    Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration to better inform decisions - specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. Threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold to attain cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, which was below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
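The "alignment" arithmetic described in this record (annualizing a quarterly treatment likelihood, multiplying probabilities, and applying the 1.5x factor to obtain the cost-savings threshold) can be sketched as follows. The input probabilities are hypothetical placeholders, not the Indiana Medicaid claims values or the Quiñonez et al. model parameters.

```python
def annualize(quarterly_prob):
    """Convert a quarterly probability of an event into an annual probability,
    assuming independence across quarters."""
    return 1.0 - (1.0 - quarterly_prob) ** 4

def cost_savings_threshold(p_cavitation, p_restorative_given_cavitation):
    """Restorative-service utilization rate above which varnish reimbursement
    is cost-saving, per the 1.5x rule described in the abstract."""
    return p_cavitation * p_restorative_given_cavitation * 1.5

# Hypothetical inputs for illustration only
p_cav = 0.20                      # assumed annual probability of a cavitated lesion
p_rest = annualize(0.05)          # assumed quarterly treatment likelihood, annualized
threshold = cost_savings_threshold(p_cav, p_rest)
print(f"cost-savings threshold = {threshold:.3f}")
```

A state's observed utilization rate would then be compared against this threshold, as the abstract does for the 1-2 and 3-5 year age groups.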

  9. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches subregions of the image for segmentation, instead of treating the full image as a single region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by that threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted a to-be-determined (TBD) region and is processed in the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. The new TBD region is then processed in the same manner. The process stops when the difference between Otsu's thresholds computed in successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
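A minimal sketch of the iterative triclass procedure described above, assuming 8-bit gray levels. For brevity it returns only the final threshold of the shrinking TBD region; a full implementation would also accumulate the definite foreground and background pixels found at each iteration into the final segmentation.

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's threshold for gray levels in [0, 255], via the histogram."""
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    w0 = np.cumsum(p)              # weight of the lower class at each candidate threshold
    mu = np.cumsum(p * levels)     # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))  # between-class variance
    return int(np.argmax(np.nan_to_num(sigma_b)))

def triclass_otsu(image, eps=1.0, max_iter=50):
    """Iterative triclass thresholding, sketched from the description above."""
    tbd = np.asarray(image, dtype=float).ravel()
    t = t_prev = None
    for _ in range(max_iter):
        if tbd.size < 2:
            break
        t = otsu_threshold(tbd)
        lo, hi = tbd[tbd <= t], tbd[tbd > t]
        if lo.size == 0 or hi.size == 0:
            break
        mu0, mu1 = lo.mean(), hi.mean()
        # pixels above the upper-class mean are definite foreground, pixels
        # below the lower-class mean definite background; the rest remain TBD
        tbd = tbd[(tbd >= mu0) & (tbd <= mu1)]
        if t_prev is not None and abs(t - t_prev) < eps:
            break
        t_prev = t
    return t
```

On a bimodal image the TBD region collapses quickly, so the loop typically terminates after a handful of iterations.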

  10. Rational use of Xpert testing in patients with presumptive TB: clinicians should be encouraged to use the test-treat threshold.

    PubMed

    Decroo, Tom; Henríquez-Trujillo, Aquiles R; De Weggheleire, Anja; Lynen, Lutgarde

    2017-10-11

    A recently published Ugandan study on tuberculosis (TB) diagnosis in HIV-positive patients with presumptive smear-negative TB showed that, of 90 patients who started TB treatment, 20% (18/90) had a positive Xpert MTB/RIF (Xpert) test, 24% (22/90) had a negative Xpert test, and 56% (50/90) were started without Xpert testing. Although Xpert testing was available, clinicians did not use it systematically. Here we aim to show the process of clinical decision-making more objectively. First, we estimated that the pre-test probability of TB, i.e. the prevalence of TB in smear-negative HIV-infected patients with signs of presumptive TB in Uganda, was 17%. Second, we argue that the treatment threshold, the probability of disease at which the utility of treating and not treating is the same, and above which treatment should be started, should be determined. In Uganda, the treatment threshold has not yet been formally established. In Rwanda, the calculated treatment threshold was 12%. Hence, one could argue that the threshold was reached without even considering additional tests. Still, Xpert testing can be useful when the probability of disease is above the treatment threshold, but only when a negative Xpert result can lower the probability of disease enough to cross the treatment threshold. This occurs when the pre-test probability is lower than the test-treat threshold, the probability of disease at which the utility of testing and the utility of treating without testing are the same. We estimated that the test-treat threshold was 28%. Finally, to show the effect of the presence or absence of arguments on the probability of TB, we use confirming and excluding power, and a log10 odds scale to combine arguments. If the pre-test probability is above the test-treat threshold, empirical treatment is justified, because even a negative Xpert will not lower the post-test probability below the treatment threshold.
However, Xpert testing for the diagnosis of TB should be performed
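The odds-scale reasoning in this record can be sketched as follows. The Xpert sensitivity and specificity used below are illustrative placeholders, not figures from the study; only the 17% pre-test probability and the 12% treatment threshold come from the abstract.

```python
def post_test_prob(pre_test_prob, likelihood_ratio):
    """Update a disease probability with a test result via Bayes on the odds scale."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical Xpert accuracy in this population (placeholders)
sens, spec = 0.70, 0.98
lr_neg = (1.0 - sens) / spec        # likelihood ratio of a negative result

pre = 0.17                          # pre-test probability from the abstract
post = post_test_prob(pre, lr_neg)
print(f"post-test probability after a negative Xpert: {post:.3f}")
```

Under these assumed accuracy figures, a negative Xpert pulls the probability well below a 12% treatment threshold, which is exactly the situation in which testing (rather than empirical treatment) is informative.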

  11. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background: Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of the squares of 1-sensitivity and 1-specificity. Results: There can be divergence in the thresholds chosen by different estimators. The cut-point selected by a given estimator depends on the relationship between the cut-points in ROC space and the contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion: Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve.
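The sum-of-squares estimator described above reduces to picking the candidate cut-point closest to the top-left corner (sensitivity = 1, specificity = 1) of ROC space. A minimal sketch with toy change-score thresholds:

```python
import numpy as np

def min_sum_of_squares_cutpoint(cutpoints, sensitivity, specificity):
    """Pick the ROC cut-point minimizing (1-sens)^2 + (1-spec)^2,
    i.e. the point closest to the top-left corner of ROC space."""
    sens = np.asarray(sensitivity, dtype=float)
    spec = np.asarray(specificity, dtype=float)
    d2 = (1.0 - sens) ** 2 + (1.0 - spec) ** 2
    return cutpoints[int(np.argmin(d2))]

# toy example with three candidate change-score thresholds
cuts = [2, 5, 8]
sens = [0.95, 0.80, 0.60]
spec = [0.55, 0.75, 0.90]
print(min_sum_of_squares_cutpoint(cuts, sens, spec))  # → 5
```

Because the selection contour is a circle centred on (0, 1) in ROC space, the chosen cut-point does not depend on the curve's asymmetry, unlike estimators based on straight-line contours such as Youden's index.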

  12. Dependence of cavitation, chemical effect, and mechanical effect thresholds on ultrasonic frequency.

    PubMed

    Thanh Nguyen, Tam; Asakura, Yoshiyuki; Koda, Shinobu; Yasuda, Keiji

    2017-11-01

    Cavitation, chemical effect, and mechanical effect thresholds were investigated over a wide frequency range from 22 to 4880 kHz. Each threshold was measured in terms of sound pressure at the fundamental frequency. Broadband noise emitted from acoustic cavitation bubbles was detected by a hydrophone to determine the cavitation threshold. Potassium iodide oxidation caused by acoustic cavitation was used to quantify the chemical effect threshold. Ultrasonic erosion of aluminum foil was used to estimate the mechanical effect threshold. The cavitation, chemical effect, and mechanical effect thresholds increased with increasing frequency. The chemical effect threshold was close to the cavitation threshold at all frequencies. At low frequencies below 98 kHz, the mechanical effect threshold was nearly equal to the cavitation threshold; at high frequencies, however, the mechanical effect threshold was much higher than the cavitation threshold. In addition, the thresholds of the second harmonic and the first ultraharmonic signals were measured to detect bubble occurrence. The threshold of the second harmonic was close to the cavitation threshold below 1000 kHz. In contrast, the threshold of the first ultraharmonic was higher than the cavitation threshold below 98 kHz and close to the cavitation threshold at high frequencies. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The relationship between stereoacuity and stereomotion thresholds.

    PubMed

    Cumming, B G

    1995-01-01

    There are in principle at least two binocular sources of information that could be used to determine the motion of an object towards or away from an observer; such motion produces changes in binocular disparities over time and also generates different image velocities in the two eyes. It has been argued in the past that stereomotion is detected by a mechanism that is independent of that which detects static disparities. More recently this conclusion has been questioned. If stereomotion detection in fact depends upon detecting disparities, there should be a clear correlation between static stereo-detection thresholds and stereomotion thresholds. If the systems are separate, there need be no such correlation. Four types of threshold measurement were performed by means of random-dot stereograms: (1) static stereo detection/discrimination; (2) stereomotion detection in random-dot stereograms (temporally uncorrelated); (3) stereomotion detection in temporally correlated random-dot stereograms; and (4) binocular detection of frontoparallel motion. Three normal subjects and five subjects with unusually high stereoacuities were studied. In addition, two manipulations were performed that altered stereomotion thresholds: changes in mean disparity, and image defocus produced by positive spectacle lenses. Across subjects and conditions, stereomotion thresholds were well correlated with stereo-discrimination thresholds. Stereomotion was poorly correlated with binocular frontoparallel-motion thresholds. These results suggest that stereomotion is detected by means of registering changes in the output of the same disparity detectors that are used to detect static disparities.

  14. Bilevel thresholding of sliced image of sludge floc.

    PubMed

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms for determining the optimal bilevel thresholding value used to estimate the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. Based on an evaluation with luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and was chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing sludge floc structure.

  15. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at the thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low-amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels, and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body-wave magnitude m_b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the February 12, 2013 announced nuclear test.
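The single-channel core of such a detector is a sliding normalized cross-correlation between a template waveform and the continuous trace; a multichannel detector would stack this statistic across array channels before applying a trigger threshold. A minimal sketch on synthetic data (the signal, noise, and threshold choices below are illustrative, not from the study):

```python
import numpy as np

def correlation_statistic(template, data):
    """Sliding normalized cross-correlation of a template against a longer trace.
    Returns values in [-1, 1]; a detector triggers when the maximum exceeds
    a chosen threshold."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    out = np.empty(len(data) - m + 1)
    for i in range(len(out)):
        w = data[i:i + m]
        out[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
    return out

rng = np.random.default_rng(0)
noise = rng.normal(size=500)
sig = np.sin(np.linspace(0, 8 * np.pi, 100))   # toy "template" waveform
data = noise.copy()
data[200:300] += 3 * sig                        # buried repeat of the template
cc = correlation_statistic(sig, data)
print(int(np.argmax(cc)))                       # peaks near sample 200
```

The record's central difficulty is visible even in this toy: the trigger threshold must be set high enough that correlated background seismicity does not fire the detector, yet low enough to catch small-magnitude repeats.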

  16. Estimating Shape and Micro-Motion Parameter of Rotationally Symmetric Space Objects from the Infrared Signature

    PubMed Central

    Wu, Yabei; Lu, Huanzhang; Zhao, Fei; Zhang, Zhiyong

    2016-01-01

    Shape serves as an important additional feature for space target classification, complementary to those already available. Since different shapes lead to different projection functions, the projection property can be regarded as one kind of shape feature. In this work, the problem of estimating the projection function from the infrared signature of the object is addressed. We show that the projection function of any rotationally symmetric object can be approximately represented as a linear combination of base functions. Based on this fact, the signal model of the emissivity-area product sequence is constructed as a particular mathematical function of the linear coefficients and micro-motion parameters. A least squares estimator is then proposed to estimate the projection function and micro-motion parameters jointly. Experiments validate the effectiveness of the proposed method. PMID:27763500
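Once the micro-motion phase is fixed, estimating the linear coefficients of the base functions is an ordinary linear least squares problem. A hedged sketch with placeholder base functions (low-order polynomials of an assumed, known micro-motion phase; the paper derives its own bases from the object geometry and estimates the micro-motion parameters jointly rather than assuming them):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)            # normalized observation times
phase = np.cos(2 * np.pi * 2.0 * t)       # assumed known micro-motion phase

# design matrix of base functions evaluated at each observation time
A = np.column_stack([np.ones_like(t), phase, phase ** 2])
true_coeffs = np.array([1.0, 0.5, 0.25])
y = A @ true_coeffs + 0.01 * rng.normal(size=t.size)   # noisy emissivity-area sequence

est_coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)     # linear least squares fit
print(np.round(est_coeffs, 2))                          # close to true_coeffs
```

Joint estimation of micro-motion parameters would wrap this linear solve inside a nonlinear search over the phase parameters.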

  17. Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.

    PubMed

    Saiki, Jun; Miyatsuji, Hirofumi

    2009-03-23

    Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.

  18. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation.

    PubMed

    Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S

    2007-11-01

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. 
Phantom motion tests yielded coincidence of simulation

  19. Feature space trajectory for distorted-object classification and pose estimation in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Shenoy, Rajesh

    1997-10-01

    Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors and the number of dominant eigenvectors per class, and for determining training- and test-set compatibility.

  20. Energy thresholds of discrete breathers in thermal equilibrium and relaxation processes.

    PubMed

    Ming, Yi; Ling, Dong-Bo; Li, Hui-Min; Ding, Ze-Jun

    2017-06-01

    So far, only the energy thresholds of single discrete breathers in nonlinear Hamiltonian systems have been analytically obtained. In this work, the energy thresholds of discrete breathers in thermal equilibrium and the energy thresholds of long-lived discrete breathers which can remain after a long time relaxation are analytically estimated for nonlinear chains. These energy thresholds are size dependent. The energy thresholds of discrete breathers in thermal equilibrium are the same as the previous analytical results for single discrete breathers. The energy thresholds of long-lived discrete breathers in relaxation processes are different from the previous results for single discrete breathers but agree well with the published numerical results known to us. Because real systems are either in thermal equilibrium or in relaxation processes, the obtained results could be important for experimental detection of discrete breathers.

  1. Discriminating the precipitation phase based on different temperature thresholds in the Songhua River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao

    2018-06-01

    Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal thresholds ranged from 1.5 to 4.0 °C; 19, 17, and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C, and 3.5 °C, respectively, together accounting for 90% of all stations. Compared with using a single basin-wide temperature threshold to discriminate snowfall, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT and when the temperature threshold was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C.
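The AT-method discrimination and the relative-error index Re can be sketched as follows. The toy daily records and the exact Re definition (signed relative difference of snowfall totals) are assumptions for illustration; only the 2.5 °C threshold comes from the record.

```python
import numpy as np

def discriminate_phase(temps_c, precip, threshold_c):
    """AT-method sketch: classify daily precipitation as snow when the air
    temperature is at or below the threshold; return discriminated snowfall."""
    is_snow = np.asarray(temps_c) <= threshold_c
    return np.where(is_snow, precip, 0.0)

def relative_error(est_snow, obs_snow):
    """Re: signed relative error of discriminated snowfall totals (assumed definition)."""
    return (est_snow.sum() - obs_snow.sum()) / obs_snow.sum()

# toy daily records: air temperature (deg C), total precipitation, observed snowfall
temps = np.array([-5.0, -1.0, 1.0, 2.0, 4.0, 8.0])
precip = np.array([3.0, 2.0, 1.0, 2.0, 5.0, 4.0])
obs_snow = np.array([3.0, 2.0, 1.0, 2.0, 0.0, 0.0])

est = discriminate_phase(temps, precip, threshold_c=2.5)
print(relative_error(est, obs_snow))
```

Sweeping `threshold_c` over the thirteen candidates and minimizing |Re| (together with Ep and R2) per station is the selection procedure the record describes.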

  2. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    PubMed

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras, yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha(-1) plant(-1)  m(2) in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m(-2), respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha(-1). Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
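The economic threshold in this record is a break-even density: the weed density at which the value of the yield protected by spraying equals the herbicide cost. A minimal sketch using the Galium aparine yield-loss rate from the abstract; the herbicide cost and wheat price below are hypothetical figures chosen so the example reproduces the reported 4 plants/m² threshold, not values from the trials.

```python
def economic_threshold(herbicide_cost_eur_ha, yield_loss_kg_ha_per_plant_m2,
                       wheat_price_eur_kg, efficacy=1.0):
    """Weed density (plants/m^2) at which the value of the protected yield
    equals the herbicide cost: a simplified break-even calculation."""
    return herbicide_cost_eur_ha / (
        yield_loss_kg_ha_per_plant_m2 * wheat_price_eur_kg * efficacy)

# Galium aparine: 17.5 kg/ha yield loss per plant/m^2 (from the abstract);
# cost 14 EUR/ha and price 0.20 EUR/kg are hypothetical
print(round(economic_threshold(14.0, 17.5, 0.20), 1))  # → 4.0 plants/m^2
```

In site-specific weed control, cells of the field map with densities below this threshold are left unsprayed.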

  3. Impaired hand size estimation in CRPS.

    PubMed

    Peltz, Elena; Seifert, Frank; Lanz, Stefan; Müller, Rüdiger; Maihöfner, Christian

    2011-10-01

    A triad of clinical symptoms, ie, autonomic, motor and sensory dysfunctions, characterizes complex regional pain syndromes (CRPS). Sensory dysfunction comprises sensory loss or spontaneous and stimulus-evoked pain. Furthermore, a disturbance in the body schema may occur. In the present study, patients with CRPS of the upper extremity and healthy controls estimated their hand sizes on the basis of expanded or compressed schematic drawings of hands. In patients with CRPS we found an impairment in accurate hand size estimation; patients estimated their own CRPS-affected hand to be larger than it actually was when measured objectively. Moreover, overestimation correlated significantly with disease duration, neglect score, and increase of two-point-discrimination-thresholds (TPDT) compared to the unaffected hand and to control subjects' estimations. In line with previous functional imaging studies in CRPS patients demonstrating changes in central somatotopic maps, we suggest an involvement of the central nervous system in this disruption of the body schema. Potential cortical areas may be the primary somatosensory and posterior parietal cortices, which have been proposed to play a critical role in integrating visuospatial information. CRPS patients perceive their affected hand to be bigger than it is. The magnitude of this overestimation correlates with disease duration, decreased tactile thresholds, and neglect-score. Suggesting a disrupted body schema as the source of this impairment, our findings corroborate the current assumption of a CNS involvement in CRPS. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  4. Vascular and nerve damage in workers exposed to vibrating tools. The importance of objective measurements of exposure time.

    PubMed

    Gerhardsson, Lars; Balogh, Istvan; Hambert, Per-Arne; Hjortsberg, Ulf; Karlsson, Jan-Erik

    2005-01-01

    The aim of the present study was to compare the development of vibration white fingers (VWF) in workers in relation to different ways of exposure estimation, and their relationship to the standard ISO 5349, annex A. Nineteen vibration exposed (grinding machines) male workers completed a questionnaire followed by a structured interview including questions regarding their estimated hand-held vibration exposure. Neurophysiological tests such as fractionated nerve conduction velocity in hands and arms, vibrotactile perception thresholds and temperature thresholds were determined. The subjective estimation of the mean daily exposure-time to vibrating tools was 192 min (range 18-480 min) among the workers. The estimated mean exposure time calculated from the consumption of grinding wheels was 42 min (range 18-60 min), approximately a four-fold overestimation (Wilcoxon's signed ranks test, p<0.001). Thus, objective measurements of the exposure time, related to the standard ISO 5349, which in this case were based on the consumption of grinding wheels, will in most cases give a better basis for adequate risk assessment than self-exposure assessment.

  5. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider optimizing the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That approach takes neither link quality nor the delay requirement into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound on the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path, and Δ is the given upper bound on the transmission delay of the delivery path. If Δ is superpolynomially large, a linear programming-based (1+p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are large enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
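A simplified, centralized sketch of the dynamic-programming idea: allocate per-hop retransmission thresholds along a path so as to maximize the sum of per-hop delivery probabilities 1 − (1 − p_i)^{u_i}, subject to a total transmission budget standing in for the delay bound Δ. The link success probabilities and budget below are illustrative, and the paper's actual formulation is distributed and models per-slot timing.

```python
def optimal_thresholds(link_p, delay_budget, u_max):
    """DP over hops: best[d] = max total delivery probability using exactly d
    transmissions so far; each hop i is assigned a threshold u in [1, u_max]."""
    NEG = float("-inf")
    best = [0.0] + [NEG] * delay_budget
    choice = []
    for p in link_p:
        nxt = [NEG] * (delay_budget + 1)
        arg = [0] * (delay_budget + 1)
        for d in range(delay_budget + 1):
            for u in range(1, min(u_max, d) + 1):
                if best[d - u] == NEG:
                    continue
                val = best[d - u] + (1.0 - (1.0 - p) ** u)
                if val > nxt[d]:
                    nxt[d], arg[d] = val, u
        best = nxt
        choice.append(arg)
    # backtrack from the best achievable total budget
    d = max(range(delay_budget + 1), key=lambda k: best[k])
    thresholds = []
    for arg in reversed(choice):
        u = arg[d]
        thresholds.append(u)
        d -= u
    thresholds.reverse()
    return thresholds

# reliable first hop, lossy second hop: spare retries go to the lossy link
print(optimal_thresholds([0.9, 0.5], delay_budget=4, u_max=3))  # → [1, 3]
```

The example makes the motivation concrete: a uniform threshold (2, 2) yields total probability 0.99 + 0.75 = 1.74, while the link-quality-aware allocation (1, 3) achieves 0.90 + 0.875 = 1.775.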

  6. Objectives, Budgets, Thresholds, and Opportunity Costs-A Health Economics Approach: An ISPOR Special Task Force Report [4].

    PubMed

    Danzon, Patricia M; Drummond, Michael F; Towse, Adrian; Pauly, Mark V

    2018-02-01

    The fourth section of our Special Task Force report focuses on a health plan or payer's technology adoption or reimbursement decision, given the array of technologies, on the basis of their different values and costs. We discuss the role of budgets, thresholds, opportunity costs, and affordability in making decisions. First, we discuss the use of budgets and thresholds in private and public health plans, their interdependence, and connection to opportunity cost. Essentially, each payer should adopt a decision rule about what is good value for money given their budget; consistent use of a cost-per-quality-adjusted life-year threshold will ensure the maximum health gain for the budget. In the United States, different public and private insurance programs could use different thresholds, reflecting the differing generosity of their budgets and implying different levels of access to technologies. In addition, different insurance plans could consider different additional elements to the quality-adjusted life-year metric discussed elsewhere in our Special Task Force report. We then define affordability and discuss approaches to deal with it, including consideration of disinvestment and related adjustment costs, the impact of delaying new technologies, and comparative cost effectiveness of technologies. Over time, the availability of new technologies may increase the amount that populations want to spend on health care. We then discuss potential modifiers to thresholds, including uncertainty about the evidence used in the decision-making process. This article concludes by discussing the application of these concepts in the context of the pluralistic US health care system, as well as the "excess burden" of tax-financed public programs versus private programs. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Estimation of the discharges of the multiple water level stations by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi

    2016-04-01

    This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. The novelty of this presentation is to simultaneously minimize the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2, with thirteen rainfall stations and three water level stations. Nine flood events, which occurred from 2005 to 2012 and whose maximum discharges exceed 1,000 m3/s, are investigated. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm; the discharges are calculated for the parameter values sampled by a simplified version of Latin Hypercube sampling. The observed discharge is surrounded by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. 
In a sense, it is true that the discharge of a water...
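    The Latin Hypercube sampling step mentioned above might look like the following sketch: each parameter's range is cut into as many equal strata as there are samples, one point is drawn per stratum, and the strata are shuffled independently per parameter. The function name and interface are hypothetical, not the authors' implementation:

    ```python
    import random

    def latin_hypercube(n_samples, bounds, rng=None):
        """Simplified Latin Hypercube sample. `bounds` is a list of
        (low, high) tuples, one per parameter; returns a list of
        n_samples parameter vectors (illustrative helper)."""
        rng = rng or random.Random()
        n_params = len(bounds)
        samples = [[0.0] * n_params for _ in range(n_samples)]
        for j, (lo, hi) in enumerate(bounds):
            strata = list(range(n_samples))
            rng.shuffle(strata)          # independent shuffle per parameter
            width = (hi - lo) / n_samples
            for i in range(n_samples):
                # one uniform draw inside the assigned stratum
                samples[i][j] = lo + (strata[i] + rng.random()) * width
        return samples
    ```

    The stratification guarantees that every parameter's marginal range is covered evenly even with few samples, which is why it is popular for expensive model calibration.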

  8. THRESHOLD LOGIC.

    DTIC Science & Technology

    synthesis procedures; a ’best’ method is definitely established. (2) ’Symmetry Types for Threshold Logic’ is a tutorial exposition including a careful...development of the Goto-Takahasi self-dual type ideas. (3) ’Best Threshold Gate Decisions’ reports a comparison, on the 2470 7-argument threshold ...interpretation is shown best. (4) ’Threshold Gate Networks’ reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN

  9. Association between the electromyographic fatigue threshold and ventilatory threshold.

    PubMed

    Camata, T V; Lacerda, T R; Altimari, L R; Bortolloti, H; Fontes, E B; Dantas, J L; Nakamura, F Y; Abrão, T; Chacon-Mikahil, M P T; Moraes, A C

    2009-01-01

    The objective of this study was to verify the coincidence between the occurrence of the electromyographic fatigue threshold (EMGth) and the ventilatory threshold (Vth) in an incremental test on a cyclosimulator, as well as to compare calculations of the RMS of the EMG signal using different time windows. Thirteen male cyclists (73.7 +/- 12.4 kg and 174.3 +/- 6.2 cm) performed a ramp incremental test (TI) on a cyclosimulator until voluntary exhaustion. Before the start of each TI, active bipolar electrodes were placed over the superficial muscles of the quadriceps femoris (QF) of the right leg: rectus femoris (RF), vastus medialis (VM) and vastus lateralis (VL). The paired Student's t test, Pearson's correlation coefficient and the Bland-Altman method for determining the level of concordance were used for statistical analysis. The significance level adopted was P < 0.05. Although no significant differences were found between Vth and the EMGth calculated from windows of 2, 5, 10, 30 and 60 seconds in the studied muscles, the EMGth values determined with windows of 5 and 10 seconds seem most appropriate for calculating the RMS curve and determining EMGth by visual inspection.

  10. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading free-space optical channel.

  11. Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.

    PubMed

    Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M

    2014-03-20

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥ 65 during the months May-October for years 2000-2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.
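    The definition above (the heat threshold is the ATmax at which the mortality ratio begins an upward trend) can be illustrated with a simple two-segment breakpoint scan: model the (log) mortality ratio as flat below a candidate threshold and linear above it, and pick the breakpoint minimizing total squared error. This is a generic sketch, not the study's actual fitting procedure:

    ```python
    def find_threshold(temps, log_mr):
        """Return the temperature at which a flat-then-rising two-segment
        fit to (temps, log_mr) has minimal squared error (illustrative)."""
        def sse_flat(ys):
            if not ys:
                return 0.0
            m = sum(ys) / len(ys)
            return sum((y - m) ** 2 for y in ys)

        def sse_linear(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sxx = sum((x - mx) ** 2 for x in xs)
            if sxx == 0:
                return sse_flat(ys)
            b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
            a = my - b * mx
            return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

        pairs = sorted(zip(temps, log_mr))
        best = (float("inf"), None)
        for k in range(2, len(pairs) - 1):   # candidate breakpoints
            below, above = pairs[:k], pairs[k:]
            err = sse_flat([y for _, y in below]) + \
                  sse_linear([x for x, _ in above], [y for _, y in above])
            if err < best[0]:
                best = (err, above[0][0])
        return best[1]
    ```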

  12. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is imposed on the shot scheduling of nearby sources, so a large gain in acquisition efficiency can be obtained, but at the same time the recorded seismic data are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for the multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimate can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.

  13. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Archambault, L.; Starkschall, G.

    2007-11-15

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of the tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory-gated treatment planning and delivery. Such a gating phase interval is determined retrospectively from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and at treatment delivery. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the duty cycle determined above. The magnitude of the difference between the gate thresholds determined at simulation and at treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of...
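    Matching a delivery gate threshold to a target duty cycle, as described above, amounts to picking the duty-cycle quantile of the external displacement trace: the displacement value at or below which the desired fraction of monitor samples falls. A simplified, hypothetical stand-in for the iterative matching (variable names are illustrative):

    ```python
    def delivery_gate_threshold(displacements, duty_cycle):
        """Displacement value such that `duty_cycle` of the external
        monitor samples fall at or below it (simplified sketch)."""
        if not 0.0 < duty_cycle <= 1.0:
            raise ValueError("duty cycle must be in (0, 1]")
        ordered = sorted(displacements)
        k = max(0, min(len(ordered) - 1,
                       round(duty_cycle * len(ordered)) - 1))
        return ordered[k]

    def duty_cycle_for(displacements, threshold):
        """Fraction of samples gated 'beam on' at a given threshold."""
        return sum(d <= threshold for d in displacements) / len(displacements)
    ```

    Round-tripping the threshold through `duty_cycle_for` recovers the target duty cycle, which is the matching condition the iterative procedure enforces.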

  14. Comparison of in-air evoked potential and underwater behavioral hearing thresholds in four bottlenose dolphins (Tursiops truncatus).

    PubMed

    Finneran, James J; Houser, Dorian S

    2006-05-01

    Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude-modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to +18 dB.

  15. Objective Diagnosis of Cervical Cancer by Tissue Protein Profile Analysis

    NASA Astrophysics Data System (ADS)

    Patil, Ajeetkumar; Bhat, Sujatha; Rai, Lavanya; Kartha, V. B.; Chidangil, Santhosh

    2011-07-01

    Protein profiles of homogenized normal cervical tissue samples from hysterectomy subjects and cancerous cervical tissues from biopsy samples collected from patients with different stages of cervical cancer were recorded using High Performance Liquid Chromatography coupled with Laser Induced Fluorescence (HPLC-LIF). The protein profiles were subjected to Principal Component Analysis to derive statistically significant parameters. Diagnosis of sample type was carried out by matching three parameters: scores of factors, squared residuals, and Mahalanobis distance. ROC and Youden's index curves for calibration standards were used for objective estimation of the optimum threshold for decision making and performance.
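    Selecting an optimum decision threshold via Youden's index, as mentioned above, can be sketched as follows: sweep candidate cut points and keep the one maximizing sensitivity + specificity - 1. This is a generic implementation assuming higher scores indicate the positive class; the paper's exact scoring inputs are not reproduced here:

    ```python
    def youden_threshold(scores_pos, scores_neg):
        """Return (threshold, J) maximizing Youden's J statistic over
        all candidate cut points (generic illustration)."""
        best_j, best_t = -1.0, None
        for t in sorted(set(scores_pos) | set(scores_neg)):
            sens = sum(s >= t for s in scores_pos) / len(scores_pos)
            spec = sum(s < t for s in scores_neg) / len(scores_neg)
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_t = j, t
        return best_t, best_j
    ```

    A perfectly separating threshold yields J = 1; J near 0 means the classifier is no better than chance at any cut point.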

  16. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual...

  17. Automatic threshold optimization in nonlinear energy operator based spike detection.

    PubMed

    Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M

    2016-08-01

    In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems, where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% for the optimized NEO threshold versus 93.5% for the traditional NEO threshold.
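    For reference, the NEO detector with the traditional fixed threshold (a constant times the mean NEO output) can be sketched as below. The scaling constant c = 8 is a common empirical choice from the NEO literature; it is exactly this fixed, approximate choice that the paper's gradient-based optimization replaces:

    ```python
    def neo(x):
        """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
        return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

    def neo_threshold(x, c=8.0):
        """Detect spike sample indices where the NEO output exceeds
        c times its mean (the traditional fixed-threshold scheme)."""
        psi = neo(x)
        thr = c * sum(psi) / len(psi)
        # psi index n corresponds to sample n + 1 of the input signal
        return [n + 1 for n, p in enumerate(psi) if p > thr]
    ```

    NEO emphasizes signals that are simultaneously high in amplitude and frequency, which is why a single spike sample stands out sharply against low-frequency background.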

  18. The Sensitivity of Derived Estimates to the Measurement Quality Objectives for Independent Variables

    Treesearch

    Francis A. Roesch

    2005-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  19. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647

  20. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods, namely the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE, the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.

  1. Threshold quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding.

  2. Threshold Velocity for Saltation Activity in the Taklimakan Desert

    NASA Astrophysics Data System (ADS)

    Yang, Xinghua; He, Qing; Matimin, Ali; Yang, Fan; Huo, Wen; Liu, Xinchun; Zhao, Tianliang; Shen, Shuanghe

    2017-12-01

    The threshold velocity is an indicator of a soil's susceptibility to saltation activity and is also an important parameter in dust emission models. In this study, the saltation activity, atmospheric conditions, and soil conditions were measured from 1 August 2008 to 31 July 2009 in the Taklimakan Desert, China. The threshold velocity was estimated using the Gaussian time fraction equivalence method. At 2 m height, the 1-min averaged threshold velocity varied between 3.5 and 10.9 m/s, with a mean of 5.9 m/s. Threshold velocities between 4.5 and 7.5 m/s accounted for about 91.4% of all measurements. The average threshold velocity displayed clear seasonal variations in the following sequence: winter (5.1 m/s) < autumn (5.8 m/s) < spring (6.1 m/s) < summer (6.5 m/s). A regression equation for threshold velocity was established based on the relations between daily mean threshold velocity and air temperature, specific humidity, and soil volumetric moisture content. High or moderate positive correlations were found between threshold velocity and air temperature, specific humidity, and soil volumetric moisture content (air temperature r = 0.75; specific humidity r = 0.59; and soil volumetric moisture content r = 0.55; sample size = 251). In the study area, the observed horizontal dust flux was 4198.0 kg/m during the whole period of observation, while the horizontal dust flux calculated using the threshold velocity from the regression equation was 4675.6 kg/m. The correlation coefficient between the calculated result and the observations was 0.91. These results indicate that atmospheric and soil conditions should not be neglected in parameterization schemes for threshold velocity.

  3. Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds

    PubMed Central

    Harlan, Sharon L.; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B.; Morales Butler, Emmanuel J.; Ruddell, Benjamin L.; Ruddell, Darren M.

    2014-01-01

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2‒36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide. PMID:24658410

  4. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a combined algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which may be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and the detection decision. Weak tracks of space objects were simulated on real frames of the night sky obtained with a stationary telescope. It is shown that the limiting magnitude of the optoelectronic device is improved by almost 2m.

  5. Using CART to Identify Thresholds and Hierarchies in the Determinants of Funding Decisions.

    PubMed

    Schilling, Chris; Mortimer, Duncan; Dalziel, Kim

    2017-02-01

    There is much interest in understanding decision-making processes that determine funding outcomes for health interventions. We use classification and regression trees (CART) to identify cost-effectiveness thresholds and hierarchies in the determinants of funding decisions. The hierarchical structure of CART is suited to analyzing complex conditional and nonlinear relationships. Our analysis uncovered hierarchies where interventions were grouped according to their type and objective. Cost-effectiveness thresholds varied markedly depending on which group the intervention belonged to: lifestyle-type interventions with a prevention objective had an incremental cost-effectiveness threshold of $2356, suggesting that such interventions need to be close to cost saving or dominant to be funded. For lifestyle-type interventions with a treatment objective, the threshold was much higher at $37,024. Lower down the tree, intervention attributes such as the level of patient contribution and the eligibility for government reimbursement influenced the likelihood of funding within groups of similar interventions. Comparison between our CART models and previously published results demonstrated concurrence with standard regression techniques while providing additional insights regarding the role of the funding environment and the structure of decision-maker preferences.

  6. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
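    The two threshold conventions compared above can be sketched as follows. Linear interpolation to the first threshold crossing is a common DLMO convention, and conventions vary on whether the "3k" standard deviation divides by n or n-1; the population form (divide by n) used here is an assumption:

    ```python
    def dlmo(times, melatonin, threshold):
        """Linearly interpolated time at which melatonin first crosses
        `threshold` from below; None if it never does (sketch)."""
        for (t0, m0), (t1, m1) in zip(zip(times, melatonin),
                                      zip(times[1:], melatonin[1:])):
            if m0 < threshold <= m1:
                return t0 + (t1 - t0) * (threshold - m0) / (m1 - m0)
        return None

    def threshold_3k(first_three):
        """'3k' variable threshold: mean + 2 SD of the first three low
        daytime samples (population SD assumed here)."""
        m = sum(first_three) / 3
        var = sum((x - m) ** 2 for x in first_three) / 3
        return m + 2 * var ** 0.5
    ```

    With low, stable daytime samples the 3k threshold sits just above the daytime baseline, which is why it tends to fall below a 3 pg/mL fixed cut and yields earlier onsets.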

  7. Diversity Outbred Mice Identify Population-Based Exposure Thresholds and Genetic Factors that Influence Benzene-Induced Genotoxicity

    PubMed Central

    Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.

    2014-01-01

    Background Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity. Environ Health Perspect 123:237

  8. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in radio observation systems that use SAR, infrared, lidar, and television cameras. Intense non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images (blobs, blurred boundaries); in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding, and morphological analysis. A new kind of mask is used, open-ended at one side, which makes it possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results; it includes small-scale objects of different shape, size, and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the effectiveness of extraction for these fragments. The new method for adaptive threshold setting and control maximizes this effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
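
    The fragment-counting rule described above can be sketched in plain Python (a hypothetical toy re-implementation, not the authors' code): binarize at each candidate threshold, label 4-connected fragments, score the threshold by the fraction of above-threshold pixels falling in small isolated fragments, and keep the threshold that maximizes this effectiveness. The size cap `max_size` is an assumed parameter.

```python
from collections import deque

def fragments(binary):
    """4-connected components of a binary image (list of lists of 0/1)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def best_threshold(image, candidates, max_size=20):
    """Pick the threshold maximising the fraction of above-threshold
    pixels that lie in small isolated fragments (<= max_size pixels)."""
    best_t, best_eff = None, -1.0
    for t in candidates:
        binary = [[1 if v >= t else 0 for v in row] for row in image]
        total = sum(sum(row) for row in binary)
        if total == 0:
            continue
        kept = sum(len(c) for c in fragments(binary) if len(c) <= max_size)
        eff = kept / total
        if eff > best_eff:
            best_t, best_eff = t, eff
    return best_t, best_eff
```

    A higher threshold wins whenever it suppresses large background blobs while keeping the compact bright fragments.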

  9. On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Junran; Flagg, Cody; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne

    2015-12-01

    Current approaches to estimate threshold friction velocity (TFV) of soil particle movement, including both experimental and empirical methods, suffer from various disadvantages, and they are particularly not effective to estimate TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust, etc.), however, no studies have attempted to directly relate soil TFV to their spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near infrared (VIS-NIR, 350-2500 nm) spectral region, and to identify the best range of wavelengths or combinations of wavelengths to predict TFV. Threshold friction velocity of 31 soils, along with their reflectance spectra and texture were measured in the Mojave Desert, California and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400-700 nm) and the short-wavelength infrared (SWIR) area (1100-2500 nm). A partial least squares regression analysis (PLSR) confirmed the significant bands that were identified by correlation analysis. The PLSR further identified the strong relationship between the first-difference transformation and TFV at several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectrum range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or mapping of TFV using airborne/satellite sensors.

  10. Energy Switching Threshold for Climatic Benefits

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cao, L.; Caldeira, K.

    2013-12-01

    Climate change is one of the great challenges facing humanity now and in the future. Its most severe impacts may still be avoided if efforts are made to transform current energy systems (1). A transition from the global system of high greenhouse gas (GHG) emission electricity generation to low-GHG-emission energy technologies is required to mitigate climate change (2). Natural gas is increasingly seen as an option for the transition to renewable sources. However, recent research in energy and climate has raised questions about the climate implications of relying more heavily on natural gas. On one hand, a shift to natural gas is promoted as climate mitigation because it emits less carbon per unit energy than coal (3). On the other hand, the effect of switching to natural gas on the development of nuclear power and other renewable energies may offset the benefits of fuel switching (4). Cheap natural gas is causing both coal plants and nuclear plants to close in the US. The objective of this study is to measure and evaluate the threshold of energy switching for climatic benefits. We hypothesized that the threshold ratio of energy switching for climatic benefits is related to the GHG emission factors of the energy technologies, but that the relation is not linear. A model was developed to study the fuel-switching threshold for greenhouse gas emission reduction, and the transition from coal and nuclear electricity generation to natural gas electricity generation was analyzed as a case study. The results showed that: (i) the threshold ratio of multi-energy switching for climatic benefits changes with the GHG emission factors of the energy technologies; (ii) the mathematical relation between the threshold ratio of energy switching and the GHG emission factors is a curved-surface function; and (iii) the analysis of the energy switching threshold for climatic benefits can be used to support energy and climate policy decisions.
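
    One way to formalize a switching threshold is a simple linear emissions mix. This is my own sketch, not the paper's curved-surface model: replacing a coal/nuclear mix with gas cuts emissions only when the coal share of the displaced generation exceeds a break-even fraction set by the emission factors. The factors used in the test are illustrative round numbers in the style of life-cycle medians, not values from the paper.

```python
def switching_threshold(f_gas, f_coal, f_nuclear):
    """Minimum fraction of displaced electricity that must come from coal
    (the rest from nuclear) for a switch to natural gas to reduce emissions.
    Emission factors in gCO2-eq/kWh; a linear-mix sketch only."""
    return (f_gas - f_nuclear) / (f_coal - f_nuclear)
```

    At exactly the threshold fraction, the emissions of the displaced mix equal those of gas, so the switch is climate-neutral.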

  11. Extending the excluded volume for percolation threshold estimates in polydisperse systems: The binary disk system

    DOE PAGES

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...

    2017-06-01

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
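
    A minimal sketch of the excluded-area idea for the binary disk system (an assumed form for illustration, not necessarily the paper's exact expression): average the pairwise excluded areas π(r_i + r_j)² over the number fractions of the two species, then divide the approximate monodisperse invariant (total excluded area ≈ 4.5 for disks) by that average to estimate the critical number density.

```python
import math

def critical_density_binary_disks(r1, r2, x1, a_c=4.5):
    """Estimate the critical number density for percolation of a binary
    disk mixture via a number-fraction-weighted mean excluded area.
    a_c ~ 4.5 is the approximate monodisperse excluded-area invariant."""
    x2 = 1.0 - x1
    pairs = ((x1 * x1, r1 + r1), (2 * x1 * x2, r1 + r2), (x2 * x2, r2 + r2))
    mean_excl = sum(w * math.pi * s * s for w, s in pairs)
    return a_c / mean_excl
```

    In the monodisperse limit (r1 = r2) this collapses to the familiar single-species estimate a_c / (4πr²).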

  12. The fragmentation threshold and implications for explosive eruptions

    NASA Astrophysics Data System (ADS)

    Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.

    2003-04-01

    The fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or the amount of overpressure necessary to produce an explosive eruption.

  13. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    PubMed

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

    The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated that the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mA) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than the Staircase NFR definition, whereas smaller stimulus increments (2 mA) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
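
    The single-ascending-series procedure can be sketched as follows. The `response` callable is a hypothetical stand-in for the EMG reflex criterion, and the starting intensity, step, and ceiling are illustrative values, not the studies' protocol.

```python
def ascending_threshold(response, start=1.0, step=2.0, max_ma=50.0):
    """Single ascending series: raise the stimulus intensity by `step` mA
    until `response(intensity)` reports a reflex; return that intensity,
    or None if no reflex is elicited up to max_ma."""
    ma = start
    while ma <= max_ma:
        if response(ma):
            return ma
        ma += step
    return None
```

    Because the series can only stop at the first grid point at or above the true threshold, a coarser grid tends to stop higher, consistent with the finding that larger increments yield higher threshold estimates.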

  14. A longitudinal study on the ammonia threshold in junior cyclists

    PubMed Central

    Yuan, Y; Chan, K

    2004-01-01

    Objectives: To identify the effect of a one year non-specific training programme on the ammonia threshold of a group of junior cyclists and to correlate ammonia threshold with other common physiological variables. Methods: The cyclists performed tests at three time points (T1, T2, T3) during the year. Follow up tests were conducted every six months after the original test. Ammonia threshold was obtained from a graded exercise test with four-minute steps. Results: The relatively non-specific one year training programme was effective in inducing an increase in peak VO2 (60.6 (5.9), 65.9 (7.4), and 64.6 (6.5) ml/min/kg at T1, T2, and T3 respectively) and endurance time (18.3 (4.5), 20.1 (5.2), and 27.0 (6.1) minutes at T1, T2, and T3 respectively), but was not effective for the sprint related variables. Ammonia threshold, together with lactate threshold and ventilatory threshold, was not significantly different at the three test times. Only endurance time correlated significantly with ammonia threshold (r = 0.915, p = 0.001). Conclusions: The findings suggest that a relatively non-specific one year training programme does not modify the ammonia threshold of junior cyclists. The significant correlation between ammonia threshold and endurance time further confirms that ammonia threshold is a measure of the ability to sustain exercise at submaximal intensities. PMID:15039242

  15. Statistical approaches for the definition of landslide rainfall thresholds and their uncertainty using rain gauge and satellite data

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-05-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements.
This is in agreement
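
    Empirical ED thresholds of this kind commonly take the power-law form E = αD^β. A frequentist sketch of such a fit follows: ordinary least squares in log-log space, with the intercept lowered so that at most a chosen fraction of triggering events plot below the curve. This is a deliberate simplification of the LS/QR/NLS methods the paper compares, and the 5% exceedance level is an assumption.

```python
import math

def fit_ed_threshold(events, exceed_frac=0.05):
    """Fit a power-law rainfall threshold E = alpha * D**beta to
    (duration D, cumulated event rainfall E) pairs from triggered
    landslides: least squares in log-log space, then shift the
    intercept down to an empirical residual quantile."""
    logs = [(math.log10(d), math.log10(e)) for d, e in events]
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    beta = (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))
    # residuals of log E about the central fit, sorted ascending
    resid = sorted(y - (my + beta * (x - mx)) for x, y in logs)
    k = max(0, math.ceil(exceed_frac * n) - 1)
    alpha = 10 ** (my - beta * mx + resid[k])
    return alpha, beta
```

    On noise-free synthetic events the fit recovers the generating α and β exactly.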

  16. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements.
This is in agreement

  17. A multi-threshold sampling method for TOF-PET signal processing

    NASA Astrophysics Data System (ADS)

    Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.

    2009-04-01

    As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm³ LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ~18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ~9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ~300 ps coincidence timing resolution, ~14% energy resolution at 511 keV, and ~5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.

  18. Object Based Building Extraction and Building Period Estimation from Unmanned Aerial Vehicle Data

    NASA Astrophysics Data System (ADS)

    Comert, Resul; Kaplan, Onur

    2018-04-01

    The aim of this study is to examine whether it is possible to estimate building periods from building heights retrieved from unmanned aerial vehicle (UAV) data in urban-scale seismic performance assessment studies. For this purpose, a small area including eight residential reinforced concrete buildings was selected in the city center of Eskisehir (Turkey). In this paper, the possibilities of obtaining the building heights that are used in the estimation of building periods from UAV-based data have been investigated. The investigation was carried out in three stages: (i) building boundary extraction with Object-Based Image Analysis (OBIA); (ii) height calculation for the buildings of interest from the nDSM, with accuracy assessment against a terrestrial survey; and (iii) estimation of the building period using the height information. The average difference between the periods estimated from heights obtained from field measurements and from the UAV data is 2.86%, and the maximum difference is 13.2%. The results of this study show that building heights retrieved from UAV data can be used for building period estimation in urban-scale vulnerability assessments.
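
    Period estimation from height typically uses an empirical relation of the form T = Ct·H^x. The sketch below is an illustration only: the coefficient Ct = 0.075 and exponent 0.75 are taken from the Eurocode 8 formula for reinforced concrete moment frames and are not necessarily the relation used in the paper. Because T scales with H^0.75, a relative height error propagates to roughly 0.75 times that relative period error.

```python
def fundamental_period(height_m, ct=0.075, exponent=0.75):
    """Empirical fundamental period (s) from building height (m),
    T = Ct * H**exponent (Eurocode-8-style coefficients, illustrative)."""
    return ct * height_m ** exponent

def period_error_pct(h_field, h_uav):
    """Relative difference (%) between periods computed from a field-surveyed
    height and a UAV-derived height."""
    t_ref = fundamental_period(h_field)
    return abs(t_ref - fundamental_period(h_uav)) / t_ref * 100
```

    For example, a 4% height error yields only about a 3% period error, which helps explain why modest UAV height errors translate into small period differences.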

  19. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    PubMed

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
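
    The threshold-selection step can be sketched for the labelled case, as in the synthetic part of the study: sweep each candidate match-weight cut-off and keep the one maximizing the F-measure. The paper's contribution is estimating this quality curve without labels via an EM extension; the sketch below assumes true-match labels are known.

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def best_cutoff(scores):
    """scores: list of (match_weight, is_true_match) pairs. Sweep every
    observed weight as a cut-off and return (cut, F) maximising F-measure."""
    total_true = sum(1 for _, t in scores if t)
    best = (None, -1.0)
    for cut in sorted({w for w, _ in scores}):
        tp = sum(1 for w, t in scores if w >= cut and t)
        fp = sum(1 for w, t in scores if w >= cut and not t)
        fn = total_true - tp
        f = f_measure(tp, fp, fn)
        if f > best[1]:
            best = (cut, f)
    return best
```

    Pairs at or above the chosen cut-off are declared links; everything below is a non-link.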

  20. Computed radiography utilizing laser-stimulated luminescence: detectability of simulated low-contrast radiographic objects.

    PubMed

    Higashida, Y; Moribe, N; Hirata, Y; Morita, K; Doudanuki, S; Sonoda, Y; Katsuda, N; Hiai, Y; Misumi, W; Matsumoto, M

    1988-01-01

    Threshold contrasts of low-contrast objects in computed radiography (CR) images were compared with those of blue- and green-emitting screen-film systems by employing the 18-alternative forced choice (18-AFC) procedure. The dependence of the threshold contrast on the incident X-ray exposure and on the object size was studied. The results indicated that the threshold contrasts of the CR system were comparable to those of the blue and green screen-film systems; they decreased with increasing object size and increased with decreasing incident X-ray exposure. The increase in threshold contrast was small when the relative incident exposure decreased from 1 to 1/4, and large when the incident exposure was decreased further.

  1. [IR spectral-analysis-based range estimation for an object with small temperature difference from background].

    PubMed

    Fu, Xiao-Ning; Wang, Jie; Yang, Lin

    2013-01-01

    Passive ranging, in which the distance to an object is estimated from the transmission characteristics of its infrared radiation, is a typical passive technology and a hotspot in electro-optic countermeasures. Because no energy is transmitted during detection, this ranging technology significantly enhances the penetration and infrared-concealment capability of missiles or unmanned aerial vehicles. Given the state of existing passive ranging systems, and to overcome their shortcomings in ranging an oncoming target with a small temperature difference from the background, an improved distance estimation scheme was proposed. The article begins by introducing the concept of the signal transfer function, clarifies the working curve of the current algorithm, and points out that the estimated distance is not unique owing to the inherent nonlinearity of the working curve. A new distance calculation algorithm was obtained through a nonlinear correction technique: a ranging formula using sensing information at 3-5 and 8-12 μm combined with the background temperature and field meteorological conditions. The authors' study has shown that the ranging error can be kept at around the 10% level when the apparent temperature difference between target and background is within +/- 5 K and the error in estimating the background temperature is no more than +/- 15 K.

  2. Audibility threshold spectrum for prominent discrete tone analysis

    NASA Astrophysics Data System (ADS)

    Kimizuka, Ikuo

    2005-09-01

    To evaluate the annoyance of tonal components in noise emissions, ANSI S1.13 (for general purposes) and ISO 7779/ECMA-74 (dedicated to IT equipment) define two similar metrics: tone-to-noise ratio (TNR) and prominence ratio (PR). Using one or both of these parameters, noise with a sharp spectral peak is analyzed by high-resolution FFT and classified as prominent when it exceeds some criterion curve. Under the present procedures, however, this designation depends only on the spectral shape. To resolve this problem, the author proposes a threshold spectrum of human audibility. The spectrum is based on the reference threshold of hearing defined in ISO 389-7 and/or ISO 226. With this spectrum, one can objectively determine whether the noise peak in question is audible by simply comparing the peak amplitude of the noise emission with the corresponding threshold value. By applying the threshold, one can avoid overkill or unnecessary action on noise: a peak with an absolutely low amplitude is not audible.

  3. Subjective versus objective evening chronotypes in bipolar disorder.

    PubMed

    Gershon, Anda; Kaufmann, Christopher N; Depp, Colin A; Miller, Shefali; Do, Dennis; Zeitzer, Jamie M; Ketter, Terence A

    2018-01-01

    Disturbed sleep timing is common in bipolar disorder (BD). However, most research is based upon self-reports. We examined relationships between subjective versus objective assessments of sleep timing in BD patients versus controls. We studied 61 individuals with bipolar I or II disorder and 61 healthy controls. Structured clinical interviews assessed psychiatric diagnoses, and clinician-administered scales assessed current mood symptom severity. For subjective chronotype, we used the Composite Scale of Morningness (CSM) questionnaire, using original and modified (1, ¾, ⅔, and ½ SD below mean CSM score) thresholds to define evening chronotype. Objective chronotype was calculated as the percentage of nights (50%, 66.7%, 75%, or 90% of all nights) with sleep interval midpoints at or before (non-evening chronotype) vs. after (evening chronotype) 04:15:00 (4:15:00a.m.), based on 25-50 days of continuous actigraph data. BD participants and controls differed significantly with respect to CSM mean scores and CSM evening chronotypes using modified, but not original, thresholds. Groups also differed significantly with respect to chronotype based on sleep interval midpoint means, and based on the threshold of 75% of sleep intervals with midpoints after 04:15:00. Subjective and objective chronotypes correlated significantly with one another. Twenty-one consecutive intervals were needed to yield an evening chronotype classification match of ≥ 95% with that made using the 75% of sleep intervals threshold. Limited sample size/generalizability. Subjective and objective chronotype measurements were correlated with one another in participants with BD. Using population-specific thresholds, participants with BD had a later chronotype than controls. Copyright © 2017 Elsevier B.V. All rights reserved.
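
    The objective-chronotype rule reduces to a simple count over actigraphy nights. A minimal sketch, with sleep-interval midpoints expressed as hours after midnight (4.25 = 04:15); treating the comparison at the boundary as "at least `frac`" is my assumption:

```python
def evening_chronotype(midpoints_h, cutoff_h=4.25, frac=0.75):
    """Classify as evening chronotype when at least `frac` of nights have a
    sleep-interval midpoint strictly after the cut-off (default 04:15)."""
    late = sum(1 for m in midpoints_h if m > cutoff_h)
    return late / len(midpoints_h) >= frac
```

    With the default 75% rule, 8 late midpoints out of 10 classifies as evening type, while 7 out of 10 does not.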

  4. Estimation of the object orientation and location with the use of MEMS sensors

    NASA Astrophysics Data System (ADS)

    Sawicki, Aleksander; Walendziuk, Wojciech; Idzkowski, Adam

    2015-09-01

    The article presents the implementation of algorithms for estimating orientation in 3D space and the displacement of an object in 2D space. Moreover, general methods of representing orientation using Euler angles, quaternions, and rotation matrices are presented. The experimental part presents the results of a complementary filter implementation. In the study, an experimental microprocessor module based on the STM32F4 Discovery board and a myRIO hardware platform equipped with an FPGA were used. The attempts to track an object in two-dimensional space, shown in the final part of this article, were made using the equipment mentioned above.
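
    A common single-axis form of the complementary filter blends the integrated gyroscope rate (good at short time scales) with the accelerometer-derived angle (drift-free but noisy). The sketch below is a generic textbook version with illustrative gain and time step, not the article's exact implementation:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98, angle0=0.0):
    """Fuse a gyroscope rate (rad/s) with an accelerometer-derived angle (rad):
    the high-pass gyro term tracks fast motion, the low-pass accelerometer
    term slowly corrects gyro drift. Returns the angle estimate per sample."""
    angle, out = angle0, []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

    With a stationary sensor (zero gyro rate, constant accelerometer angle) the estimate converges exponentially to the accelerometer angle, which is the drift-correction behavior the filter is chosen for.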

  5. Threshold foraging behavior of baleen whales

    USGS Publications Warehouse

    Piatt, John F.; Methven, David A.

    1992-01-01

    We conducted hydroacoustic surveys for capelin Mallotus villosus in Witless Bay, Newfoundland, Canada, on 61 days during the summers of 1983 to 1985. On 32 of the days on which capelin surveys were conducted, we observed a total of 129 baleen whales, including 93 humpback Megaptera novaeangliae, 31 minke Balaenoptera acutorostrata, and 5 fin whales B. physalus. Although a few whales were observed when capelin schools were scarce, the majority (96%) of whales were observed when mean daily capelin densities exceeded 5 schools per linear km surveyed (range of means over 3 yr: 0.0 to 14.0 schools km-1). Plots of daily whale abundance (no. h-1 surveyed) vs daily capelin school density (mean no. schools km-1 surveyed) in each summer revealed that baleen whales have a threshold foraging response to capelin density. Thresholds were estimated using a simple iterative step-function model. Foraging thresholds of baleen whales (7.3, 5.0, and 5.8 schools km-1) varied between years in relation to the overall abundance of capelin schools in the study area during summer (means of 7.2, 3.3, and 5.3 schools km-1, respectively).
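
    The "simple iterative step-function model" can be sketched as an exhaustive scan over split points (my reconstruction of the idea, not the authors' code): choose the prey density at which switching between a low mean and a high mean of whale abundance minimizes the total squared error.

```python
def step_threshold(xs, ys):
    """Fit a one-step function (low mean below a threshold, high mean at or
    above it) by scanning every candidate split point and minimising the
    total squared error; returns the x-value at the best split."""
    pairs = sorted(zip(xs, ys))
    best_t, best_sse = None, float("inf")
    for i in range(1, len(pairs)):
        lo = [y for _, y in pairs[:i]]
        hi = [y for _, y in pairs[i:]]
        m_lo = sum(lo) / len(lo)
        m_hi = sum(hi) / len(hi)
        sse = (sum((y - m_lo) ** 2 for y in lo)
               + sum((y - m_hi) ** 2 for y in hi))
        if sse < best_sse:
            best_t, best_sse = pairs[i][0], sse
    return best_t
```

    On data with a clean jump, the fitted threshold lands exactly at the first high-response density.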

  6. Variation in the Hearing Threshold in Women during the Menstrual Cycle

    PubMed Central

    Souza, Dayse da Silva; Luckwu, Brunna; Andrade, Wagner Teobaldo Lopes de; Pessoa, Luciane Spinelli de Figueiredo; Nascimento, João Agnaldo do; Rosa, Marine Raquel Diniz da

    2017-01-01

    Introduction  The hormonal changes that occur during the menstrual cycle and their relationship with hearing problems have been studied. However, they have not been well explained. Objective  The objective of our study is to investigate the variation in hearing thresholds in women during the menstrual cycle. Method  We conducted a cohort and longitudinal study. It was composed of 30 volunteers, aged 18–39 years old, of which 20 were women during the phases of the menstrual cycle and 10 were men (control group) who underwent audiometry and impedance exams, to correlate the possible audiological changes in each phase of the menstrual cycle. Results  There were significant changes in hearing thresholds observed during the menstrual cycle phases in the group of women who used hormonal contraceptives and the group who did not use such contraceptives. Improved hearing thresholds were observed in the late follicular phase in the group who did not use hormonal contraceptives and the hearing thresholds at high frequencies were better. Throughout the menstrual cycle phases, the mean variation was 3.6 dB HL between weeks in the group who used hormonal contraceptives and 4.09 dB HL in the group who did not use them. Conclusions  The present study found that there may be a relationship between hearing changes and hormonal fluctuations during the menstrual cycle based on changes in the hearing thresholds of women. In addition, this study suggests that estrogen has an otoprotective effect on hearing, since the best hearing thresholds were found when estrogen was at its maximum peak. PMID:29018493

  7. Variation in the Hearing Threshold in Women during the Menstrual Cycle.

    PubMed

    Souza, Dayse da Silva; Luckwu, Brunna; Andrade, Wagner Teobaldo Lopes de; Pessoa, Luciane Spinelli de Figueiredo; Nascimento, João Agnaldo do; Rosa, Marine Raquel Diniz da

    2017-10-01

    Introduction  The hormonal changes that occur during the menstrual cycle and their relationship with hearing problems have been studied. However, they have not been well explained. Objective  The objective of our study is to investigate the variation in hearing thresholds in women during the menstrual cycle. Method  We conducted a cohort and longitudinal study. It was composed of 30 volunteers, aged 18-39 years old, of which 20 were women during the phases of the menstrual cycle and 10 were men (control group) who underwent audiometry and impedance exams, to correlate the possible audiological changes in each phase of the menstrual cycle. Results  There were significant changes in hearing thresholds observed during the menstrual cycle phases in the group of women who used hormonal contraceptives and the group who did not use such contraceptives. Improved hearing thresholds were observed in the late follicular phase in the group who did not use hormonal contraceptives and the hearing thresholds at high frequencies were better. Throughout the menstrual cycle phases, the mean variation was 3.6 dB HL between weeks in the group who used hormonal contraceptives and 4.09 dB HL in the group who did not use them. Conclusions  The present study found that there may be a relationship between hearing changes and hormonal fluctuations during the menstrual cycle based on changes in the hearing thresholds of women. In addition, this study suggests that estrogen has an otoprotective effect on hearing, since the best hearing thresholds were found when estrogen was at its maximum peak.

  8. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i))/√(i/p(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
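
    The thresholding rule above is mechanical enough to sketch in code. A minimal illustration, assuming standard-normal null Z-scores; restricting the search to the smallest 10% of P-values (`alpha0`) is a common convention in the HC literature, not something this abstract specifies:

```python
import numpy as np
from scipy.stats import norm

def hc_threshold(z, alpha0=0.10):
    """Higher criticism threshold from feature Z-scores (sketch)."""
    p = len(z)
    pvals = 2.0 * norm.sf(np.abs(z))       # two-sided P-values
    p_sorted = np.sort(pvals)              # order statistics pi_(i)
    k = max(1, int(alpha0 * p))            # search range (assumed convention)
    i = np.arange(1, k + 1)
    # HC objective: (i/p - pi_(i)) / sqrt(i/p * (1 - i/p))
    hc = (i / p - p_sorted[:k]) / np.sqrt(i / p * (1.0 - i / p))
    i_star = int(np.argmax(hc))
    # HC threshold: the |Z| corresponding to the maximizing P-value
    return norm.isf(p_sorted[i_star] / 2.0)
```

    Features whose |Z| exceeds the returned value are retained; under the rare/weak model this maximizer tends to sit lower than FDR-style cutoffs, which is the behavior the abstract highlights.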

  9. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let πi denote the two-sided P-value associated with the ith feature Z-score and π(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π(i))/√(i/p(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365

  10. No-threshold dose-response curves for nongenotoxic chemicals: Findings and applications for risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehan, Daniel M.

    2006-01-15

    We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from ∼1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate.
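
    The model described, a Michaelis-Menten response with a nonhormonal background term and an endogenous-dose offset, can be sketched as follows. The parameter names and the synthetic data are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_response(dose, rmax, k, d0, bg):
    """MM response with endogenous dose d0 and nonhormonal background bg.

    Illustrative parameterization: response rises from bg at zero total
    dose toward rmax as dose saturates; d0 shifts the curve so a nonzero
    response already exists at zero exogenous dose (no threshold).
    """
    d = dose + d0                      # exogenous + endogenous dose
    return bg + (rmax - bg) * d / (k + d)

# illustrative fit to noiseless synthetic data
dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = mm_response(dose, rmax=1.0, k=2.0, d0=0.2, bg=0.05)
popt, _ = curve_fit(mm_response, dose, resp,
                    p0=[1.0, 1.0, 0.1, 0.0], bounds=(0, np.inf))
```

    Because the hyperbola has nonzero slope at the origin, any added dose produces some added response, which is the no-threshold behavior the abstract reports.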

  11. Update on the recommended viewing protocol for FAXIL threshold contrast detail detectability test objects used in television fluoroscopy.

    PubMed

    Launders, J H; McArdle, S; Workman, A; Cowen, A R

    1995-01-01

    The significance of varying the viewing conditions that may affect the perceived threshold contrast of X-ray television fluoroscopy systems has been investigated. Factors investigated include the ambient room lighting and the viewing distance. The purpose of this study is to find the optimum viewing protocol with which to measure the threshold detection index. This is a particular problem when trying to compare the image quality of television fluoroscopy systems in different input field sizes. The results show that the viewing distance makes a significant difference to the perceived threshold contrast, whereas the ambient light conditions make no significant difference. Experienced observers were found to be capable of finding the optimum viewing distance for detecting details of each size, in effect using a flexible viewing distance. This allows the results from different field sizes to be normalized to account for both the magnification and the entrance air kerma rate differences, which in turn allow for a direct comparison of performance in different field sizes.

  12. Individual anaerobic threshold estimates maximal lactate steady state in temperate and hot climate.

    PubMed

    De Barros, Cristiano L Monteiro; Mendes, Thiago T; Mortimer, Lucas De Ávila Castro Fleury; Ramos, Guilherme Passos; Garcia, Emerson Silami

    2016-01-01

    The aim of the present study was to compare the power output at the maximal lactate steady state (MLSS) with the power output at the individual anaerobic threshold (IAT) and at the onset of blood lactate accumulation (OBLA) in both temperate (TEMP) (22 °C) and hot (HOT) (40 °C) climates. Eight young active males (23.9±2.4 yr, 75.9±7.3 kg and 47.8±4.9 mL/kg/min) were evaluated on a cycle ergometer and performed a progressive exercise test until fatigue to determine the IAT and OBLA and two to five 30-min exercise tests at constant intensities to determine MLSS at both temperatures. An ANOVA with repeated measures and Dunnett's post-hoc test was performed to compare results of IAT and OBLA to the variables at the MLSS in both climates, with MLSS being considered as the standard. At TEMP there was no difference between the power output at MLSS and IAT (180±11 W and 182±13 W, respectively); however, the intensity of the OBLA (154±11 W) was lower than MLSS (P<0.05). At HOT there was no difference between the power output at MLSS, IAT, and OBLA (148±11 W, 155±12 W and 144±11 W, respectively). These results showed that IAT is sensitive enough to estimate MLSS in both TEMP and HOT climates.

  13. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  14. Determination of Foraging Thresholds and Effects of Application on Energetic Carrying Capacity for Waterfowl

    PubMed Central

    2015-01-01

    Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. We describe implications on habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and

  15. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.

  16. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    2017-03-01

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
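
    A toy version of the marginal-likelihood comparison can be written for the simplest case: per-epoch flux estimates with Gaussian errors, a no-object hypothesis of pure noise, and an object-present hypothesis of one constant true flux under a broad Gaussian prior. The prior scale `tau` and the reduction to the flux-matching factor alone (ignoring the direction-matching factor) are simplifications of the paper's framework:

```python
import numpy as np

def log_bayes_factor(flux, sigma, tau=10.0):
    """log marginal-likelihood ratio, object-present vs noise-only.

    flux, sigma: per-epoch flux estimates and their Gaussian errors.
    H0: each flux is pure noise around 0.
    H1: one constant true flux F across epochs, prior F ~ N(0, tau^2).
    tau is an illustrative prior scale, not taken from the paper.
    """
    w = 1.0 / np.asarray(sigma, float) ** 2
    W = w.sum()
    b = (w * np.asarray(flux, float)).sum()     # precision-weighted sum
    post_var = 1.0 / (W + 1.0 / tau**2)         # posterior variance of F
    # Gaussian integral over F; per-epoch likelihood terms common to H0
    # and H1 cancel in the ratio
    return 0.5 * np.log(post_var / tau**2) + 0.5 * b**2 * post_var
```

    A positive value favors a genuine object; summing evidence this way over epochs is what lets many low-threshold detections accumulate into a confident one.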

  17. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  18. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
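
    The height-histogram idea can be illustrated with a much-simplified sketch: take the mode of the height histogram as the ground level and label nearby points as ground. The bin width and band tolerance are made-up parameters, and the Gibbs-Markov random field refinement the authors apply afterwards is omitted:

```python
import numpy as np

def ground_mask(heights, bin_width=0.1, band=0.3):
    """Label points as ground via a height histogram (simplified sketch).

    The ground height is taken as the mode of the histogram of point
    heights; points within `band` metres of it are labelled ground.
    """
    h = np.asarray(heights, float)
    bins = np.arange(h.min(), h.max() + bin_width, bin_width)
    counts, edges = np.histogram(h, bins=bins)
    peak = int(np.argmax(counts))
    mode = 0.5 * (edges[peak] + edges[peak + 1])  # centre of densest bin
    return np.abs(h - mode) <= band
```

    Points labelled non-ground would then feed the paper's 3D boundary estimation step.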

  19. Cochlear neuropathy and the coding of supra-threshold sound.

    PubMed

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  20. Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.

    PubMed

    Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L

    2017-08-01

    Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium, or High), but there is no standardisation regarding their use. The purpose of this study was to examine validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team-sport athletes. This was a validation study comparing actigraphy against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual-night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (−4.1 min), whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (−9.5 min). The standard error of the estimate was similar across all thresholds; total sleep time ∼25 min, sleep efficiency ∼4.5%, wake after sleep onset ∼21 min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
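
    Mean bias with 95% limits of the kind reported here is conventionally computed Bland-Altman style: the mean device-minus-reference difference, plus or minus 1.96 standard deviations of the differences. A sketch assuming that convention rather than the study's exact procedure:

```python
import numpy as np

def mean_bias(device, reference):
    """Mean bias and 95% limits of agreement (Bland-Altman style)."""
    d = np.asarray(device, float) - np.asarray(reference, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # sample SD of the differences
    return bias, (bias - half_width, bias + half_width)
```

    Run once per sleep measure and per software threshold, this reproduces the kind of bias-and-limits table the abstract summarizes.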

  1. CARA Risk Assessment Thresholds

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
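
    The four thresholds define an escalation ladder over the collision probability Pc. A schematic mapping from Pc to an action level; the numeric cutoffs are placeholders, since the abstract defines the thresholds' roles but not their values:

```python
def cara_action(pc, red=1e-4, yellow=1e-5):
    """Map a collision probability Pc to a CARA-style action level.

    `red` and `yellow` are placeholder values for illustration only;
    the abstract specifies the roles of the thresholds, not numbers.
    """
    if pc >= red:
        return "remediate"   # warning issued, remediation considered
    if pc >= yellow:
        return "analyze"     # event analysis indicated, seek more data
    return "monitor"         # below the analysis threshold
```

    The post-remediation and maneuver-screening thresholds would sit below and above these cutoffs respectively, sizing maneuvers and screening them under added uncertainty.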

  2. Age structure and mortality of walleyes in Kansas reservoirs: Use of mortality caps to establish realistic management objectives

    USGS Publications Warehouse

    Quist, M.C.; Stephen, J.L.; Guy, C.S.; Schultz, R.D.

    2004-01-01

    Age structure, total annual mortality, and mortality caps (maximum mortality thresholds established by managers) were investigated for walleye Sander vitreus (formerly Stizostedion vitreum) populations sampled from eight Kansas reservoirs during 1991-1999. We assessed age structure by examining the relative frequency of different ages in the population; total annual mortality of age-2 and older walleyes was estimated by use of a weighted catch curve. To evaluate the utility of mortality caps, we modeled threshold values of mortality by varying growth rates and management objectives. Estimated mortality thresholds were then compared with observed growth and mortality rates. The maximum age of walleyes varied from 5 to 11 years across reservoirs. Age structure was dominated (???72%) by walleyes age 3 and younger in all reservoirs, corresponding to ages that were not yet vulnerable to harvest. Total annual mortality rates varied from 40.7% to 59.5% across reservoirs and averaged 51.1% overall (SE = 2.3). Analysis of mortality caps indicated that a management objective of 500 mm for the mean length of walleyes harvested by anglers was realistic for all reservoirs with a 457-mm minimum length limit but not for those with a 381-mm minimum length limit. For a 500-mm mean length objective to be realized for reservoirs with a 381-mm length limit, managers must either reduce mortality rates (e.g., through restrictive harvest regulations) or increase growth of walleyes. When the assumed objective was to maintain the mean length of harvested walleyes at current levels, the observed annual mortality rates were below the mortality cap for all reservoirs except one. Mortality caps also provided insight on management objectives expressed in terms of proportional stock density (PSD). Results indicated that a PSD objective of 20-40 was realistic for most reservoirs. This study provides important walleye mortality information that can be used for monitoring or for inclusion into
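
    The catch-curve step can be sketched directly: regress ln(catch) on age over the descending limb, take Z as the negative slope, and convert to total annual mortality A = 1 − e^(−Z). The study used a weighted catch curve; ordinary least squares is used here for brevity:

```python
import numpy as np

def catch_curve_mortality(ages, catch):
    """Total annual mortality from a catch curve (unweighted sketch).

    Fits ln(catch) against age on the descending limb; Z is the
    negative slope and total annual mortality A = 1 - exp(-Z).
    """
    ages = np.asarray(ages, float)
    logc = np.log(np.asarray(catch, float))
    slope, _intercept = np.polyfit(ages, logc, 1)
    z = -slope                       # instantaneous total mortality
    return 1.0 - np.exp(-z)          # annual mortality rate A
```

    Comparing the estimated A against a mortality cap derived from growth rates and the management objective is the evaluation the study performs for each reservoir.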

  3. Identification of a Hemolysis Threshold That Increases Plasma and Serum Zinc Concentration.

    PubMed

    Killilea, David W; Rohner, Fabian; Ghosh, Shibani; Otoo, Gloria E; Smith, Lauren; Siekmann, Jonathan H; King, Janet C

    2017-06-01

    Background: Plasma or serum zinc concentration (PZC or SZC) is the primary measure of zinc status, but accurate sampling requires controlling for hemolysis to prevent leakage of zinc from erythrocytes. It is not established how much hemolysis can occur without changing PZC/SZC concentrations. Objective: This study determines a guideline for the level of hemolysis that can significantly elevate PZC/SZC. Methods: The effect of hemolysis on PZC/SZC was estimated by using standard hematologic variables and mineral content. The calculated hemolysis threshold was then compared with results from an in vitro study and a population survey. Hemolysis was assessed by hemoglobin and iron concentrations, direct spectrophotometry, and visual assessment of the plasma or serum. Zinc and iron concentrations were determined by inductively coupled plasma spectrometry. Results: A 5% increase in PZC/SZC was calculated to result from the lysis of 1.15% of the erythrocytes in whole blood, corresponding to ∼1 g hemoglobin/L added into the plasma or serum. Similarly, the addition of simulated hemolysate to control plasma in vitro caused a 5% increase in PZC when hemoglobin concentrations reached 1.18 ± 0.10 g/L. In addition, serum samples from a population nutritional survey were scored for hemolysis and analyzed for changes in SZC; samples with hemolysis in the range of 1-2.5 g hemoglobin/L showed an estimated increase in SZC of 6% compared with nonhemolyzed samples. Each approach indicated that a 5% increase in PZC/SZC occurs at ∼1 g hemoglobin/L in plasma or serum. This concentration of hemoglobin can be readily identified directly by chemical hemoglobin assays or indirectly by direct spectrophotometry or matching to a color scale. Conclusions: A threshold of 1 g hemoglobin/L is recommended for PZC/SZC measurements to avoid increases in zinc caused by hemolysis. The use of this threshold may improve zinc assessment for monitoring zinc status and nutritional interventions.

  4. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
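
    The best-performing method, thresholding at the minimum of the second derivative of the intensity profile, reduces to a few lines. This sketch assumes a clean 1-D profile taken across an object's edge, from bright interior to dark background:

```python
import numpy as np

def second_derivative_threshold(profile):
    """Threshold intensity at the minimum of the profile's 2nd derivative.

    `profile` is a 1-D intensity profile across the edge of a
    fluorescing object (bright interior to dark background).
    """
    p = np.asarray(profile, float)
    d2 = np.gradient(np.gradient(p))      # discrete second derivative
    return p[int(np.argmin(d2))]          # intensity at the d2 minimum
```

    For a logistic-shaped edge this lands on the bright side of the midpoint, which is consistent with the finding that midpoint and first-derivative rules place the boundary too far out and shrink the measured area.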

  5. An objective estimate of energy intake during weight gain using the intake-balance method123

    PubMed Central

    Gilmore, L Anne; Ravussin, Eric; Bray, George A; Han, Hongmei; Redman, Leanne M

    2014-01-01

    Background: Estimates of energy intake (EI) in humans have limited validity. Objective: The objective was to test the accuracy and precision of the intake-balance method to estimate EI during weight gain induced by overfeeding. Design: In 2 studies of controlled overfeeding (1 inpatient study and 1 outpatient study), baseline energy requirements were determined by a doubly labeled water study and caloric titration to weight maintenance. Overfeeding was prescribed as 140% of baseline energy requirements for 56 d. Changes in weight, fat mass (FM), and fat-free mass (FFM) were used to estimate change in energy stores (ΔES). Overfeeding EI was estimated as the sum of baseline energy requirements, thermic effect of food, and ΔES. The estimated overfeeding EI was then compared with the actual EI consumed in the metabolic chamber during the last week of overfeeding. Results: In inpatient individuals, calculated EI during overfeeding determined from ΔES in FM and FFM was (mean ± SD) 3461 ± 848 kcal/d, which was not significantly (−29 ± 273 kcal/d or 0.8%; limits of agreement: −564, 505 kcal/d; P = 0.78) different from the actual EI provided (3490 ± 729 kcal/d). Estimated EI determined from ΔES in weight closely estimated actual intake (−7 ± 193 kcal/d or 0.2%; limits of agreement: −386, 370 kcal/d; P = 0.9). In free-living individuals, estimated EI during overfeeding determined from ΔES in FM and FFM was 4123 ± 500 kcal/d and underestimated actual EI (4286 ± 488 kcal/d; −162 ± 301 kcal or 3.8%; limits of agreement: −751, 427 kcal/d; P = 0.003). Estimated EI determined from ΔES in weight also underestimated actual intake (−159 ± 270 kcal/d or 3.7%; limits of agreement: −688, 370 kcal/d; P = 0.001). Conclusion: The intake-balance method can be used to estimate EI during a period of weight gain as a result of 40% overfeeding in individuals who are inpatients or free-living with only a slight underestimate of actual EI by 0.2–3.8%. This trial
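
    The intake-balance arithmetic can be made concrete. This sketch uses typical literature energy densities (9.3 kcal/g for fat mass, 1.1 kcal/g for fat-free mass) and a 10% thermic effect of food; the study's exact constants may differ:

```python
def estimated_energy_intake(baseline_kcal, delta_fm_g, delta_ffm_g,
                            days, tef_fraction=0.10):
    """Intake-balance estimate of energy intake during overfeeding.

    Energy densities (9.3 kcal/g fat mass, 1.1 kcal/g fat-free mass)
    and the 10% thermic-effect-of-food fraction are typical literature
    values assumed here, not the study's published constants.
    """
    # change in energy stores, converted to kcal/day over the period
    delta_es = (9.3 * delta_fm_g + 1.1 * delta_ffm_g) / days
    # EI = baseline requirement + delta ES, inflated for TEF (~10% of EI)
    return (baseline_kcal + delta_es) / (1.0 - tef_fraction)
```

    For example, a 3 kg fat-mass and 1 kg fat-free-mass gain over 56 days on a 2500 kcal/d baseline implies an intake in the mid-3000s of kcal/d, the same order as the overfeeding intakes reported.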

  6. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  7. Climate change, population immunity, and hyperendemicity in the transmission threshold of dengue.

    PubMed

    Oki, Mika; Yamamoto, Taro

    2012-01-01

It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist (a state known as hyperendemicity), and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas.
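The threshold-density idea can be illustrated with a textbook Ross-Macdonald-style R0 for vector-borne transmission: set R0 = 1 and solve for the mosquito density m. This is a sketch under assumed parameter values (biting rate a, transmission probabilities b and c, mosquito mortality mu, extrinsic incubation period n, human recovery rate r); it is not the authors' model, which additionally tracks immunity and serotypes.

```python
import math

# Threshold mosquitoes-per-person (MPP) from a Ross-Macdonald-style
# R0 = m * a^2 * b * c * exp(-mu * n) / (r * mu); setting R0 = 1 and
# solving for m gives the epidemic threshold density.
def threshold_mpp(a, b, c, mu, n, r):
    return (r * mu * math.exp(mu * n)) / (a ** 2 * b * c)

# Warmer temperatures shorten the extrinsic incubation period n, which
# lowers the threshold density needed to trigger an epidemic.
print(round(threshold_mpp(a=0.5, b=0.5, c=0.5, mu=0.1, n=10, r=1 / 7), 3))  # → 0.621
```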

  8. On the prediction of threshold friction velocity of wind erosion using soil reflectance spectroscopy

    USGS Publications Warehouse

    Li, Junran; Flagg, Cody B.; Okin, Gregory S.; Painter, Thomas H.; Dintwe, Kebonye; Belnap, Jayne

    2015-01-01

Current approaches to estimating the threshold friction velocity (TFV) of soil particle movement, both experimental and empirical, suffer from various disadvantages, and they are particularly ineffective for estimating TFVs at regional to global scales. Reflectance spectroscopy has been widely used to obtain TFV-related soil properties (e.g., moisture, texture, crust); however, no studies have attempted to relate soil TFV directly to spectral reflectance. The objective of this study was to investigate the relationship between soil TFV and soil reflectance in the visible and near-infrared (VIS–NIR, 350–2500 nm) spectral region, and to identify the best range of wavelengths, or combinations of wavelengths, to predict TFV. The threshold friction velocities of 31 soils, along with their reflectance spectra and texture, were measured in the Mojave Desert, California, and Moab, Utah. A correlation analysis between TFV and soil reflectance identified a number of isolated, narrow spectral domains that largely fell into two spectral regions, the VIS area (400–700 nm) and the short-wavelength infrared (SWIR) area (1100–2500 nm). A partial least squares regression (PLSR) analysis confirmed the significant bands identified by the correlation analysis. The PLSR further identified a strong relationship between the first-difference transformation and TFV in several narrow regions around 1400, 1900, and 2200 nm. The use of PLSR allowed us to identify a total of 17 key wavelengths in the investigated spectral range, which may be used as the optimal spectral settings for estimating TFV in the laboratory and field, or for mapping TFV using airborne/satellite sensors.

  9. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that a point cloud can be modelled as a mixture of Gaussians; separating ground points from non-ground points can then be recast as separating the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum-likelihood estimates of the mixture parameters, and with the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to refine the filtering results acquired by the EM method. The proposed algorithm was tested on two real-world datasets. Experimental results showed that the proposed method filters non-ground points effectively. To evaluate the method quantitatively, this paper adopted the benchmark dataset provided by the ISPRS; the proposed algorithm obtained a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
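The EM step the abstract describes can be sketched for the simplest case: a 1-D two-component Gaussian mixture over point heights, labelling each point by the component with the larger posterior. Real point clouds are 3-D and the paper also exploits intensity; those details are omitted here, and all numbers below are synthetic.

```python
import math
import random

# EM for a two-component 1-D Gaussian mixture: alternate E-step
# (posterior responsibilities) and M-step (weights, means, variances),
# then label each point by its most responsible component.
def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]          # crude initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    resp = []
    for _ in range(iters):
        resp = []
        for x in xs:                 # E-step
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in (0, 1):             # M-step
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    labels = [0 if r[0] >= r[1] else 1 for r in resp]
    return mu, labels

random.seed(1)
ground = [random.gauss(0.0, 0.3) for _ in range(200)]   # terrain heights
objects = [random.gauss(5.0, 0.5) for _ in range(50)]   # buildings/vegetation
mu, labels = em_two_gaussians(ground + objects)
print(sum(labels[:200]), sum(labels[200:]))  # points assigned to component 1
```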

  10. Dynamic modelling and parameter estimation of a hydraulic robot manipulator using a multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.

    2017-04-01

This paper concerns the problem of dynamic modelling and parameter estimation for a seven-degree-of-freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control-model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output-error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) to find a suitable solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared with the single-objective output-error problem formulation. This is achieved by implicitly integrating the validation phase inside the algorithm and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.

  11. Detection Thresholds of Falling Snow from Satellite-Borne Active and Passive Sensors

    NASA Technical Reports Server (NTRS)

    Jackson, Gail

    2012-01-01

Precipitation, including rain and snow, is a critical part of the Earth's energy and hydrology cycles. In order to collect information on the complete global precipitation cycle and to understand the energy budget in terms of precipitation, uniform global estimates of both liquid and frozen precipitation must be collected. Active observations of falling snow are somewhat easier to obtain, since the radar detects the precipitation particles and one only needs to know the surface temperature to determine whether it is liquid rain or snow. The challenges of estimating falling snow from passive spaceborne observations still exist, though progress is being made. While these challenges are still being addressed, knowledge of their impact on expected retrieval results is an important key for understanding falling snow retrieval estimations. Important information for assessing falling snow retrievals includes knowing the thresholds of detection for active and passive sensors, various sensor channel configurations, snow event system characteristics, snowflake particle assumptions, and surface types. For example, can a lake-effect snow system with low (2.5 km) cloud tops, an ice water content (IWC) at the surface of 0.25 g·m⁻³, and dendrite snowflakes be detected? If this information is known, we can focus retrieval efforts on detectable storms and concentrate advances on achievable results. Here, the focus is to determine thresholds of detection of falling snow for various snow conditions over land and lake surfaces. The analysis relies on Weather Research and Forecasting (WRF) model simulations of falling snow cases, since simulations provide all the information needed to determine both the measurements from space and the ground truth. Results are presented for active radar at Ku-, Ka-, and W-band and for passive radiometer channels from 10 to 183 GHz (Skofronick-Jackson, et al., submitted to IEEE TGRS, April 2012). The notable results show: (1) the W-Band radar has detection thresholds more

  12. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.

    PubMed

    Green, Adam W; Bailey, Larissa L

    2015-01-01

    Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvatica) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies.

  13. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. In line with this, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.

  14. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

Potential predictive biomarkers are often measured on a continuous scale, but in practice a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ groups is desirable. Early-phase clinical trials are increasingly using biomarkers for patient selection, but at this stage it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed-threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed-threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
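The final analysis rests on a one-sided exact binomial test of the response rate in the selected subset against an uninteresting rate p0. A sketch of that tail computation; the counts and p0 below are invented for illustration, not taken from the trial.

```python
from math import comb

# One-sided exact binomial test: probability of observing at least
# `successes` responses in n patients if the true response rate were p0.
def binom_exact_p(successes, n, p0):
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
               for k in range(successes, n + 1))

# e.g. 14 responders out of 30 in the biomarker-positive subset,
# tested against an uninteresting response rate of 25%
p = binom_exact_p(successes=14, n=30, p0=0.25)
print(round(p, 4))
```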

  15. Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph

    2013-01-01

There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud-resolving model simulations of lake-effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g·m⁻³ detected at the surface, or an approximately 0.5–1.0 mm·h⁻¹ melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.

  16. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

Many grey-level thresholding methods based on the histogram or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images pay little attention to the morphology of the objects of interest, which can provide important indications for finding the optimum threshold, especially for structures with distinctive textural morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a Confocal Scanning Laser Microscope. After extracting the basic orientation of the vessel slice inside each sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of the regions neighbouring the one of interest, in both the x-y and optical directions, are then referenced to obtain the final threshold of the region, which makes the whole stack of images more continuous. The resulting images are characterized by suppression of both noise and uninteresting tissue conglutinated to vessels, while vessel connectivity and edge definition are improved. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.

  17. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851
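The projection step, estimating how many OSCE cases are needed to push reliability past 0.70, follows directly from the variance components in a decision study. A sketch for a persons-by-cases design; the variance-component values below are illustrative, not those estimated from the OSCE data.

```python
# Absolute (Phi) coefficient for a persons x cases design when the
# score is the average over n_cases cases: person variance over person
# variance plus error (case and interaction/residual variance shrink
# as more cases are averaged).
def phi_coefficient(var_p, var_c, var_pc_e, n_cases):
    return var_p / (var_p + (var_c + var_pc_e) / n_cases)

def cases_needed(var_p, var_c, var_pc_e, target=0.70):
    n = 1
    while phi_coefficient(var_p, var_c, var_pc_e, n) < target:
        n += 1
    return n

print(cases_needed(var_p=1.0, var_c=0.2, var_pc_e=0.8))  # → 3
```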

  18. Embryotoxic thresholds of mercury: estimates from individual mallard eggs

    USGS Publications Warehouse

    Heinz, G.H.; Hoffman, D.J.

    2003-01-01

Eighty pairs of mallards (Anas platyrhynchos) were fed an uncontaminated diet until each female had laid 15 eggs. After each female had laid her 15th egg, the pair was randomly assigned to a control diet or diets containing 5, 10, or 20 µg/g mercury as methylmercury until she had laid a second set of 15 eggs. There were 20 pairs in each group. After the second set of 15 eggs, the pair was returned to an uncontaminated diet, and the female was permitted to lay another 30 eggs. For those pairs fed the mercury diets, the even-numbered eggs were incubated and the odd-numbered eggs were saved for possible mercury analysis. Mercury in the even-numbered eggs was estimated as the average of what was in the neighboring odd-numbered eggs. Neurological signs of methylmercury poisoning were observed in ducklings that hatched from eggs containing as little as 2.3 µg/g estimated mercury on a wet-weight basis, and deformities were seen in embryos from eggs containing about 1 µg/g estimated mercury. Although embryo mortality was seen in eggs estimated to contain as little as 0.74 µg/g mercury, there were considerable differences in the sensitivity of mallard embryos, especially from different parents, with some embryos surviving as much as 30 or more µg/g mercury in the egg.

  19. Cool, warm, and heat-pain detection thresholds: testing methods and inferences about anatomic distribution of receptors.

    PubMed

    Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C

    1993-08-01

    We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Identification of a Hemolysis Threshold That Increases Plasma and Serum Zinc Concentration

    PubMed Central

    Otoo, Gloria E; Smith, Lauren; Siekmann, Jonathan H

    2017-01-01

    Background: Plasma or serum zinc concentration (PZC or SZC) is the primary measure of zinc status, but accurate sampling requires controlling for hemolysis to prevent leakage of zinc from erythrocytes. It is not established how much hemolysis can occur without changing PZC/SZC concentrations. Objective: This study determines a guideline for the level of hemolysis that can significantly elevate PZC/SZC. Methods: The effect of hemolysis on PZC/SZC was estimated by using standard hematologic variables and mineral content. The calculated hemolysis threshold was then compared with results from an in vitro study and a population survey. Hemolysis was assessed by hemoglobin and iron concentrations, direct spectrophotometry, and visual assessment of the plasma or serum. Zinc and iron concentrations were determined by inductively coupled plasma spectrometry. Results: A 5% increase in PZC/SZC was calculated to result from the lysis of 1.15% of the erythrocytes in whole blood, corresponding to ∼1 g hemoglobin/L added into the plasma or serum. Similarly, the addition of simulated hemolysate to control plasma in vitro caused a 5% increase in PZC when hemoglobin concentrations reached 1.18 ± 0.10 g/L. In addition, serum samples from a population nutritional survey were scored for hemolysis and analyzed for changes in SZC; samples with hemolysis in the range of 1–2.5 g hemoglobin/L showed an estimated increase in SZC of 6% compared with nonhemolyzed samples. Each approach indicated that a 5% increase in PZC/SZC occurs at ∼1 g hemoglobin/L in plasma or serum. This concentration of hemoglobin can be readily identified directly by chemical hemoglobin assays or indirectly by direct spectrophotometry or matching to a color scale. Conclusions: A threshold of 1 g hemoglobin/L is recommended for PZC/SZC measurements to avoid increases in zinc caused by hemolysis. The use of this threshold may improve zinc assessment for monitoring zinc status and nutritional interventions. PMID

  1. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of the exposure itself, utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension, or diabetes.
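The RD-ITT estimate described above can be sketched as two local linear fits, one on each side of the threshold within a bandwidth, differenced at the cutoff. The simulated data, bandwidth, and helper names below are illustrative assumptions; real analyses would also compute standard errors and check covariate balance.

```python
import random

# Fit a least-squares line and evaluate it at x0.
def ols_at(xs, ys, x0):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return (my - b * mx) + b * x0

# Sharp RD intention-to-treat estimate: difference of the two local
# linear fits evaluated at the cutoff.
def rd_itt(xs, ys, cutoff, bandwidth):
    left = [(x, y) for x, y in zip(xs, ys) if cutoff - bandwidth <= x < cutoff]
    right = [(x, y) for x, y in zip(xs, ys) if cutoff <= x <= cutoff + bandwidth]
    yl = ols_at([x for x, _ in left], [y for _, y in left], cutoff)
    yr = ols_at([x for x, _ in right], [y for _, y in right], cutoff)
    return yr - yl

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(2000)]
# smooth trend + a true jump of 2.0 at the threshold x = 5 + noise
ys = [0.3 * x + (2.0 if x >= 5 else 0.0) + random.gauss(0, 0.5) for x in xs]
est = rd_itt(xs, ys, cutoff=5.0, bandwidth=1.5)
print(round(est, 2))  # close to the true jump of 2.0
```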

  2. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

The computation of image segmentation becomes more complicated as the number of thresholds increases, and the selection and application of thresholds in image thresholding has at the same time become an NP problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, very close to the results of exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
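Kapur's entropy objective, which MDGWO optimizes, is simple to state: for a threshold t, sum the Shannon entropies of the normalized histogram classes below and above t. For a single threshold it can even be brute-forced (the metaheuristic matters once multiple thresholds make exhaustive search infeasible). A sketch with a synthetic bimodal histogram:

```python
import math

# Kapur's entropy for one threshold t: entropy of class [0, t] plus
# entropy of class (t, 255], each normalized by its class probability.
def kapur_objective(hist, t):
    total = sum(hist)
    p = [h / total for h in hist]
    def class_entropy(lo, hi):
        w = sum(p[lo:hi + 1])
        if w == 0:
            return 0.0
        return -sum(pi / w * math.log(pi / w) for pi in p[lo:hi + 1] if pi > 0)
    return class_entropy(0, t) + class_entropy(t + 1, len(hist) - 1)

def best_threshold(hist):
    return max(range(1, len(hist) - 1), key=lambda t: kapur_objective(hist, t))

# bimodal toy histogram: two flat modes, one around 60, one around 190
hist = [0] * 256
for g in range(40, 81):
    hist[g] = 100
for g in range(170, 211):
    hist[g] = 100
t = best_threshold(hist)
print(t)  # lands between the two modes
```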

  3. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for URGI self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement, and it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, validation experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and regions of interest. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions. The proposed method contributes to URGI with effective image enhancement for human eyes.
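The double-plateau idea underlying the method can be sketched as clipping the histogram between a lower and an upper plateau before equalizing: the upper plateau stops dominant background bins from consuming the dynamic range, the lower plateau keeps weak detail bins alive. The fixed thresholds below stand in for the paper's adaptive, correlated choice of the two.

```python
# Double-plateau histogram equalization on a flat list of grey values:
# clip each nonzero histogram bin into [t_low, t_up], then equalize
# using the CDF of the clipped histogram.
def double_plateau_equalize(pixels, t_up, t_low, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    clipped = [0 if h == 0 else min(max(h, t_low), t_up) for h in hist]
    total = sum(clipped)
    cdf, acc = [], 0
    for h in clipped:
        acc += h
        cdf.append(acc / total)
    return [round(cdf[v] * (levels - 1)) for v in pixels]

# low-contrast toy "image": values crowded into [100, 110]
img = [100, 100, 101, 105, 105, 105, 108, 110]
out = double_plateau_equalize(img, t_up=3, t_low=1)
print(min(out), max(out))  # → 64 255 (range stretched)
```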

  4. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated. The first is a fading-memory Kalman filter, which is equivalent to a sliding-window least-squares polynomial fit but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |Δv|² of the change-in-velocity vectors (Δv) computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter, the simplest of a class of nonlinear filters called order-statistics filters from the theory of robust statistics. The output of the median filter is practically insensitive to outliers, i.e., large maneuvers. The median of the |Δv|² data is proportional to the variance of Δv, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
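The median-filter detector (the third algorithm) can be sketched directly on a series of squared change-in-velocity magnitudes derived from successive TLEs: the sliding median gives a robust scale estimate that large maneuvers barely perturb, and a point is flagged when it exceeds a constant times that estimate. The window length, the constant k, and the simulated noise model below are assumptions for illustration.

```python
import random
from statistics import median

# Median-filter maneuver detector: flag sample i when it exceeds
# k times the sliding-window median (a robust variance proxy).
def detect_maneuvers(dv2, window=21, k=25.0):
    half = window // 2
    flags = []
    for i, v in enumerate(dv2):
        lo, hi = max(0, i - half), min(len(dv2), i + half + 1)
        var_est = median(dv2[lo:hi])   # robust scale estimate
        flags.append(v > k * var_est)
    return flags

random.seed(3)
dv2 = [random.gauss(0, 1.0) ** 2 for _ in range(200)]  # TLE noise only
dv2[120] += 400.0                                      # injected maneuver
flags = detect_maneuvers(dv2)
print([i for i, f in enumerate(flags) if f])
```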

  5. A simple method to estimate threshold friction velocity of wind erosion in the field

    USDA-ARS?s Scientific Manuscript database

    Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...

  6. Do Optimal Prognostic Thresholds in Continuous Physiological Variables Really Exist? Analysis of Origin of Apparent Thresholds, with Systematic Review for Peak Oxygen Consumption, Ejection Fraction and BNP

    PubMed Central

    Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.

    2014-01-01

    Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second
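
    The simulation finding above (that scanning for a "most significant" dichotomizing cut-point manufactures a threshold near the sample mean even when risk rises linearly) can be reproduced in miniature. The scan statistic, sample size, and risk model below are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

def most_significant_threshold(x, events):
    """Scan candidate cut-points and return the one maximizing the
    two-proportion chi-square statistic (event rate above vs. below)."""
    best_t, best_stat = None, -1.0
    for t in np.quantile(x, np.linspace(0.1, 0.9, 81)):
        lo, hi = events[x <= t], events[x > t]
        p = events.mean()
        stat = (lo.mean() - hi.mean()) ** 2 / (
            p * (1 - p) * (1 / lo.size + 1 / hi.size))
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t

# Risk rises *linearly* with x: there is no true step change anywhere
x = rng.normal(15.0, 3.0, 1500)              # a pVO2-like variable
risk = np.clip(0.05 + 0.02 * (x - x.min()), 0.0, 1.0)
events = rng.random(1500) < risk
t = most_significant_threshold(x, events)
print(round(float(t), 1), round(float(x.mean()), 1))
```

    For a Gaussian covariate with a linear trend, the scan statistic is maximized near the center of the distribution, so the "optimal" cut-point tracks the study population's mean rather than any biological step change.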

  7. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious-correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
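
    Of the four thresholding functions, SCAD is the least familiar; a minimal sketch of SCAD thresholding applied to an ensemble forecast covariance is given below. The ensemble size, state dimension, and threshold value are invented for illustration, and this is a simplification of, not a reproduction of, the authors' adaptive scheme.

```python
import numpy as np

def scad(z, lam, a=3.7):
    """SCAD thresholding: soft near zero, a linear transition zone, and
    the identity for large entries (so big correlations are not biased)."""
    z = np.asarray(z, dtype=float)
    out = np.where(np.abs(z) <= 2 * lam,
                   np.sign(z) * np.maximum(np.abs(z) - lam, 0.0), z)
    mid = (np.abs(z) > 2 * lam) & (np.abs(z) <= a * lam)
    out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)
    return out

def threshold_forecast_cov(ensemble, lam):
    """SCAD-threshold the off-diagonals of the forecast (sample) covariance
    of an ensemble (members x state variables); keep the variances."""
    P = np.cov(ensemble, rowvar=False)
    S = scad(P, lam)
    np.fill_diagonal(S, np.diag(P))
    return S

# Small ensemble of truly independent state variables: every off-diagonal
# entry of P is spurious sampling noise that thresholding should suppress
rng = np.random.default_rng(2)
ens = rng.standard_normal((15, 8))
P = np.cov(ens, rowvar=False)
S = threshold_forecast_cov(ens, lam=0.4)
off = S[~np.eye(8, dtype=bool)]
print(np.count_nonzero(off), "of", off.size, "off-diagonal entries survive")
```

    The piecewise form is continuous at |z| = 2λ and |z| = aλ; unlike soft thresholding, entries beyond aλ pass through unchanged, which is the low-bias property the abstract credits for SCAD's robustness.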

  8. Lactate threshold by muscle electrical impedance in professional rowers

    NASA Astrophysics Data System (ADS)

    Jotta, B.; Coutinho, A. B. B.; Pino, A. V.; Souza, M. N.

    2017-04-01

    Lactate threshold (LT) is one of the physiological parameters usually used in rowing training prescription because it indicates the transition from aerobic to anaerobic metabolism. Assessment of LT is classically based on a series of blood lactate concentrations obtained during progressive exercise tests and thus has an invasive aspect. The feasibility of noninvasive LT estimation through bioelectrical impedance spectroscopy (BIS) data collected from thigh muscles during rowing-ergometer exercise tests was investigated. Nineteen professional rowers, age 19 (mean) ± 4.8 (standard deviation) yr, height 187.3 ± 6.6 cm, body mass 83 ± 7.7 kg, and training experience of 7 ± 4 yr, were evaluated in a rowing-ergometer progressive test with paired measurements of blood lactate concentration and BIS in thigh muscles. Bioelectrical impedance data were obtained using a bipolar spectroscopy method based on the current response to a voltage step. An electrical model was used to interpret the BIS data and to derive parameters that were investigated for noninvasive LT estimation. From the serial blood lactate measurements, LT was also determined through the Dmax method (LTDmax). The zero crossing of the second derivative of the kinetics of the electrode capacitance (Ce), one of the BIS parameters, was used to estimate LT. The agreement between the LT estimates through BIS (LTBIS) and through the Dmax method (LTDmax) was evaluated using Bland-Altman plots, yielding a mean difference between the estimates of just 0.07 W and a Pearson correlation coefficient r = 0.85. This result supports the use of the proposed BIS-based method for noninvasively estimating the lactate threshold in rowing.
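
    The Dmax method used as the reference here is simple to sketch: fit a polynomial to the lactate curve and take the workload at which the curve is farthest from the chord joining its endpoints. The third-order fit and the synthetic test data below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def dmax_threshold(power, lactate):
    """Dmax estimate of the lactate threshold: the workload of maximum
    perpendicular distance between the fitted lactate curve and the
    straight line joining the fitted curve's endpoints."""
    coeffs = np.polyfit(power, lactate, 3)     # 3rd-order fit, a common choice
    x = np.linspace(power[0], power[-1], 500)
    y = np.polyval(coeffs, x)
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    # numerator of the point-to-line distance (the denominator is constant)
    dist = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    return x[np.argmax(dist)]

# Illustrative incremental test: lactate flat early, rising steeply late
power = np.array([100.0, 140.0, 180.0, 220.0, 260.0, 300.0, 340.0])   # watts
lactate = np.array([1.1, 1.2, 1.5, 2.1, 3.4, 5.8, 9.0])               # mmol/L
lt = dmax_threshold(power, lactate)
print(round(float(lt), 1))
```

    Geometrically, the maximum-distance point is where the fitted curve's slope equals the chord's slope, i.e. where lactate accumulation starts to outpace the average rise.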

  9. An object-based approach for areal rainfall estimation and validation of atmospheric models

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Simmer, Clemens

    2010-05-01

    An object-based approach for areal rainfall estimation is applied to pseudo-radar data simulated with a weather-forecast model, as well as to real radar volume data. The method aims to exploit as fully as possible the three-dimensional radar signals produced by precipitation-generating systems during their lifetime, in order to enhance areal rainfall estimation. To this end, radar-detected precipitation centroids are tracked and rain events are investigated using so-called Integral Radar Volume Descriptors (IRVDs), which contain relevant information about the underlying precipitation process. Some of the investigated descriptors are statistical quantities derived from the radar reflectivities within the boundary of a tracked rain cell, such as the area-mean reflectivity or the compactness of the cell; others evaluate the mean vertical structure during the tracking period at the near-surface reflectivity-weighted center of the cell, such as the mean effective efficiency or the mean echo-top height. The stage of evolution of a system is given by the trend in the bright-band fraction or related quantities. Furthermore, two descriptors not directly derived from radar data are considered: the mean wind shear and an orographic rainfall amplifier. While for pseudo-radar data a model based on a small set of IRVDs alone provides rainfall estimates of high accuracy, the application of such a model to the real world remains within the accuracies achievable with a constant Z-R relationship. However, a combined model based on single IRVDs and the Marshall-Palmer Z-R estimator already provides considerable enhancements, even though the resolution of the underlying database has room for improvement. The mean echo-top height, the mean effective efficiency, the empirical standard deviation, and the Marshall-Palmer estimator are selected for the final rainfall estimator. High correlations between storm height and rain rates, a shift of the probability distribution to higher values with increasing effective

  10. An adaptive design for updating the threshold value of a continuous biomarker.

    PubMed

    Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2016-11-30

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' groups is desirable. Early-phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed-threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed-threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  11. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  12. A Meta-Analysis of Children's Object-to-Mouth Frequency Data for Estimating Non-Dietary Ingestion Exposure

    EPA Science Inventory

    To improve estimates of non-dietary ingestion in probabilistic exposure modeling, a meta-analysis of children's object-to-mouth frequency was conducted using data from seven available studies representing 438 participants and ~ 1500 h of behavior observation. The analysis repres...

  13. Demand for Colonoscopy in Colorectal Cancer Screening Using a Quantitative Fecal Immunochemical Test and Age/Sex-Specific Thresholds for Test Positivity.

    PubMed

    Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi

    2018-06-01

    Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening program with a single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, a receiver-operating-characteristic-curve-derived threshold, a targeted sensitivity, a targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload, with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity

  14. Marsh collapse thresholds for coastal Louisiana estimated using elevation and vegetation index data

    USGS Publications Warehouse

    Couvillion, Brady R.; Beck, Holly

    2013-01-01

    Forecasting marsh collapse in coastal Louisiana as a result of changes in sea-level rise, subsidence, and accretion deficits necessitates an understanding of thresholds beyond which inundation stress impedes marsh survival. The variability in thresholds at which different marsh types cease to occur (i.e., marsh collapse) is not well understood. We utilized remotely sensed imagery, field data, and elevation data to help gain insight into the relationships between vegetation health and inundation. A Normalized Difference Vegetation Index (NDVI) dataset was calculated using remotely sensed data at peak biomass (August) and used as a proxy for vegetation health and productivity. Statistics were calculated for NDVI values by marsh type for intermediate, brackish, and saline marsh in coastal Louisiana. Marsh-type-specific NDVI values of 1.5 and 2 standard deviations below the mean were used as upper and lower limits to identify conditions indicative of collapse. As marshes seldom occur beyond these values, they are believed to represent a range within which marsh collapse is likely to occur. Inundation depth was selected as the primary candidate for evaluation of marsh collapse thresholds. Elevation relative to mean water level (MWL) was calculated by subtracting MWL from an elevation dataset compiled from multiple data types including light detection and ranging (lidar) and bathymetry. A cubic polynomial regression was used to examine a random subset of pixels to determine the relationship between elevation (relative to MWL) and NDVI. The marsh collapse uncertainty-range values were found by locating the intercept of the regression line with the NDVI values 1.5 and 2 standard deviations below the mean for each marsh type. Results indicate marsh collapse uncertainty ranges of 30.7–35.8 cm below MWL for intermediate marsh, 20–25.6 cm below MWL for brackish marsh, and 16.9–23.5 cm below MWL for saline marsh. These values are thought to represent the ranges of
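
    The threshold-finding step (intersecting a cubic NDVI-elevation regression with an NDVI level 1.5 standard deviations below the marsh-type mean) can be sketched as follows. The data are synthetic and every coefficient is invented; only the procedure mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic samples for one marsh type: elevation relative to MWL (cm)
# and an NDVI proxy that degrades with deeper inundation
elev = rng.uniform(-60.0, 40.0, 400)
ndvi = 0.20 + 0.006 * elev + rng.normal(0.0, 0.02, 400)

coeffs = np.polyfit(elev, ndvi, 3)            # cubic polynomial regression

# Collapse indicator: NDVI 1.5 standard deviations below the marsh-type mean
target = ndvi.mean() - 1.5 * ndvi.std()

# Elevation(s) where the fitted curve crosses the collapse NDVI level
shifted = coeffs.copy()
shifted[-1] -= target
roots = np.roots(shifted)
real = roots[np.abs(roots.imag) < 1e-9].real
collapse_elev = real[(real > elev.min()) & (real < elev.max())]
print(collapse_elev)
```

    Restricting to real roots within the observed elevation range discards the spurious roots a cubic necessarily has; repeating with 2 standard deviations would give the other bound of the collapse uncertainty range.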

  15. Defining indoor heat thresholds for health in the UK.

    PubMed

    Anderson, Mindy; Carmichael, Catriona; Murray, Virginia; Dengel, Andy; Swainson, Michael

    2013-05-01

    It has been recognised that as outdoor ambient temperatures increase past a particular threshold, so do mortality/morbidity rates. However, similar thresholds for indoor temperatures have not yet been identified. Due to a warming climate, the non-sustainability of air conditioning as a solution, and the desire for more energy-efficient airtight homes, thresholds for indoor temperature should be defined as a public health issue. The aim of this paper is to outline the need for indoor heat thresholds and to establish whether they can be identified. Our objectives include: describing how indoor temperature is measured; highlighting threshold measurements and indices; describing adaptation to heat; summarising the risk heat poses to susceptible groups; reviewing the current evidence on the link between sleep, heat and health; exploring current heat and health warning systems and thresholds; exploring the built environment and the risk of overheating; and identifying the gaps in current knowledge and research. A global literature search of key databases was conducted using a pre-defined set of keywords to retrieve peer-reviewed and grey literature. The paper applies the findings to the context of the UK. A total of 96 articles, reports, government documents and textbooks were analysed, and a gap analysis was conducted. Evidence on the effects of indoor heat on health implies that buildings are modifiers of the effect of climate on health outcomes. Personal exposure and place-based heat studies showed the most significant correlations between indoor heat and health outcomes. However, the data are sparse and inconclusive in terms of identifying evidence-based definitions for thresholds. Further research needs to be conducted in order to provide an evidence base for threshold determination. Indoor and outdoor heat are related but are different in terms of language and measurement. Future collaboration between the health and building sectors is needed to develop a common

  16. Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong

    2018-04-01

    Smeared Spectrum (SMSP) jamming is an effective jamming technique against linear frequency modulation (LFM) radar. Based on the difference between the time-frequency distributions of the jamming and the echo, a jamming suppression method using the generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Next, the time-frequency image and the corresponding gray-scale image are obtained via the GST. Finally, the Tsallis cross-entropy is used to compute the optimal segmentation threshold, and the jamming suppression filter is constructed from that threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP jamming.

  17. Climate Change, Population Immunity, and Hyperendemicity in the Transmission Threshold of Dengue

    PubMed Central

    Oki, Mika; Yamamoto, Taro

    2012-01-01

    Background It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist – a state known as hyperendemicity – and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. Methods and Findings We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. Conclusions The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas. PMID:23144746

  18. Comparisons between detection threshold and loudness perception for individual cochlear implant channels

    PubMed Central

    Bierer, Julie Arenberg; Nye, Amberly D

    2014-01-01

    Objective The objective of the present study, performed in cochlear implant listeners, was to examine how the level of current required to detect single-channel electrical pulse trains relates to loudness perception on the same channel. The working hypothesis was that channels with relatively high thresholds, when measured with a focused current pattern, interface poorly to the auditory nerve. For such channels a smaller dynamic range between perceptual threshold and the most comfortable loudness would result, in part, from a greater sensitivity to changes in electrical field spread compared to low-threshold channels. The narrower range of comfortable listening levels may have important implications for speech perception. Design Data were collected from eight adult cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp.). The partial tripolar (pTP) electrode configuration, consisting of one intracochlear active electrode, two flanking electrodes carrying a fraction (σ) of the return current, and an extracochlear ground, was used for stimulation. Single-channel detection thresholds and most comfortable listening levels were acquired using the most focused pTP configuration possible (σ ≥ 0.8) to identify three channels for further testing – those with the highest, median, and lowest thresholds – for each subject. Threshold, equal-loudness contours (at 50% of the monopolar dynamic range), and loudness growth functions were measured for each of these three test channels using various partial tripolar fractions. Results For all test channels, thresholds increased as the electrode configuration became more focused. The rate of increase with the focusing parameter σ was greatest for the high-threshold channel compared to the median- and low-threshold channels. The 50% equal-loudness contours exhibited similar rates of increase in level across test channels and subjects. Additionally, test channels with the highest

  19. Hour-glass ceilings: Work-hour thresholds, gendered health inequities.

    PubMed

    Dinh, Huong; Strazdins, Lyndall; Welsh, Jennifer

    2017-03-01

    Long workhours erode health, an effect that the setting of maximum weekly hours aims to avert. This 48-h limit, and the evidence base supporting it, have evolved from a workforce that was largely male, whose time in the labour force was enabled by women's domestic work and care-giving. The gender composition of the workforce has now changed, and many women (as well as some men) combine care-giving with paid work, a change viewed as fundamental for gender equality. However, it raises questions about the suitability of the work-time limit and the extent to which it is protective of health. We estimate workhour-mental health thresholds, testing whether they vary for men and women due to gendered workloads and constraints on and off the job. Using six waves of data from a nationally representative sample of Australian adults (24-65 years), surveyed in the Household Income Labour Dynamics of Australia Survey (N = 3828 men; 4062 women), our study uses a longitudinal, simultaneous-equation approach to address endogeneity. Averaging over the sample, we find an overall threshold of 39 h per week beyond which mental health declines. Separate curves then estimate thresholds for men and women, by high or low care and domestic time constraints, using stratified and pooled samples. We find gendered workhour-health limits (43.5 for men, 38 for women), which widen further once differences in resources on and off the job are considered. Only when time is 'unencumbered', and similar time constraints and contexts are assumed, do gender gaps narrow and thresholds approximate the 48-h limit. Our study reveals limits to contemporary workhour regulation, which may be systematically disadvantaging women's health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of
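
    The contrast drawn above can be illustrated with a toy version: a hard-threshold ("significant, then report the raw estimate") estimator versus a James-Stein-type linear shrinkage, with the raw maximum-likelihood estimate as baseline. The effect-size distribution, noise model, and cutoffs are illustrative assumptions, not the study's simulation design (which used local-FDR-based shrinkage).

```python
import numpy as np

rng = np.random.default_rng(4)

# True log fold changes: modest effects are common, the regime where
# "filter by significance, then report the raw estimate" is costly
n = 20000
theta = rng.normal(0.0, np.sqrt(2.0), n)
z = theta + rng.standard_normal(n)            # observed log-ratios, noise sd 1

mle = z                                        # report every raw estimate
hard = np.where(np.abs(z) > 2.0, z, 0.0)       # keep only "significant" genes
shrunk = (1.0 - (n - 2) / np.sum(z ** 2)) * z  # James-Stein linear shrinkage

mse = lambda est: float(np.mean((est - theta) ** 2))
print(mse(mle), mse(hard), mse(shrunk))
```

    In this dense-effects regime the hard-threshold estimator is worse than the MLE (its bias from zeroing modest true effects outweighs the variance it removes), while shrinkage is better than both, which is the qualitative ordering the abstract reports.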

  1. Threshold and non-threshold chemical carcinogens: A survey of the present regulatory landscape.

    PubMed

    Bevan, Ruth J; Harrison, Paul T C

    2017-08-01

    For the proper regulation of a carcinogenic material it is necessary to fully understand its mode of action, and in particular whether it demonstrates a threshold of effect. This paper explores our present understanding of carcinogenicity and the mechanisms underlying the carcinogenic response. The concepts of genotoxic and non-genotoxic, and threshold and non-threshold, carcinogens are fully described. We provide summary tables of the types of cancer considered to be associated with exposure to a number of carcinogens, together with the available evidence on whether carcinogenicity occurs through a threshold or non-threshold mechanism. In light of these observations we consider how different regulatory bodies approach the question of chemical carcinogenesis, looking in particular at the definitions and methodologies used to derive Occupational Exposure Levels (OELs) for carcinogens. We conclude that unless proper differentiation is made between threshold and non-threshold carcinogens, inappropriate risk-management measures may be put in place, which may also lead to difficulties in translating carcinogenicity research findings into appropriate health policies. We recommend that clear differentiation between threshold and non-threshold carcinogens be made by all expert groups and regulatory bodies dealing with carcinogen classification and risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
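
    A single-pass approximation of the two constraints named above (sparsity via generalized soft thresholding, positive definiteness via an eigenvalue constraint) can be sketched as follows; the paper's ADMM algorithm enforces both jointly and is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def sparse_pd_cov(X, tau, eps=1e-3):
    """Soft-threshold the off-diagonals of the sample covariance (sparsity),
    then floor the eigenvalues at eps (positive definiteness).

    A threshold-then-project pass: a simpler stand-in for the joint
    ADMM solution described in the paper.
    """
    C = np.cov(X, rowvar=False)
    S = np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)   # sparsify
    np.fill_diagonal(S, np.diag(C))                      # keep variances
    w, V = np.linalg.eigh((S + S.T) / 2.0)               # symmetric eig-decomp
    return V @ np.diag(np.maximum(w, eps)) @ V.T         # eigenvalue floor

rng = np.random.default_rng(5)
X = rng.standard_normal((40, 30))        # 40 samples, 30 variables
Sigma = sparse_pd_cov(X, tau=0.25)
print(float(np.linalg.eigvalsh(Sigma).min()))
```

    Plain thresholding alone can leave the matrix indefinite; the eigenvalue floor restores positive definiteness at the cost of slightly perturbing the zero pattern, which is exactly the trade-off the joint ADMM formulation is designed to manage.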

  3. Odor Detection Thresholds in a Population of Older Adults

    PubMed Central

    Schubert, Carla R.; Fischer, Mary E.; Pinto, A. Alex; Klein, Barbara E.K.; Klein, Ronald; Cruickshanks, Karen J.

    2016-01-01

    OBJECTIVE To measure odor detection thresholds and associated nasal and behavioral factors in an older adult population. STUDY DESIGN Cross-sectional cohort study METHODS Odor detection thresholds were obtained using an automated olfactometer on 832 participants, aged 68–99 (mean age 77) years in the 21-year (2013–2016) follow-up visit of the Epidemiology of Hearing Loss Study. RESULTS The mean odor detection threshold (ODT) score was 8.2 (range: 1–13; standard deviation = 2.54), corresponding to a n-butanol concentration of slightly less than 0.03%. Older participants were significantly more likely to have lower (worse) ODT scores than younger participants (p<0.001). There were no significant differences in mean ODT scores between men and women. Older age was significantly associated with worse performance in multivariable regression models and exercising at least once a week was associated with a reduced odds of having a low (≤5) ODT score. Cognitive impairment was also associated with poor performance while a history of allergies or a deviated septum were associated with better performance. CONCLUSION Odor detection threshold scores were worse in older age groups but similar between men and women in this large population of older adults. Regular exercise was associated with better odor detection thresholds adding to the evidence that decline in olfactory function with age may be partly preventable. PMID:28000220

  4. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator-species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds, together with skewness in the distribution of data along the gradient, produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.

  5. Thresholding functional connectomes by means of mixture modeling.

    PubMed

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations, however, remains an open research problem. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false-discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of previous knowledge of the functional architecture.
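The mixture idea can be illustrated with a toy two-component EM fit: a null component centered near zero absorbs weak edges, and edges with high posterior probability under the second component are retained (a simplified sketch, not the paper's pseudo-FDR procedure; the synthetic data, initialization, and 0.9 posterior cutoff are all illustrative assumptions):

```python
import numpy as np

def fit_two_gaussian_mixture(x, n_iter=100):
    """Plain EM for a 1D two-component Gaussian mixture (null vs. signal)."""
    mu = np.array([0.0, np.max(x)])          # crude init: null near 0, signal near max
    sigma = np.array([np.std(x), np.std(x)])
    pi = np.array([0.9, 0.1])
    for _ in range(n_iter):
        # E-step: responsibilities under each component
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.maximum(np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk),
                           1e-6)
    return pi, mu, sigma, resp

# Synthetic "partial correlations": many weak edges near 0, a few real ones near 0.6
rng = np.random.default_rng(1)
edges = np.concatenate([rng.normal(0.0, 0.05, 900), rng.normal(0.6, 0.05, 100)])
pi, mu, sigma, resp = fit_two_gaussian_mixture(edges)
keep = resp[:, 1] > 0.9     # posterior-probability thresholding of edges
```

Thresholding by posterior probability under the fitted model, rather than by a fixed proportion of edges, is what makes the resulting sparse connectome subject-specific.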

  6. Income Eligibility Thresholds, Premium Contributions, and Children's Coverage Outcomes: A Study of CHIP Expansions

    PubMed Central

    Gresenz, Carole Roan; Edgington, Sarah E; Laugesen, Miriam J; Escarce, José J

    2013-01-01

    Objective To understand the effects of Children's Health Insurance Program (CHIP) income eligibility thresholds and premium contribution requirements on health insurance coverage outcomes among children. Data Sources 2002–2009 Annual Social and Economic Supplements of the Current Population Survey linked to data from multiple secondary data sources. Study Design We use a selection correction model to simultaneously estimate program eligibility and coverage outcomes conditional upon eligibility. We simulate the effects of three premium schedules representing a range of generosity levels and the effects of income eligibility thresholds ranging from 200 to 400 percent of the federal poverty line. Principal Findings Premium contribution requirements decrease enrollment in public coverage and increase enrollment in private coverage, with larger effects for greater contribution levels. Our simulation results suggest minimal changes in coverage outcomes from eligibility expansions to higher income families under premium schedules that require more than a modest contribution (medium or high schedules). Conclusions Our simulation results are useful counterpoints to previous research that has estimated the average effect of program expansions as they were implemented without disentangling the effects of premiums or other program features. The sensitivity to premiums observed suggests that although contribution requirements may be effective in reducing crowd-out, they also have the potential, depending on the level of contribution required, to nullify the effects of CHIP expansions entirely. The persistence of uninsurance among children under the range of simulated scenarios points to the importance of Affordable Care Act provisions designed to make the process of obtaining coverage transparent and navigable. PMID:23398477

  7. Robust w-Estimators for Cryo-EM Class Means.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2016-02-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
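The flavor of such a w-estimator can be sketched as an iteratively reweighted average in which images far from the current estimate receive small weights (a generic robust-averaging sketch with a Cauchy-type weight function, not the specific estimator of the paper; the image sizes, noise levels, and tuning constant are made up):

```python
import numpy as np

def robust_class_mean(images, c=2.0, n_iter=20):
    """Iteratively reweighted average: images with large residuals relative to
    the current mean are down-weighted, so outliers barely contribute."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        # Frobenius-norm residual of each image against the current estimate
        r = np.linalg.norm(images - mean, axis=(1, 2))
        scale = np.median(r) + 1e-12
        w = 1.0 / (1.0 + (r / (c * scale)) ** 2)   # Cauchy-type weight function
        mean = np.tensordot(w, images, axes=1) / w.sum()
    return mean

rng = np.random.default_rng(2)
true = np.zeros((8, 8)); true[2:6, 2:6] = 1.0
inliers = true + 0.1 * rng.standard_normal((40, 8, 8))
outliers = 5.0 * rng.standard_normal((5, 8, 8))     # stand-ins for ice/contaminants
stack = np.concatenate([inliers, outliers])
robust = robust_class_mean(stack)
plain = stack.mean(axis=0)
```

Note that no hard rejection threshold appears anywhere: every image keeps a weight, but outliers get weights close to zero, which is the advantage the abstract emphasizes over cross-correlation cutoffs.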

  8. Definition of temperature thresholds: the example of the French heat wave warning system.

    PubMed

    Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal

    2013-01-01

    Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess in mortality associated with different temperature thresholds, using a generalised additive model, controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperatures. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation is ongoing, more research is required to understand if and when thresholds should be modified.

  9. Thresholds of information leakage for speech security outside meeting rooms.

    PubMed

    Robinson, Matthew; Hopkins, Carl; Worrall, Ken; Jackson, Tim

    2014-09-01

    This paper describes an approach to provide speech security outside meeting rooms where a covert listener might attempt to extract confidential information. Decision-based experiments are used to establish a relationship between an objective measurement of the Speech Transmission Index (STI) and a subjective assessment relating to the threshold of information leakage. This threshold is defined for a specific percentage of English words that are identifiable with a maximum safe vocal effort (e.g., "normal" speech) used by the meeting participants. The results demonstrate that it is possible to quantify an offset that links STI with a specific threshold of information leakage which describes the percentage of words identified. The offsets for male talkers are shown to be approximately 10 dB larger than for female talkers. Hence for speech security it is possible to determine offsets for the threshold of information leakage using male talkers as the "worst case scenario." To define a suitable threshold of information leakage, the results show that a robust definition can be based upon 1%, 2%, or 5% of words identified. For these percentages, results are presented for offset values corresponding to different STI values in a range from 0.1 to 0.3.

  10. Influence of Injury Risk Thresholds on the Performance of an Algorithm to Predict Crashes with Serious Injuries

    PubMed Central

    Bahouth, George; Digges, Kennerly; Schulman, Carl

    2012-01-01

    This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
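The trade-off the authors tune can be reproduced on synthetic data: sweeping the injury-risk threshold shows sensitivity falling and positive predictive value rising (all numbers below are simulated for illustration; no NASS/CDS data or fitted regression coefficients are used):

```python
import numpy as np

rng = np.random.default_rng(5)
risk = rng.beta(1.0, 15.0, 10_000)       # hypothetical predicted injury risk per crash
serious = rng.random(10_000) < risk      # outcome simulated with that same risk

def sens_ppv(thr):
    """Sensitivity and positive predictive value when flagging crashes
    whose predicted risk meets or exceeds the threshold."""
    flag = risk >= thr
    tp = np.sum(flag & serious)
    return tp / serious.sum(), tp / flag.sum()

metrics = {thr: sens_ppv(thr) for thr in (0.05, 0.10, 0.20)}
```

Choosing a threshold is exactly this exchange: a lower cutoff flags more of the truly serious crashes (higher sensitivity) at the price of more false alarms (lower PPV).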

  11. Estimation of frequency offset in mobile satellite modems

    NASA Technical Reports Server (NTRS)

    Cowley, W. G.; Rice, M.; Mclean, A. N.

    1993-01-01

    In mobilesat applications, frequency offset on the received signal must be estimated and removed prior to further modem processing. A straightforward method of estimating the carrier frequency offset is to raise the received MPSK signal to the M-th power, and then estimate the location of the peak spectral component. An analysis of the lower signal to noise threshold of this method is carried out for BPSK signals. Predicted thresholds are compared to simulation results. It is shown how the method can be extended to pi/M MPSK signals. A real-time implementation of frequency offset estimation for the Australian mobile satellite system is described.
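For BPSK (M = 2), the raise-to-the-M-th-power estimator amounts to squaring the received samples, which strips the ±1 modulation and leaves a spectral line at twice the offset (a baseband simulation sketch; the sample rate, offset, and noise level are arbitrary choices, not system parameters from the paper):

```python
import numpy as np

fs = 10_000.0            # sample rate (Hz)
f_off = 123.0            # true carrier frequency offset (Hz)
n = 4096
rng = np.random.default_rng(3)

symbols = rng.choice([-1.0, 1.0], size=n)            # BPSK, one sample per symbol
t = np.arange(n) / fs
rx = symbols * np.exp(2j * np.pi * f_off * t)
rx = rx + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Squaring removes the +/-1 modulation, leaving a tone at 2 * f_off
spec = np.abs(np.fft.fft(rx ** 2))
freqs = np.fft.fftfreq(n, d=1 / fs)
f_hat = freqs[np.argmax(spec)] / 2.0                 # divide by M = 2
```

The SNR threshold the abstract analyzes corresponds to the noise level at which the squared-signal spectral peak is no longer reliably the largest FFT bin.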

  12. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Cost-Savings Analysis of Renal Scintigraphy, Stratified by Renal Function Thresholds: Mercaptoacetyltriglycine Versus Diethylene Triamine Penta-Acetic Acid.

    PubMed

    Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J

    2016-07-01

    To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m²): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73 m² (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College

  14. INTEGRATED AND FIBER OPTICS: Threshold of photoinduced conversion of the polarization of radiation in lithium niobate optical waveguides

    NASA Astrophysics Data System (ADS)

    Kazanskiĭ, P. G.

    1989-02-01

    A threshold of photoinduced conversion of an ordinary wave into an extraordinary one was discovered for lithium niobate optical waveguides. The threshold intensity of the radiation was determined for waveguides prepared under different conditions. The experimental results were compared with theoretical estimates.

  15. Analytical expression for Risken-Nummedal-Graham-Haken instability threshold in quantum cascade lasers.

    PubMed

    Vukovic, N; Radovanovic, J; Milanovic, V; Boiko, D L

    2016-11-14

    We have obtained a closed-form expression for the threshold of Risken-Nummedal-Graham-Haken (RNGH) multimode instability in a Fabry-Pérot (FP) cavity quantum cascade laser (QCL). This simple analytical expression is a versatile tool that can easily be applied in practical situations which require analysis of QCL dynamic behavior and estimation of its RNGH multimode instability threshold. Our model for a FP cavity laser accounts for the carrier coherence grating and carrier population grating as well as their relaxation due to carrier diffusion. In the model, the RNGH instability threshold is analyzed using a second-order bi-orthogonal perturbation theory and we confirm our analytical solution by a comparison with the numerical simulations. In particular, the model predicts a low RNGH instability threshold in QCLs. This agrees very well with experimental data available in the literature.

  16. Validity of Lactate Thresholds in Inline Speed Skating.

    PubMed

    Hecksteden, Anne; Heinze, Tobias; Faude, Oliver; Kindermann, Wilfried; Meyer, Tim

    2015-09-01

    Lactate thresholds are commonly used as estimates of the highest workload where lactate production and elimination are in equilibrium (maximum lactate steady state [MLSS]). However, because of the high static load on propulsive muscles, lactate kinetics in inline speed skating may differ significantly from other endurance exercise modes. Therefore, the discipline-specific validity of lactate thresholds has to be verified. Sixteen competitive inline speed skaters (age: 30 ± 10 years; training per week: 10 ± 4 hours) completed an exhaustive stepwise incremental exercise test (start 24 km·h⁻¹, step duration 3 minutes, increment 2 km·h⁻¹) to determine individual anaerobic threshold (IAT) and the workload corresponding to a blood lactate concentration of 4 mmol·L⁻¹ (LT4) and 2-5 continuous load tests of (up to) 30 minutes to determine MLSS. The IAT and LT4 correlated significantly with MLSS, and the mean differences were almost negligible (MLSS 29.5 ± 2.5 km·h⁻¹; IAT 29.2 ± 2.0 km·h⁻¹; LT4 29.6 ± 2.3 km·h⁻¹; p > 0.1 for all differences). However, the variability of differences was considerable, resulting in 95% limits of agreement in the upper range of values known from other endurance disciplines (2.6 km·h⁻¹ [8.8%] for IAT and 3.1 km·h⁻¹ [10.3%] for LT4). Consequently, IAT and LT4 may be considered as valid estimates of the MLSS in inline speed skating, but verification by means of a constant load test should be considered in cases of doubt or when optimal accuracy is needed (e.g., in elite athletes or scientific studies).
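The agreement statistic used here, the 95% limits of agreement, comes from a Bland-Altman analysis of paired threshold estimates and can be computed as follows (the paired speeds below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical paired speeds (km/h): MLSS from constant-load tests vs. IAT
mlss = np.array([29.5, 31.0, 27.5, 30.0, 28.0, 32.0, 29.0, 30.5])
iat = np.array([29.0, 31.5, 26.5, 30.5, 27.0, 31.0, 29.5, 30.0])

diff = iat - mlss
bias = diff.mean()                       # mean difference between the two methods
half_width = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement
lower, upper = bias - half_width, bias + half_width
```

A small mean bias with wide limits of agreement is exactly the pattern the abstract reports: the methods agree on average but can disagree substantially for an individual athlete.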

  17. Use of the dispersion ratio in estimating the nonlinear properties of an object of diagnosis

    NASA Technical Reports Server (NTRS)

    Balitskiy, F. Y.; Genkin, M. D.; Ivanova, M. A.; Kobrinskiy, A. A.; Sokolova, A. G.

    1973-01-01

    An experimental investigation for estimating the nonlinearity of a diagnostic object was carried out on a single-stage, spur gear reducer. The linearity of the properties of spur gearing (including the linearity of its mode of operation) was tested. Torsional vibrations of the driven wheel and transverse (to the meshing plane) vibrations of the drive wheel on its support were taken as the two outputs of the object to be analyzed. The results of the investigation showed that the degree of nonlinearity of a reducing gear is essentially connected with its operating mode, so that different mathematical models of it can correspond to different values of the system parameters.

  18. Bioclimatic Thresholds, Thermal Constants and Survival of Mealybug, Phenacoccus solenopsis (Hemiptera: Pseudococcidae) in Response to Constant Temperatures on Hibiscus

    PubMed Central

    Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi

    2013-01-01

    Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae) were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai’s linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
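The linear degree-day calculation behind Tmin and the thermal constant can be sketched as follows: regress development rate (1/days) on temperature; the x-intercept estimates the lower developmental threshold and the reciprocal slope the degree-day requirement (the development times below are hypothetical, not the measured values for P. solenopsis):

```python
import numpy as np

# Hypothetical development times (days) at constant rearing temperatures (°C)
temps = np.array([15.0, 20.0, 25.0, 27.0, 30.0])
dev_days = np.array([44.0, 25.0, 17.0, 15.0, 13.0])

rate = 1.0 / dev_days                 # development rate (1/day)
b, a = np.polyfit(temps, rate, 1)     # linear model: rate ≈ a + b * T
t_min = -a / b                        # lower developmental threshold (°C)
k = 1.0 / b                           # thermal constant (degree-days)
```

This is the classical linear degree-day model; the abstract's nonlinear models additionally capture the falloff in development rate near the upper threshold, which the straight line cannot.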

  19. Effects of moisture content on wind erosion thresholds of biochar

    NASA Astrophysics Data System (ADS)

    Silva, F. C.; Borrego, C.; Keizer, J. J.; Amorim, J. H.; Verheijen, F. G. A.

    2015-12-01

    Biochar, i.e. pyrolysed biomass, as a soil conditioner is gaining increasing attention in research and industry, with guidelines and certifications being developed for biochar production, storage and handling, as well as for application to soils. Adding water to biochar aims to reduce its susceptibility to become air-borne during and after the application to soils, thereby preventing, amongst others, human health issues from inhalation. The Bagnold model has previously been modified to explain the threshold friction velocity of coal particles at different moisture contents, by adding an adhesive effect. However, it is unknown if this model also works for biochar particles. We measured the threshold friction velocities of a range of biochar particles (woody feedstock) under a range of moisture contents by using a wind tunnel, and tested the performance of the modified Bagnold model. Results showed that the threshold friction velocity can be significantly increased by keeping the gravimetric moisture content at or above 15% to promote adhesive effects between the small particles. For the specific biochar of this study, the modified Bagnold model accurately estimated threshold friction velocities of biochar particles up to moisture contents of 10%.
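The kind of moisture correction being tested can be sketched as Bagnold's dry threshold friction velocity multiplied by an adhesion factor that grows with gravimetric moisture (a Fécan-style multiplicative correction is assumed here; the coefficients and the particle density used for biochar are illustrative, not fitted values from this study):

```python
import math

def bagnold_dry_ust(d, rho_p=500.0, rho_a=1.2, A=0.1, g=9.81):
    """Bagnold's dry threshold friction velocity (m/s) for particle diameter d (m).
    rho_p is an assumed (low) biochar particle density; rho_a is air density."""
    return A * math.sqrt((rho_p - rho_a) / rho_a * g * d)

def moist_ust(d, w_pct, c=1.0, x=0.68):
    """Moisture-modified threshold: an adhesion factor raises the dry value.
    w_pct is gravimetric moisture content in percent; c and x are illustrative."""
    return bagnold_dry_ust(d) * math.sqrt(1.0 + c * w_pct ** x)

u_dry = bagnold_dry_ust(1e-4)        # ~100 micron particle
u_5 = moist_ust(1e-4, 5.0)
u_15 = moist_ust(1e-4, 15.0)
```

The monotone rise of the threshold with moisture is the practical point of the abstract: keeping biochar at or above roughly 15% gravimetric moisture raises the wind speed needed to mobilize it.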

  20. Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene G (Inventor)

    2011-01-01

    An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.

  1. Invited perspectives: Hydrological perspectives on precipitation intensity-duration thresholds for landslide initiation: proposing hydro-meteorological thresholds

    NASA Astrophysics Data System (ADS)

    Bogaard, Thom; Greco, Roberto

    2018-01-01

    Many shallow landslides and debris flows are precipitation initiated. Therefore, regional landslide hazard assessment is often based on empirically derived precipitation intensity-duration (ID) thresholds and landslide inventories. Generally, two features of precipitation events are plotted and labeled with (shallow) landslide occurrence or non-occurrence, and a separation line or zone is then drawn, mostly in logarithmic space. The practical background of ID is that often only meteorological information is available when analyzing (non-)occurrence of shallow landslides and, at the same time, precipitation information may be a good proxy for both the meteorological trigger and the hydrological cause. Although applied in many case studies, this approach suffers from many false positives as well as limited physical process understanding. Some first steps towards a more hydrologically based approach have been proposed in the past, but these efforts received limited follow-up. Therefore, the objective of our paper is to (a) critically analyze the concept of precipitation ID thresholds for shallow landslides and debris flows from a hydro-meteorological point of view and (b) propose a trigger-cause conceptual framework for lumped regional hydro-meteorological hazard assessment based on published examples and associated discussion. We discuss the ID thresholds in relation to return periods of precipitation, soil physics, and slope and catchment water balance. With this paper, we aim to contribute to the development of a stronger conceptual model for regional landslide hazard assessment based on physical process understanding and empirical data.
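An ID threshold of this kind is conventionally a power law in intensity-duration space, so classifying a rainfall event reduces to a one-line comparison (the coefficients below are arbitrary placeholders, not calibrated values from any inventory):

```python
# Power-law rainfall intensity-duration threshold: I_thr(D) = alpha * D**beta
alpha, beta = 10.0, -0.6     # illustrative coefficients (mm/h and dimensionless)

def exceeds_id_threshold(intensity_mm_h, duration_h):
    """True if a rainfall event plots above the threshold curve,
    i.e. would be flagged as a potential landslide trigger."""
    return intensity_mm_h > alpha * duration_h ** beta

flags = [exceeds_id_threshold(8.0, 2.0), exceeds_id_threshold(3.0, 2.0)]
```

The paper's critique applies regardless of the fitted coefficients: events above the curve share only their rainfall signature, not necessarily the hydrological conditions that actually cause failure.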

  2. Spatial distribution of threshold wind speeds for dust outbreaks in northeast Asia

    NASA Astrophysics Data System (ADS)

    Kimura, Reiji; Shinoda, Masato

    2010-01-01

    Asian windblown dust events cause human and animal health effects and agricultural damage in dust source areas such as China and Mongolia and cause "yellow sand" events in Japan and Korea. It is desirable to develop an early warning system to help prevent such damage. We used our observations at a Mongolian station together with data from previous studies to model the spatial distribution of threshold wind speeds for dust events in northeast Asia (35°-45°N and 100°-115°E). Using a map of Normalized Difference Vegetation Index (NDVI), we estimated spatial distributions of vegetation cover, roughness length, threshold friction velocity, and threshold wind speed. We also recognized a relationship between NDVI in the dust season and maximum NDVI in the previous year. Thus, it may be possible to predict the threshold wind speed in the next dust season using the maximum NDVI in the previous year.

  3. Evidence for the contribution of a threshold retrieval process to semantic memory.

    PubMed

    Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A

    2017-10-01

    It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
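One common parametrization of the dual-process signal detection (DPSD) model makes the threshold contribution explicit: hits mix an all-or-none recollection probability R with a continuous Gaussian familiarity process, while false alarms arise from familiarity alone (a generic sketch of the standard dual-process model, not the DP-RR variant introduced in the paper; the parameter values are arbitrary):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dpsd_roc_point(criterion, d_prime=1.0, recollection=0.3):
    """Hit rate combines threshold recollection with familiarity;
    false-alarm rate reflects familiarity (noise distribution) only."""
    hit = recollection + (1 - recollection) * phi(d_prime - criterion)
    fa = phi(-criterion)
    return fa, hit

# Sweep the response criterion to trace out an asymmetric ROC
points = [dpsd_roc_point(c) for c in (-1.0, 0.0, 1.0, 2.0)]
```

The signature of the threshold process is the nonzero lower asymptote: even at a strict criterion the hit rate never falls below R, which flattens the upper ROC in the way the fitted models exploit.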

  4. Estimation of Crack Initiation and Propagation Thresholds of Confined Brittle Coal Specimens Based on Energy Dissipation Theory

    NASA Astrophysics Data System (ADS)

    Ning, Jianguo; Wang, Jun; Jiang, Jinquan; Hu, Shanchao; Jiang, Lishuai; Liu, Xuesheng

    2018-01-01

    A new energy-dissipation method to identify crack initiation and propagation thresholds is introduced. Conventional and cyclic loading-unloading triaxial compression tests and acoustic emission experiments were performed for coal specimens from a 980-m deep mine with different confining pressures of 10, 15, 20, 25, 30, and 35 MPa. Stress-strain relations, acoustic emission patterns, and energy evolution characteristics obtained during the triaxial compression tests were analyzed. The majority of the input energy stored in the coal specimens took the form of elastic strain energy. After the elastic-deformation stage, part of the input energy was consumed by stable crack propagation. However, with an increase in stress levels, unstable crack propagation commenced, and the energy dissipation and coal damage were accelerated. The variation in the pre-peak energy-dissipation ratio was consistent with the coal damage. This new method demonstrates that the crack initiation threshold was proportional to the peak stress (σp), ranging from 0.4351σp to 0.4753σp, and that the crack damage threshold ranged from 0.8087σp to 0.8677σp.

  5. Determination of a Testing Threshold for Lumbar Puncture in the Diagnosis of Subarachnoid Hemorrhage after a Negative Head Computed Tomography: A Decision Analysis.

    PubMed

    Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H

    2016-10-01

    The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings, being evaluated for SAH, after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease where the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the test threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included probability of aneurysmal versus nonaneurysmal SAH after negative head CT, probability of long-term morbidity from initial missed SAH, and probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT to be approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
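The balancing logic behind a testing threshold can be shown with a deliberately simplified two-strategy model: solve for the pretest probability at which the expected quality-adjusted life-years (QALYs) of "LP" and "no LP" are equal (all QALY values below are invented for illustration, and the model omits most branches of the published decision tree):

```python
# Hypothetical expected QALYs: q_well if no SAH, q_missed if SAH is missed,
# q_treated if SAH is caught by LP; q_lp_harm is the LP's expected disutility.
q_well, q_missed, q_treated, q_lp_harm = 30.0, 10.0, 25.0, 0.05

def ev_no_lp(p):
    """Expected QALYs without LP: disease present with pretest probability p goes missed."""
    return p * q_missed + (1 - p) * q_well

def ev_lp(p):
    """Expected QALYs with LP: disease is caught, but everyone bears the LP disutility."""
    return p * q_treated + (1 - p) * q_well - q_lp_harm

# Setting ev_no_lp(p) = ev_lp(p) and solving gives the testing threshold:
p_star = q_lp_harm / (q_treated - q_missed)
```

Below `p_star`, the expected harm of the procedure outweighs its expected benefit; the paper's 4.3% figure comes from a far richer tree, but the equilibrium condition is the same.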

  6. ADAPTIVE THRESHOLD LOGIC.

    DTIC Science & Technology

    The design and construction of a 16 variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound...and sizes as well as for the 16 input adaptive threshold logic gate. (Author)

  7. Implementation guide for turbidity threshold sampling: principles, procedures, and analysis

    Treesearch

    Jack Lewis; Rand Eads

    2009-01-01

    Turbidity Threshold Sampling uses real-time turbidity and river stage information to automatically collect water quality samples for estimating suspended sediment loads. The system uses a programmable data logger in conjunction with a stage measurement device, a turbidity sensor, and a pumping sampler. Specialized software enables the user to control the sampling...
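
    The triggering logic can be sketched minimally as follows. This covers only the rising-limb case with invented numbers; the actual protocol also samples on falling limbs and incorporates stage data.

```python
def sampling_triggers(turbidity_series, thresholds):
    """Return indices at which a pumped sample would be triggered: the
    first reading at or above each turbidity threshold. A simplified
    sketch of rising-limb logic only, with at most one sample per reading."""
    crossed = set()
    triggers = []
    for i, turb in enumerate(turbidity_series):
        for th in thresholds:
            if th not in crossed and turb >= th:
                crossed.add(th)
                triggers.append(i)
                break  # one pumped sample per reading
    return triggers

# Readings cross the 10- and 25-unit thresholds at indices 1 and 2:
events = sampling_triggers([5, 12, 30, 8], thresholds=[10, 25])
```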

  8. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  9. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    PubMed

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss improves quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the

  10. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlations. This paper is devoted to the development of methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and perform an estimation of both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single and globally valid correlation length. We examine whether any statistics of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as the global HL, ML, or [Formula: see text] should be used. We found, in both 1D simulation and with real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed
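
    A minimal sketch of the Hollingsworth-Lönnberg idea, assuming an isotropic exponential correlation model and noise-free binned covariances (real innovation statistics are noisy and the operational method is more involved):

```python
import numpy as np

def hl_fit(distances, covariances):
    """Fit c(r) = sigma_b^2 * exp(-r / L) to binned innovation covariances
    at separations r > 0 via linear regression on log c(r). The intercept
    extrapolated to r = 0 estimates the background-error variance (any
    excess innovation variance at r = 0 is attributed to observation
    error), and the slope gives the correlation length L."""
    slope, intercept = np.polyfit(np.asarray(distances),
                                  np.log(np.asarray(covariances)), 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic check: covariances generated with sigma_b^2 = 2.0, L = 50 km.
r = np.array([10.0, 20.0, 40.0, 80.0])
c = 2.0 * np.exp(-r / 50.0)
sigma_b2, L = hl_fit(r, c)  # recovers approximately (2.0, 50.0)
```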

  11. Multi-thresholds for fault isolation in the presence of uncertainties.

    PubMed

    Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel

    2016-05-01

    Fault monitoring is an important task in mechatronics. It involves the detection and isolation of faults, which are performed using residuals: numerical values whose admissible intervals define thresholds. A fault is detected when the residuals exceed these thresholds, and each considered fault must activate a unique set of residuals in order to be isolable. However, in the presence of uncertainties, false decisions can occur because certain residuals have low sensitivity to faults. In this paper, an efficient approach for making fault isolation decisions in the presence of uncertainties is proposed. Based on the bond graph tool, the approach systematically generates the relations between residuals and faults. These relations allow estimation of the minimum detectable and isolable fault values, which are then used to calculate isolation thresholds for each residual. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
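
    The residual/threshold isolation step can be illustrated with a toy fault signature table. The faults, residual values, and signatures below are hypothetical, and this sketch ignores the uncertainty handling that is the paper's contribution:

```python
def isolate_faults(residuals, thresholds, signatures):
    """Compute the Boolean pattern of threshold exceedances and match it
    against a fault signature table. A fault is isolable only if the set
    of residuals it activates is unique among all signatures."""
    pattern = tuple(abs(r) > t for r, t in zip(residuals, thresholds))
    return [fault for fault, sig in signatures.items() if sig == pattern]

# Hypothetical signature table: which residuals each fault activates.
SIGNATURES = {
    "sensor fault":   (True, False, True),
    "actuator fault": (True, True, False),
}
candidates = isolate_faults((2.1, 0.2, 1.7), (1.0, 1.0, 1.0), SIGNATURES)
# pattern (True, False, True) matches only "sensor fault"
```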

  12. Threshold Concepts in Biochemistry

    ERIC Educational Resources Information Center

    Loertscher, Jennifer

    2011-01-01

    Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…

  13. Use of LiDAR to define habitat thresholds for forest bird conservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.

    Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.

  14. Use of LiDAR to define habitat thresholds for forest bird conservation

    DOE PAGES

    Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.; ...

    2017-09-01

    Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.

  15. Mitochondrial threshold effects.

    PubMed Central

    Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry

    2003-01-01

    The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494

  16. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    PubMed

    Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
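
    The gap between the raw maximum count (284) and the estimate (328) comes from correcting for imperfect detection. The spatially explicit occupancy model is beyond a short sketch, but the simplest two-survey correction conveys the idea; the counts below are invented, not the experiment's survey data:

```python
def lincoln_petersen(n1, n2, m2):
    """Closed-population abundance estimate from two surveys: n1 objects
    detected in survey 1, n2 in survey 2, and m2 detected in both. A far
    simpler estimator than the paper's spatially explicit occupancy model,
    shown only to illustrate correcting counts for imperfect detection."""
    return n1 * n2 / m2

estimate = lincoln_petersen(250, 260, 200)  # 325.0 objects
```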

  17. Estimating Distribution of Hidden Objects with Drones: From Tennis Balls to Manatees

    PubMed Central

    Martin, Julien; Edwards, Holly H.; Burgess, Matthew A.; Percival, H. Franklin; Fagan, Daniel E.; Gardner, Beth E.; Ortega-Ortiz, Joel G.; Ifju, Peter G.; Evers, Brandon S.; Rambo, Thomas J.

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants. PMID:22761712

  18. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict
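
    The core accounting behind a prey-requirement estimate is a conversion from population energy needs to fish numbers. The figures below are rough illustrative placeholders, not the study's bioenergetic model outputs:

```python
def salmon_required(n_whales, kcal_per_whale_per_day, kcal_per_salmon,
                    days=365):
    """Annual Chinook consumption implied by population-level energy
    needs: (whales x daily kcal x days) / kcal per fish."""
    return n_whales * kcal_per_whale_per_day * days / kcal_per_salmon

# e.g. 87 whales at ~200,000 kcal/day, ~16,000 kcal per adult Chinook
fish_per_year = salmon_required(87, 200_000, 16_000)
```

    Even with placeholder inputs, the estimate runs to hundreds of thousands of fish per year, which is why the abstract describes the requirement as large relative to annual returns and fishery catches.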

  19. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  20. Threshold concepts in prosthetics.

    PubMed

    Hill, Sophie

    2017-12-01

    Curriculum documents identify key concepts within learning prosthetics. Threshold concepts provide an alternative way of viewing the curriculum, focussing on the ways of thinking and practicing within prosthetics. Threshold concepts can be described as an opening to a different way of viewing a concept. This article forms part of a larger study exploring what students and staff experience as difficult in learning about prosthetics. To explore possible threshold concepts within prosthetics. Qualitative, interpretative phenomenological analysis. Data from 18 students and 8 staff at two universities with undergraduate prosthetics and orthotics programmes were generated through interviews and questionnaires. The data were analysed using an interpretative phenomenological analysis approach. Three possible threshold concepts arose from the data: 'how we walk', 'learning to talk' and 'considering the person'. Three potential threshold concepts in prosthetics are suggested with possible implications for prosthetics education. These possible threshold concepts involve changes in both conceptual and ontological knowledge, integrating into the persona of the individual. This integration occurs through the development of memories associated with procedural concepts that combine with disciplinary concepts. Considering the prosthetics curriculum through the lens of threshold concepts enables a focus on how students learn to become prosthetists. Clinical relevance This study provides new insights into how prosthetists learn. This has implications for curriculum design in prosthetics education.

  1. Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2003-01-01

    A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence interval and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of the substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
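
    Under this definition, events are the days at or above the depth threshold and the key site statistic is the mean gap between successive events. A minimal sketch with made-up depths:

```python
def mean_interoccurrence(daily_precip, threshold):
    """Mean interoccurrence interval in days: the average spacing between
    successive days whose precipitation equals or exceeds the threshold."""
    event_days = [i for i, p in enumerate(daily_precip) if p >= threshold]
    gaps = [b - a for a, b in zip(event_days, event_days[1:])]
    return sum(gaps) / len(gaps)

# Six days of (made-up) depths in inches against the 0.5-inch threshold:
# events fall on days 0, 3, and 5, so gaps of 3 and 2 days average to 2.5.
interval = mean_interoccurrence([0.6, 0.0, 0.1, 0.5, 0.0, 0.7], 0.5)
```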

  2. Accelerometer thresholds: Accounting for body mass reduces discrepancies between measures of physical activity for individuals with overweight and obesity.

    PubMed

    Raiber, Lilian; Christensen, Rebecca A G; Jamnik, Veronica K; Kuk, Jennifer L

    2017-01-01

    The objective of this study was to explore whether accelerometer thresholds that are adjusted to account for differences in body mass influence discrepancies between self-report and accelerometer-measured physical activity (PA) volume for individuals with overweight and obesity. We analyzed 6164 adults from the National Health and Nutrition Examination Survey between 2003 and 2006. Established accelerometer thresholds were adjusted to account for differences in body mass to produce a similar energy expenditure (EE) rate to that of individuals with normal weight. Moderate-, vigorous-, and moderate- to vigorous-intensity PA (MVPA) durations were measured using established and adjusted accelerometer thresholds and compared with self-report. Self-reported durations were longer than accelerometer-measured MVPA using established thresholds (normal weight: 57.8 ± 2.4 vs 9.0 ± 0.5 min/day, overweight: 56.1 ± 2.7 vs 7.4 ± 0.5 min/day, and obesity: 46.5 ± 2.2 vs 3.7 ± 0.3 min/day). Durations of subjective and objective PA were negatively associated with body mass index (BMI) (P < 0.05). Using adjusted thresholds increased MVPA durations, and reduced discrepancies between accelerometer and self-report measures for overweight and obese groups by 6.0 ± 0.3 min/day and 17.7 ± 0.8 min/day, respectively (P < 0.05). Using accelerometer thresholds that represent equal EE rates across BMI categories reduced the discrepancies between durations of subjective and objective PA for overweight and obese groups. However, accelerometer-measured PA generally remained shorter than self-reported durations within all BMI categories. Further research may be necessary to improve analytical approaches when using objective measures of PA for individuals with overweight or obesity.
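
    The adjustment can be sketched as rescaling a counts-per-minute cutpoint so it maps to the same absolute energy-expenditure rate at a larger body mass. The proportionality assumption, cutpoint, and reference mass below are illustrative, not the study's equations:

```python
def adjusted_cutpoint(established_cpm, body_mass_kg, reference_mass_kg=70.0):
    """Scale an established counts-per-minute cutpoint so it corresponds
    to the same absolute EE rate for a heavier person, under the
    simplifying (illustrative) assumption that EE rate is proportional
    to accelerometer counts times body mass."""
    return established_cpm * reference_mass_kg / body_mass_kg

# A hypothetical 2020-cpm moderate-intensity cutpoint defined at 70 kg
# drops to 1414 cpm for a 100-kg individual under this assumption.
cutpoint_100kg = adjusted_cutpoint(2020, 100.0)
```

    A lower cutpoint classifies more minutes as MVPA for heavier individuals, which is the direction of the effect the abstract reports.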

  3. Immobilization thresholds of electrofishing relative to fish size

    USGS Publications Warehouse

    Dolan, C.R.; Miranda, L.E.

    2003-01-01

    Fish size and electrical waveforms have frequently been associated with variation in electrofishing effectiveness. Under controlled laboratory conditions, we measured the electrical power required by five electrical waveforms to immobilize eight fish species of diverse sizes and shapes. Fish size was indexed by total body length, surface area, volume, and weight; shape was indexed by the ratio of body length to body depth. Our objectives were to identify immobilization thresholds, elucidate the descriptors of fish size that were best associated with those immobilization thresholds, and determine whether the vulnerability of a species relative to other species remained constant across electrical treatments. The results confirmed that fish size is a key variable controlling the immobilization threshold and further suggested that the size descriptor best related to immobilization is fish volume. The peak power needed to immobilize fish decreased rapidly with increasing fish volume in small fish but decreased slowly for fish larger than 75-100 cm³. Furthermore, when we controlled for size and shape, different waveforms did not favor particular species, possibly because of the overwhelming effect of body size. Many of the immobilization inconsistencies previously attributed to species might simply represent the effect of disparities in body size.

  4. Application of threshold concepts to ecological management problems: occupancy of Golden Eagles in Denali National Park, Alaska: Chapter 5

    USGS Publications Warehouse

    Eaton, Mitchell J.; Martin, Julien; Nichols, James D.; McIntyre, Carol; McCluskie, Maggie C.; Schmutz, Joel A.; Lubow, Bruce L.; Runge, Michael C.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    In this chapter, we demonstrate the application of the various classes of thresholds, detailed in earlier chapters and elsewhere, via an actual but simplified natural resource management case study. We intend our example to provide the reader with the ability to recognize and apply the theoretical concepts of utility, ecological, and decision thresholds to management problems through a formalized decision-analytic process. Our case study concerns the management of human recreational activities in Alaska’s Denali National Park, USA, and the possible impacts of such activities on nesting Golden Eagles, Aquila chrysaetos. Managers desire to allow visitors the greatest amount of access to park lands, provided that eagle nesting-site occupancy is maintained at a level determined to be acceptable by the managers themselves. As these two management objectives are potentially at odds, we treat minimum desired occupancy level as a utility threshold, which then serves to guide the selection of annual management alternatives in the decision process. As human disturbance is not the only factor influencing eagle occupancy, we model nesting-site dynamics as a function of both disturbance and prey availability. We incorporate uncertainty in these dynamics by considering several hypotheses, including a hypothesis that site occupancy is affected only at a threshold level of prey abundance (i.e., an ecological threshold effect). By considering competing management objectives and accounting for two forms of thresholds in the decision process, we are able to determine the optimal number of annual nesting-site restrictions that will produce the greatest long-term benefits for both eagles and humans. With a utility threshold set at 75 occupied sites, out of a total of 90 potential nesting sites, the optimization specified a decision threshold at approximately 80 occupied sites. When current occupancy falls below 80 sites, the recommended decision is to begin restricting

  5. The limits of thresholds: silica and the politics of science, 1935 to 1990.

    PubMed Central

    Markowitz, G; Rosner, D

    1995-01-01

    Since the 1930s threshold limit values have been presented as an objectively established measure of US industrial safety. However, there have been important questions raised regarding the adequacy of these thresholds for protecting workers from silicosis. This paper explores the historical debates over silica threshold limit values and the intense political negotiation that accompanied their establishment. In the 1930s and early 1940s, a coalition of business, public health, insurance, and political interests formed in response to a widely perceived "silicosis crisis." Part of the resulting program aimed at containing the crisis was the establishment of threshold limit values. Yet silicosis cases continued to be documented. By the 1960s these cases had become the basis for a number of revisions to the thresholds. In the 1970s, following a National Institute for Occupational Safety and Health recommendation to lower the threshold limit value for silica and to eliminate sand as an abrasive in blasting, industry fought attempts to make the existing values more stringent. This paper traces the process by which threshold limit values became part of a compromise between the health of workers and the economic interests of industry. PMID:7856788

  6. Stochastic resonance investigation of object detection in images

    NASA Astrophysics Data System (ADS)

    Repperger, Daniel W.; Pinkus, Alan R.; Skipper, Julie A.; Schrider, Christina D.

    2007-02-01

    Object detection in images was conducted using a nonlinear means of improving signal-to-noise ratio termed "stochastic resonance" (SR). In a recent United States patent application, it was shown that arbitrarily large signal-to-noise ratio gains could be realized when a signal detection problem is cast within the context of an SR filter. Signal-to-noise ratio measures were investigated. For a binary object recognition task (friendly versus hostile), the method was implemented by perturbing the recognition algorithm and subsequently thresholding via a computer simulation. To fairly test the efficacy of the proposed algorithm, a unique database of images was constructed by modifying two sample library objects, adjusting their brightness, contrast, and relative size via commercial software to gradually compromise their saliency to identification. The key to the use of the SR method is to produce a small perturbation in the identification algorithm and then to threshold the results, thus improving the overall system's ability to discern objects. A background discussion of the SR method is presented. A standard test is proposed in which object identification algorithms could be fairly compared against each other with respect to their relative performance.
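
    The basic perturb-then-threshold effect can be demonstrated in a few lines: a subthreshold signal is never detected on its own, but adding zero-mean noise lets it cross the threshold on some trials. This is the textbook stochastic resonance demonstration, not the patented filter; the amplitudes are arbitrary:

```python
import random

def detection_rate(signal, threshold, noise_sd, trials=20000, seed=0):
    """Fraction of trials in which a constant signal plus zero-mean
    Gaussian noise exceeds a hard threshold."""
    rng = random.Random(seed)
    hits = sum(signal + rng.gauss(0.0, noise_sd) > threshold
               for _ in range(trials))
    return hits / trials

# A 0.8 signal never crosses a 1.0 threshold alone, but does with noise:
silent = detection_rate(0.8, 1.0, 0.0)  # 0.0
noisy = detection_rate(0.8, 1.0, 0.3)   # roughly 0.25
```

    Past an optimal noise level the detection statistics degrade again, which is why SR performance is tuned rather than maximized by ever-larger perturbations.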

  7. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    PubMed

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique--regional BIA--with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.
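    The multiple-regression step described above can be sketched as follows; the numbers are synthetic stand-ins for the study's measurements (abdominal fat area, waist circumference, family history, fasting glucose), so only the mechanics, not the reported r values, carry over:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 39  # same sample size as the study

# hypothetical illustrative data, not the study's measurements
ab_fat = rng.normal(100.0, 25.0, n)              # abdominal fat area (cm^2)
waist = 0.3 * ab_fat + rng.normal(80.0, 5.0, n)  # waist circumference (cm)
family = rng.integers(0, 2, n).astype(float)     # family history of diabetes
glucose = 70 + 0.15 * ab_fat + 0.1 * waist + 5 * family + rng.normal(0, 4, n)

def multiple_r(y, *predictors):
    """Multiple correlation coefficient from an ordinary least-squares fit."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(np.sqrt(1.0 - resid.var() / y.var()))

r_fat_only = multiple_r(glucose, ab_fat)
r_full = multiple_r(glucose, ab_fat, waist, family)
# adding waist circumference and family history can only raise the in-sample r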

  8. Vitamin D supplementation increases calcium absorption without a threshold effect

    USDA-ARS?s Scientific Manuscript database

    The maximal calcium absorption in response to vitamin D has been proposed as a biomarker for vitamin D sufficiency. Our objective was to determine whether there is a threshold beyond which increasing doses of vitamin D, or concentrations of serum 25-hydroxyvitamin D [25(OH)D], no longer increase cal...

  9. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    NASA Astrophysics Data System (ADS)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages many signatures of a target; most radar systems use radar cross section (RCS) to discriminate targets and classify them with regard to stealth. In wartime, a target's RCS has to be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects such as a cylinder, a truncated cone (frustum) and a circular flat plate is estimated with respect to parameters such as size, frequency and aspect angle. Because of the difficulty of predicting the RCS exactly, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, where each has its own strengths and weaknesses. Therefore, the analysis given in this study is based purely on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method to determine the RCS of simple models. Beyond the simple models, the RCS of missile-shaped and rocket-shaped models, obtained by cascading the simple objects, has also been computed with backscatter using Matlab simulation. Rectangular plots of RCS in dBsm versus aspect angle are obtained for simple and missile-shaped objects. The treatment of RCS in this study is based on narrow band.
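    For the simplest of the shapes above, the standard optical-region physical-optics result for a flat plate at normal incidence is σ = 4πA²/λ². A small sketch (the plate area and frequency below are arbitrary example values, not the paper's models):

```python
import math

def rcs_flat_plate_normal(area_m2, freq_hz):
    """Monostatic RCS of a perfectly conducting flat plate at normal
    incidence in the optical region: sigma = 4*pi*A^2 / lambda^2.
    Returns (sigma in m^2, sigma in dBsm)."""
    lam = 3.0e8 / freq_hz                       # wavelength (m)
    sigma = 4.0 * math.pi * area_m2**2 / lam**2
    return sigma, 10.0 * math.log10(sigma)

sigma, dbsm = rcs_flat_plate_normal(area_m2=0.1, freq_hz=10e9)  # 0.1 m^2 at X-band
```

    Aspect-angle sweeps for the plate, frustum and cylinder add Bessel-function and sinc factors on top of this normal-incidence peak.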

  10. Sri Lankan FRAX model and country-specific intervention thresholds.

    PubMed

    Lekamwasam, Sarath

    2013-01-01

    There is a wide variation in fracture probabilities estimated by Asian FRAX models, although the outputs of South Asian models are concordant. Clinicians can choose either fixed or age-specific intervention thresholds when making treatment decisions in postmenopausal women. The cost-effectiveness of such an approach, however, needs to be addressed. This study examined suitable fracture probability intervention thresholds (ITs) for Sri Lanka, based on the Sri Lankan FRAX model. Fracture probabilities were estimated and compared using all Asian FRAX models for a postmenopausal woman with a BMI of 25 kg/m² and no clinical risk factors apart from a fragility fracture. Age-specific ITs were estimated based on the Sri Lankan FRAX model using the method followed by the National Osteoporosis Guideline Group in the UK. Using the age-specific ITs as the reference standard, suitable fixed ITs were also estimated. Fracture probabilities estimated by different Asian FRAX models varied widely. The Japanese and Taiwanese models gave higher fracture probabilities, while the Chinese, Philippine, and Indonesian models gave lower fracture probabilities. The outputs of the remaining FRAX models were generally similar. Age-specific ITs of major osteoporotic fracture probabilities (MOFP) based on the Sri Lankan FRAX model varied from 2.6 to 18% between 50 and 90 years. ITs of hip fracture probabilities (HFP) varied from 0.4 to 6.5% between 50 and 90 years. In finding fixed ITs, a MOFP of 11% and an HFP of 3.5% gave the lowest misclassification and the highest agreement. The Sri Lankan FRAX model behaves similarly to other Asian FRAX models such as the Indian, Singapore-Indian, Thai, and South Korean models. Clinicians may use either the fixed or the age-specific ITs in making therapeutic decisions in postmenopausal women. The economic aspects of such decisions, however, need to be considered.

  11. Chemical sensing thresholds for mine detection dogs

    NASA Astrophysics Data System (ADS)

    Phelan, James M.; Barnett, James L.

    2002-08-01

    Mine detection dogs have been found to be an effective method to locate buried landmines. The capabilities of the canine olfaction method arise from a complex combination of training and the dog's inherent capacity for odor detection. The purpose of this effort was to explore the detection thresholds of a limited group of dogs that were trained specifically for landmine detection. Soils were contaminated with TNT and 2,4-DNT to develop chemical vapor standards to present to the dogs. The soils contained ultra-trace levels of TNT and DNT, which produce extremely low vapor levels. Three groups of dogs were presented the headspace vapors from the contaminated soils in the work environment of each dog group. One positive sample was placed among several containing clean soil, and the location and vapor source (strength, type) were frequently changed. The detection thresholds for the dogs were determined from measured and extrapolated dilutions of the soil chemical residues and from soil vapor values estimated using phase-partitioning relationships. The results showed significant variance in dog sensing thresholds: some dogs could sense the lowest levels, while others had trouble with even the highest source. The remarkable ultra-trace levels detectable by the dogs are consistent with the ultra-trace chemical residues derived from buried landmines; however, poor performance may go unnoticed without periodic challenge tests at levels consistent with performance requirements.

  12. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  13. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams

    USGS Publications Warehouse

    Black, R.W.; Moran, P.W.; Frankforter, J.D.

    2011-01-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. © 2010 The Author(s).

  14. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams.

    PubMed

    Black, Robert W; Moran, Patrick W; Frankforter, Jill D

    2011-04-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.

  15. Rainfall thresholds for the triggering of landslides in Slovenia

    NASA Astrophysics Data System (ADS)

    Peternel, Tina; Jemec Auflič, Mateja; Rosi, Ascanio; Segoni, Samuele; Komac, Marko; Casagli, Nicola

    2017-04-01

    Both worldwide and in Slovenia, precipitation and related phenomena represent one of the most important triggering factors for the occurrence of slope mass movements. In the past decade, extreme rainfall events, in which a very large amount of precipitation falls in a relatively short period, have become increasingly important and more frequent, causing numerous undesirable consequences. Intense rainstorms cause flash floods and mostly trigger shallow landslides and soil slips. On the other hand, the damage caused by long-lasting rainstorms depends on the region's adaptation and its capacity to store or infiltrate the excess rainwater. The amount, and consequently the intensity, of daily precipitation that can cause floods in the eastern part of Slovenia is a rather common event for the north-western part of the country. Likewise, the effect of rainfall depends strongly on prior soil moisture, periods of full soil saturation, shifts in groundwater levels due to the slow melting of snow, the growing period, etc. Landslides could be identified, and to some extent prevented, with better knowledge of the relation between landslides and rainfall. In this paper the definition of rainfall thresholds for rainfall-induced landslides in Slovenia is presented. The thresholds have been calculated from approximately 900 landslide records and the corresponding rainfall amounts, collected from 41 rain gauges all over the country. The thresholds have been defined by (1) use of an existing procedure characterized by a high degree of objectiveness and (2) software that was developed for a test site with very different geological and climatic characteristics (Tuscany, central Italy). Firstly, a single national threshold was defined; later, the country was divided into four zones on the basis of the major river basins, and a single threshold was calculated for each of them. Validation of the calculated

  16. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    PubMed

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m 3 decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
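    The paper's central point, that exposure estimation error smooths a true step-function C-R relation into an apparently monotone curve, is easy to reproduce in a toy simulation (all numbers below are invented for illustration, not the EPA Los Angeles data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
true_c = rng.uniform(0.0, 30.0, n)              # true concentrations (ug/m^3)
threshold = 15.0
risk = np.where(true_c > threshold, 0.01, 0.0)  # step-function true C-R
y = rng.random(n) < risk                        # binary health outcomes
est_c = true_c + rng.normal(0.0, 5.0, n)        # exposure estimated with error

def binned_response(c, outcomes, edges):
    """Empirical response rate within concentration bins."""
    return np.array([outcomes[(c >= lo) & (c < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

edges = np.linspace(0.0, 30.0, 7)               # 5 ug/m^3 bins
cr_true = binned_response(true_c, y, edges)     # exactly zero risk below 15
cr_est = binned_response(est_c, y, edges)       # apparent risk well below 15
```

    Binning on the error-free exposure recovers the step; binning on the noisy estimate shows risk rising smoothly below the true threshold, exactly the artifact the abstract describes.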

  17. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    PubMed

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
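    A minimal sketch of the core idea, fitting the conditional exceedance probability with a logistic model; the covariate, coefficients and data are simulated for illustration (the paper's partial-likelihood handling of dependence is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
area_avg = rng.gamma(2.0, 2.0, n)   # covariate: area-averaged rain rate (mm/h)
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 0.8 * area_avg)))
exceeds = (rng.random(n) < p_true).astype(float)  # pixel rain rate > threshold?

def fit_logistic(x, y, lr=0.1, steps=8000):
    """Plain gradient ascent on the Bernoulli log-likelihood of
    P(y=1 | x) = sigmoid(b0 + b1*x)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

b0, b1 = fit_logistic(area_avg, exceeds)
# the fit approximately recovers the generating coefficients (-3.0, 0.8)
```

    The fitted model then gives, for each pixel, the probability that rain rate exceeds the fixed threshold given the covariate value.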

  19. Olfactory Impairment in Chronic Rhinosinusitis Using Threshold, Discrimination, and Identification Scores

    PubMed Central

    Kohli, Preeti; Storck, Kristina A.; Schlosser, Rodney J.

    2016-01-01

    Differences in testing modalities and cut-points used to define olfactory dysfunction contribute to the wide variability in estimating the prevalence of olfactory dysfunction in chronic rhinosinusitis (CRS). The aim of this study is to report the prevalence of olfactory impairment using each component of the Sniffin’ Sticks test (threshold, discrimination, identification, and total score) with age-adjusted and ideal cut-points from normative populations. Patients meeting diagnostic criteria for CRS were enrolled from rhinology clinics at a tertiary academic center. Olfaction was assessed using the Sniffin’ Sticks test. The study population consisted of 110 patients. The prevalence of normosmia, hyposmia, and anosmia using the total Sniffin’ Sticks score was 41.8%, 20.0%, and 38.2% using age-appropriate cut-points and 20.9%, 40.9%, and 38.2% using ideal cut-points. Olfactory impairment estimates for each dimension mirrored these findings, with threshold yielding the highest values. Threshold, discrimination, and identification scores were also significantly correlated with each other (P < 0.001). In addition, computed tomography scores, asthma, allergy, and diabetes were found to be associated with olfactory dysfunction. In conclusion, the prevalence of olfactory dysfunction depends on the olfactory dimension tested and on whether age-adjusted cut-points are used. The method of olfactory testing should be chosen based upon specific clinical and research goals. PMID:27469973


  20. Rainfall Threshold for Flash Flood Early Warning Based on Rational Equation: A Case Study of Zuojiao Watershed in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.

    2017-12-01

    Rainfall thresholds play an important role in flash flood warning. A simple and easy method, using the Rational Equation to calculate rainfall thresholds, is proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of rainfall loss, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for the watershed soil moisture. To demonstrate the method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method approximated those obtained from CFFSE and were in accordance with the observed rainfall during flash flood events, so the calculated results are reasonable and the method is effective. This study provides a quick and convenient way for grass-roots staff to calculate rainfall thresholds for flash flood warning and offers technical support for estimating rainfall thresholds.
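    The deduction can be sketched numerically: inverting the Rational Equation Q = C·i·A/3.6 (Q in m³/s, i in mm/h, A in km²) gives the net critical rainfall, to which the losses are added back. The watershed numbers below are hypothetical, not Zuojiao's:

```python
def critical_rainfall_mm(q_crit_m3s, runoff_coeff, area_km2, duration_h, losses_mm):
    """Critical rainfall depth from the Rational Equation Q = C*i*A/3.6:
    invert for the net intensity, convert to a depth over the storm
    duration, then add back losses (depression storage, vegetation
    interception, soil infiltration)."""
    i_crit_mm_h = 3.6 * q_crit_m3s / (runoff_coeff * area_km2)
    net_rain_mm = i_crit_mm_h * duration_h
    return net_rain_mm + losses_mm

# hypothetical example: 50 m^3/s critical flow, C = 0.5, 36 km^2, 1 h storm
threshold_mm = critical_rainfall_mm(50.0, 0.5, 36.0, 1.0, losses_mm=12.0)  # -> 22.0
```

    Adjusting the loss term for antecedent soil moisture then yields wetter- and drier-condition thresholds from the same critical flow.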

  1. Aging and the discrimination of object weight.

    PubMed

    Norman, J Farley; Norman, Hideko F; Swindle, Jessica M; Jennings, L RaShae; Bartholomew, Ashley N

    2009-01-01

    A single experiment was carried out to evaluate the ability of younger and older observers to discriminate object weights. A 2-alternative forced-choice variant of the method of constant stimuli was used to obtain difference thresholds for lifted weight for twelve younger (mean age = 21.5 years) and twelve older (mean age = 71.3 years) adults. The standard weight was 100 g, whereas the test weights ranged from 85 to 115 g. The difference thresholds of the older observers were 57.6% higher than those of the younger observers: the average difference thresholds were 10.4% and 6.6% of the standard for the older and younger observers, respectively. The current findings of an age-related deterioration in the ability to discriminate lifted weight extend and disambiguate the results of earlier research.
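    The threshold computation behind such a 2AFC constant-stimuli experiment can be sketched as follows; the response proportions are invented for illustration, not the paper's data. Here the difference threshold is taken as half the 25-75% interquartile span of the psychometric function:

```python
import numpy as np

# hypothetical 2AFC data: test weight (g) vs proportion judged heavier
# than the 100 g standard
weights = np.array([85.0, 90.0, 95.0, 100.0, 105.0, 110.0, 115.0])
p_heavier = np.array([0.05, 0.15, 0.35, 0.50, 0.68, 0.86, 0.95])

def difference_threshold(w, p):
    """Half the interquartile range of the psychometric function:
    (weight at p = 0.75 minus weight at p = 0.25) / 2, by linear
    interpolation (a shortcut; fitting a cumulative normal is also common)."""
    w75 = np.interp(0.75, p, w)   # p must be increasing for np.interp
    w25 = np.interp(0.25, p, w)
    return (w75 - w25) / 2.0

dl = difference_threshold(weights, p_heavier)
weber_fraction = dl / 100.0       # threshold as a fraction of the standard
```

    Expressing the threshold as a percentage of the 100 g standard is how the 10.4% and 6.6% figures in the abstract are stated.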

  2. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any size of the white noise. Besides, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with and without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
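    A hedged sketch of the kind of threshold involved: for classic stochastic SIR models with multiplicative noise on transmission, thresholds of the form "deterministic R0 minus a noise correction" appear in the literature. The paper's saturated-incidence expression differs, so both the formula and the toy simulation below are illustrative only:

```python
import numpy as np

def stochastic_r0(beta, gamma, mu, sigma):
    """Deterministic R0 = beta/(gamma+mu) minus a white-noise correction
    sigma^2/(2*(gamma+mu)) -- the general shape of thresholds reported for
    stochastic SIR models (illustrative, not the paper's exact formula)."""
    return beta / (gamma + mu) - sigma**2 / (2.0 * (gamma + mu))

def final_infected(beta, gamma, mu, sigma, i0=0.01, dt=0.01, steps=20_000, seed=0):
    """Euler-Maruyama path of the infected fraction in a crude SIR sketch
    with noise entering through the transmission term."""
    rng = np.random.default_rng(seed)
    s, i = 1.0 - i0, i0
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        trans = beta * s * i * dt + sigma * s * i * dw
        s = min(max(s + mu * (1.0 - s) * dt - trans, 0.0), 1.0)
        i = min(max(i + trans - (gamma + mu) * i * dt, 0.0), 1.0)
    return i

r0_det = stochastic_r0(0.4, 0.2, 0.1, sigma=0.0)    # 4/3 > 1
r0_noisy = stochastic_r0(0.4, 0.2, 0.1, sigma=0.5)  # drops below 1
```

    The point the comparison makes is the one in the abstract: sufficiently strong white noise can push the threshold below 1 and drive the disease extinct even when the deterministic R0 exceeds 1.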

  3. A novel threshold criterion in transcranial motor evoked potentials during surgery for gliomas close to the motor pathway.

    PubMed

    Abboud, Tammam; Schaper, Miriam; Dührsen, Lasse; Schwarz, Cindy; Schmidt, Nils Ole; Westphal, Manfred; Martens, Tobias

    2016-10-01

    OBJECTIVE Warning criteria for monitoring of motor evoked potentials (MEP) after direct cortical stimulation during surgery for supratentorial tumors have been well described. However, little is known about the value of MEP after transcranial electrical stimulation (TES) in predicting postoperative motor deficit when monitoring threshold level. The authors aimed to evaluate the feasibility and value of this method in glioma surgery by using a new approach for interpreting changes in threshold level involving contra- and ipsilateral MEP. METHODS Between November 2013 and December 2014, 93 patients underwent TES-MEP monitoring during resection of gliomas located close to central motor pathways but not involving the primary motor cortex. The MEP were elicited by transcranial repetitive anodal train stimulation. Bilateral MEP were continuously evaluated to assess percentage increase of threshold level (minimum voltage needed to evoke a stable motor response from each of the muscles being monitored) from the baseline set before dural opening. An increase in threshold level on the contralateral side (facial, arm, or leg muscles contralateral to the affected hemisphere) of more than 20% beyond the percentage increase on the ipsilateral side (facial, arm, or leg muscles ipsilateral to the affected hemisphere) was considered a significant alteration. Recorded alterations were subsequently correlated with postoperative neurological deterioration and MRI findings. RESULTS TES-MEP could be elicited in all patients, including those with recurrent glioma (31 patients) and preoperative paresis (20 patients). Five of 73 patients without preoperative paresis showed a significant increase in threshold level, and all of them developed new paresis postoperatively (transient in 4 patients and permanent in 1 patient). Eight of 20 patients with preoperative paresis showed a significant increase in threshold level, and all of them developed postoperative neurological deterioration

  4. Histotripsy beyond the “Intrinsic” Cavitation Threshold using Very Short Ultrasound Pulses: “Microtripsy”

    PubMed Central

    Lin, Kuang-Wei; Kim, Yohan; Maxwell, Adam D.; Wang, Tzu-Yin; Hall, Timothy L.; Xu, Zhen; Fowlkes, J. Brian; Cain, Charles A.

    2014-01-01

    Histotripsy produces tissue fractionation through dense energetic bubble clouds generated by short, high-pressure, ultrasound pulses. Conventional histotripsy treatments have used longer pulses from 3 to 10 cycles wherein the lesion-producing bubble cloud generation depends on the pressure-release scattering of very high peak positive shock fronts from previously initiated, sparsely distributed bubbles (the “shock-scattering” mechanism). In our recent work, the peak negative pressure (P−) for generation of dense bubble clouds directly by a single negative half cycle, the “intrinsic threshold,” was measured. In this paper, the dense bubble clouds and resulting lesions (in RBC phantoms and canine tissues) generated by these supra-intrinsic threshold pulses were studied. A 32-element, PZT-8, 500 kHz therapy transducer was used to generate very short (< 2 cycles) histotripsy pulses at a pulse repetition frequency (PRF) of 1 Hz and P− from 24.5 to 80.7 MPa. The results showed that the spatial extent of the histotripsy-induced lesions increased as the applied P− increased, and the sizes of these lesions corresponded well to the estimates of the focal regions above the intrinsic cavitation threshold, at least in the lower pressure regime (P− = 26–35 MPa). The average sizes for the smallest reproducible lesions were approximately 0.9 × 1.7 mm (lateral × axial), significantly smaller than the −6dB beamwidth of the transducer (1.8 × 4.0 mm). These results suggest that, using the intrinsic threshold mechanism, well-confined and microscopic lesions can be precisely generated and their spatial extent can be estimated based on the fraction of the focal region exceeding the intrinsic cavitation threshold. Since the supra-threshold portion of the negative half cycle can be precisely controlled, lesions considerably less than a wavelength are easily produced, hence the term “microtripsy.” PMID:24474132

  5. Threshold factorization redux

    NASA Astrophysics Data System (ADS)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  6. Multisampling suprathreshold perimetry: a comparison with conventional suprathreshold and full-threshold strategies by computer simulation.

    PubMed

    Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David

    2003-06-01

    To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
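    The pass-criterion arithmetic can be made concrete. Under a "first to three" rule (a location is classified normal after 3 seen stimuli, defective after 3 missed), the classification probabilities follow a negative-binomial sum; comparing it with a single-presentation rule shows why multisampling suppresses false defects. This is a sketch of the probability model only, not the authors' full perimetry simulation:

```python
from math import comb

def p_defective_multisample(p_seen, need=3):
    """P(3 misses occur before 3 'seen' responses) when each stimulus is
    seen independently with probability p_seen: sum over the number of
    seen responses (0..2) preceding the third miss."""
    q = 1.0 - p_seen
    return sum(comb(need - 1 + j, j) * q**need * p_seen**j for j in range(need))

def p_defective_single(p_seen):
    """One presentation per location: defective iff the stimulus is missed."""
    return 1.0 - p_seen

# a reliably-seeing normal location (p_seen = 0.95):
fp_multi = p_defective_multisample(0.95)   # ~0.001
fp_single = p_defective_single(0.95)       # 0.05
```

    At p_seen = 0.5 the rule is symmetric (probability 0.5 either way), while for near-normal locations the multisampling false-positive rate is more than an order of magnitude below the single-presentation rate, at the cost of up to five presentations per location.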

  7. Comparison between intensity-duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    NASA Astrophysics Data System (ADS)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software package called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviation in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively unexplored research topic.
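    The core of the SIGMA idea, flagging rainfall values that exceed a multiple of the standard deviation of the historical series, can be illustrated with a minimal sketch. The function name, the toy series, and the fixed 2σ cutoff are illustrative assumptions, not the published model, which works on cumulated rainfall over multiple durations:

```python
import statistics

def sigma_exceedances(rainfall_series, k=2.0):
    """Flag values more than k standard deviations above the mean of a
    rainfall series (a toy version of a sigma-based threshold)."""
    mu = statistics.mean(rainfall_series)
    sigma = statistics.stdev(rainfall_series)
    threshold = mu + k * sigma
    return threshold, [x for x in rainfall_series if x > threshold]

# Ordinary daily totals (mm) with one extraordinary event
series = [5, 8, 3, 0, 6, 4, 7, 2, 90, 5, 6, 3]
threshold, extreme = sigma_exceedances(series)  # only 90 mm exceeds 2 sigma
```

    In the real model, events above the σ-based threshold would be treated as potential landslide triggers, while everything below is classified as ordinary rainfall.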

  8. Nonlinear Dynamic Modeling of Neuron Action Potential Threshold During Synaptically Driven Broadband Intracellular Activity

    PubMed Central

    Roach, Shane M.; Song, Dong; Berger, Theodore W.

    2012-01-01

    Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947

  9. The Relationship between MOC Reflex and Masked Threshold

    PubMed Central

    Garinis, Angela; Werner, Lynne; Abdala, Carolina

    2011-01-01

    Otoacoustic emission (OAE) amplitude can be reduced by acoustic stimulation. This effect is produced by the medial olivocochlear (MOC) reflex. Past studies have shown that the MOC reflex is related to listening in noise and attention. In the present study, the relationship between the strength of the contralateral MOC reflex and masked threshold was investigated in 19 adults. Detection thresholds were determined for a 1000-Hz, 300-ms tone presented simultaneously with one repetition of a 300-ms masker in an ongoing train of 300-ms masker bursts at 600-ms intervals. Three masking conditions were tested: 1) broadband noise, 2) a fixed-frequency 4-tone complex masker, and 3) a random-frequency 4-tone complex masker. Broadband noise was expected to produce energetic masking, and the tonal maskers were expected to produce informational masking in some listeners. DPOAEs were recorded at fine frequency intervals from 500 to 4000 Hz, with and without contralateral acoustic stimulation. MOC reflex strength was estimated as a reduction in baseline level and a shift in frequency of DPOAE fine-structure maxima near 1000 Hz. MOC reflex and psychophysical testing were completed in separate sessions. Individuals with poorer thresholds in broadband noise and in random-frequency maskers were found to have stronger MOC reflexes. PMID:21878379

  10. Development of threshold values for a seagrass epiphyte ...

    EPA Pesticide Factsheets

    Epiphytes on seagrasses have been studied for more than 50 years, and proposed as an indicator of anthropogenic nutrient enrichment for over 30 years. Epiphytes have been correlated with seagrass declines, causally related to nutrient additions in both field and mesocosm experiments, and have quantifiable impacts on light available to host plants. An extensive review of seagrass epiphyte literature was conducted to determine whether seagrass epiphyte metrics can be used as a biological indicator for nutrient impacts. While a wide variety of epiphyte metrics have been used by authors, epiphyte biomass as biomass per unit seagrass biomass may be the most effective epiphyte indicator. Regression analyses of epiphyte versus seagrass response metrics were used to estimate values representing potential thresholds for environmental concern. Median epiphyte loads associated with 25 and 50% reduction in seagrass biomass, density and productivity are proposed as potential thresholds. Location-specific modifying factors (grazing pressure, seagrass species) that cause variation in response patterns are the greatest challenge to regional scale applicability of threshold values.

  11. Aging deteriorated perception of urge-to-cough without changing cough reflex threshold to citric acid in female never-smokers.

    PubMed

    Ebihara, Satoru; Ebihara, Takae; Kanezaki, Masashi; Gui, Peijun; Yamasaki, Miyako; Arai, Hiroyuki; Kohzuki, Masahiro

    2011-06-28

    The effect of aging on the cognitive aspect of cough has not yet been studied. The purpose of this study was to investigate the effect of aging on the perception of urge-to-cough in healthy individuals. Fourteen young, female, healthy never-smokers were recruited via public postings. Twelve elderly, female, healthy never-smokers were recruited from a nursing home residence. The cough reflex threshold and the urge-to-cough were evaluated by inhalation of citric acid. The cough reflex sensitivities were defined as the lowest concentration of citric acid that elicited two or more coughs (C2) and five or more coughs (C5). The urge-to-cough was evaluated using a modified Borg scale. There was no significant difference in the cough reflex threshold to citric acid between young and elderly subjects. The urge-to-cough scores at the concentrations of C2 and C5 were significantly smaller in the elderly than in the young subjects. The urge-to-cough log-log slope in elderly subjects (0.73 ± 0.71 point · L/g) was significantly shallower than that of young subjects (1.35 ± 0.53 point · L/g, p < 0.01). There were no significant differences in the estimated urge-to-cough threshold between young and elderly subjects. The cough reflex threshold did not differ between young and elderly subjects, whereas the perception of urge-to-cough was significantly decreased in elderly female never-smokers. Objective monitoring of cough might be important in elderly people.

  12. Evaluation of a threshold-based model of fatigue in gamma titanium aluminide following impact damage

    NASA Astrophysics Data System (ADS)

    Harding, Trevor Scott

    2000-10-01

    Recent interest in gamma titanium aluminide (gamma-TiAl) for use in gas turbine engine applications has centered on the low density and good elevated temperature strength retention of gamma-TiAl compared to current materials. However, the relatively low ductility and fracture toughness of gamma-TiAl leads to serious concerns regarding its ability to resist impact damage. Furthermore, the limited fatigue crack growth resistance of gamma-TiAl means that the potential for fatigue failures resulting from impact damage is real if a damage tolerant design approach is used. A threshold-based design approach may be required if fatigue crack growth from potential impact sites is to be avoided. The objective of the present research is to examine the feasibility of a threshold-based approach for the design of a gamma-TiAl low-pressure turbine blade subjected to both assembly-related impact damage and foreign object damage. Specimens of three different gamma-TiAl alloys were damaged in such a way as to simulate anticipated impact damage for a turbine blade. Step-loading fatigue tests were conducted at both room temperature and 600°C. In terms of the assembly-related impact damage, the results indicate that there is reasonably good agreement between the threshold-based predictions of the fatigue strength of damaged specimens and the measured data. However, some discrepancies do exist. In the case of very lightly damaged specimens, prediction of the resulting fatigue strength requires that a very conservative small-crack fatigue threshold be used. Consequently, the allowable design conditions are significantly reduced. For severely damaged specimens, an analytical approach found that the potential effects of residual stresses may be related to the discrepancies observed between the threshold-based model and measured fatigue strength data. In the case of foreign object damage, a good correlation was observed between impacts resulting in large cracks and a long-crack threshold

  13. Objects of attention, objects of perception.

    PubMed

    Avrahami, J

    1999-11-01

    Four experiments were conducted to explore the notion of objects in perception. Taking as a starting point the effects of display content on rapid attention transfer and manipulating curvature, closure, and processing time, a link between objects of attention and objects of perception is proposed. In Experiment 1, a number of parallel, equally spaced, straight lines facilitated attention transfer along the lines, relative to transfer across the lines. In Experiment 2, with curved, closed-contour shapes, no "same-object" facilitation was observed. However, when a longer time interval was provided, in Experiment 3, a same-object advantage started to emerge. In Experiment 4, using the same curved shapes but in a non-speeded distance estimation task, a strong effect of objects was observed. It is argued that attention transfer is facilitated by line tracing but that line tracing is encouraged by objects.

  14. Higgs boson gluon-fusion production at threshold in N3LO QCD

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2014-09-02

    We present the cross-section for the threshold production of the Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N3LO) in perturbative QCD. Furthermore, we present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N3LO, which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.

  15. A study of the high-frequency hearing thresholds of dentistry professionals

    PubMed Central

    Lopes, Andréa Cintra; de Melo, Ana Dolores Passarelli; Santos, Cibele Carmelo

    2012-01-01

    Summary Introduction: In dentistry practice, dentists are exposed to harmful effects caused by several factors, such as the noise produced by their work instruments. In 1959, the American Dental Association recommended periodical hearing assessments and the use of ear protectors. Acquiring more information regarding dentists', dental nurses', and prosthodontists' hearing abilities is necessary to propose prevention measures and early treatment strategies. Objective: To investigate the auditory thresholds of dentists, dental nurses, and prosthodontists. Method: In this clinical and experimental study, 44 dentists (Group I; GI), 36 dental nurses (Group II; GII), and 28 prosthodontists (Group III; GIII) were included, for a total of 108 professionals. The procedures performed included a specific interview, ear canal inspection, conventional and high-frequency threshold audiometry, a speech reception threshold test, and an acoustic impedance test. Results: In the 3 groups tested, comparison between the mean hearing thresholds provided evidence of worsened hearing ability with increasing frequency. For the tritonal mean at 500 to 2,000 Hz and 3,000 to 6,000 Hz, GIII presented the worst thresholds. For the mean of the high frequencies (9,000 and 16,000 Hz), GII presented the worst thresholds. Conclusion: The conventional hearing threshold evaluation did not demonstrate alterations in the 3 groups tested; however, complementary tests such as high-frequency audiometry provided greater efficacy in the early detection of hearing problems, since this population's hearing loss impaired hearing ability at frequencies that are not covered by conventional tests. Therefore, we emphasize the need to include high-frequency threshold audiometry in the routine hearing assessment in combination with other audiological tests. PMID:25991940

  16. 40 CFR 68.115 - Threshold determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Threshold determination. 68.115... § 68.115 Threshold determination. (a) A threshold quantity of a regulated substance listed in § 68.130... process exceeds the threshold. (b) For the purposes of determining whether more than a threshold quantity...

  17. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    PubMed

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, and herd × year of calving, maternal permanent environment, and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. The genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison, such as mean squared error and correlations between observed and predicted calving ease scores as well as between estimated breeding values, were estimated from 85,118 calving records. The results showed few differences between linear and threshold models, even though correlations between estimated breeding values from subsets of data for sires with progeny were 17 and 23% greater for direct and maternal genetic effects, respectively, with the linear model than with the threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  18. Calibration and validation of rainfall thresholds for shallow landslide forecasting in Sicily, southern Italy

    NASA Astrophysics Data System (ADS)

    Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.

    2015-01-01

    Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that have resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011, to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that have resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and the skill scores caused by lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximization of correct predictions and minimization of incorrect predictions, including missed and false alarms. We expect that this methodology will increase the reliability of rainfall thresholds, fostering their use in operational early warning systems for regional shallow landslide forecasting.

  19. [Hypercholesterolemia-related cardiovascular risk: a continuum from a notion of normality, intervention threshold and therapeutic objectives].

    PubMed

    Scheen, A J

    1999-01-01

    There appears to be a continuum between cardiovascular risk and the level of blood cholesterol, which hinders the definition of normal values, intervention thresholds, and therapeutic goals. The increasingly convincing evidence provided by evidence-based medicine should be weighed against pharmaco-economic constraints, in order to focus efforts and resources first on the target population at highest risk. Unfortunately, the therapeutic strategy concerns a rather high percentage of the population of industrialized countries.

  20. Continuous Seismic Threshold Monitoring

    DTIC Science & Technology

    1992-05-31

    Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic

  1. Sensory Processing Relates to Attachment to Childhood Comfort Objects of College Students

    ERIC Educational Resources Information Center

    Kalpidou, Maria

    2012-01-01

    The author tested the hypothesis that attachment to comfort objects is based on the sensory processing characteristics of the individual. Fifty-two undergraduate students with and without a childhood comfort object reported sensory responses and performed a tactile threshold task. Those with a comfort object described their object and rated their…

  2. Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use.

    PubMed

    Culyer, Anthony J

    2016-10-01

    There is misunderstanding about both the meaning and the role of cost-effectiveness thresholds in policy decision making. This article dissects the main issues by use of a bookshelf metaphor. Its main conclusions are as follows: it must be possible to compare interventions in terms of their impact on a common measure of health; mere effectiveness is not a persuasive case for inclusion in public insurance plans; public health advocates need to address issues of relative effectiveness; a 'first best' benchmark or threshold ratio of health gain to expenditure identifies the least effective intervention that should be included in a public insurance plan; the reciprocal of this ratio - the 'first best' cost-effectiveness threshold - will rise or fall as the health budget rises or falls (ceteris paribus); setting thresholds too high or too low costs lives; failure to set any cost-effectiveness threshold at all also involves avertable deaths and morbidity; the threshold cannot be set independently of the health budget; the threshold can be approached from either the demand side or the supply side - the two are equivalent only in a health-maximising equilibrium; the supply-side approach generates an estimate of a 'second best' cost-effectiveness threshold that is higher than the 'first best'; the second best threshold is the one generally to be preferred in decisions about adding or subtracting interventions in an established public insurance package; multiple thresholds are implied by systems having distinct and separable health budgets; disinvestment involves eliminating effective technologies from the insured bundle; differential weighting of beneficiaries' health gains may affect the threshold; anonymity and identity are factors that may affect the interpretation of the threshold; the true opportunity cost of health care in a community, where the effectiveness of interventions is determined by their impact on health, is not to be measured in money - but in health

  3. Flood return level analysis of Peaks over Threshold series under changing climate

    NASA Astrophysics Data System (ADS)

    Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.

    2016-12-01

    Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis under the stationarity assumption has been challenged by changing environments. A method that takes the nonstationarity context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. For POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distributional assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson assumption. Flood return levels were extrapolated in a nonstationarity context for the POT series of the Weihe basin, China, under future climate scenarios. The results show that flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed. The difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
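    The usual motivation for replacing the Poisson assumption is overdispersion: for a Poisson count variable the variance equals the mean, whereas POT exceedance counts often show variance well above the mean, which the Negative Binomial accommodates. A minimal dispersion check on synthetic counts (illustrative only, not the Weihe basin data):

```python
import statistics

def dispersion_index(counts):
    """Variance-to-mean ratio of annual exceedance counts.
    A ratio near 1 is consistent with a Poisson arrival rate;
    a ratio well above 1 suggests a Negative Binomial instead."""
    return statistics.variance(counts) / statistics.mean(counts)

# Synthetic annual counts of threshold exceedances: clustered wet years
counts = [0, 1, 0, 9, 1, 0, 8, 1, 0, 10]
ratio = dispersion_index(counts)  # well above 1: overdispersed
```

    A formal analysis would follow this screening step with a goodness-of-fit test before choosing the count distribution.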

  4. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection. It solves the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency and local threshold segmentation. First, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is applied to precisely position and segment the wood surface defects around the coarsely located areas. Finally, mathematical morphology is used to process the binary images after segmentation, which removes noise and small false objects. Experiments on test images of insect holes, dead knots, and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding, and threshold segmentation.
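    The maximum inter-class variance method named above is Otsu's thresholding. A self-contained sketch on a toy grayscale array follows; the image and the defect mask step are illustrative assumptions, not the authors' full pipeline, which combines this with spectral-residual saliency and morphology:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    for an 8-bit grayscale image (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for cut in range(1, 256):
        w0, w1 = p[:cut].sum(), p[cut:].sum()
        if w0 == 0 or w1 == 0:
            continue  # all pixels on one side: no split to evaluate
        mu0 = (np.arange(cut) * p[:cut]).sum() / w0
        mu1 = (np.arange(cut, 256) * p[cut:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, cut
    return best_t

# Toy image: dark "defect" pixels (~20) on a bright background (~200)
img = np.full((10, 10), 200, dtype=np.uint8)
img[2:5, 2:5] = 20
t = otsu_threshold(img)
mask = img < t  # binary defect mask
```

    In the paper's scheme this thresholding is applied locally, only around regions the saliency map has flagged, which is what avoids over-segmenting clean wood texture.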

  5. Effects of acute hypoxia on the determination of anaerobic threshold using the heart rate-work rate relationships during incremental exercise tests.

    PubMed

    Ozcelik, O; Kelestimur, H

    2004-01-01

    Anaerobic threshold, which describes the onset of the systematic increase in blood lactate concentration, is a widely used concept in clinical and sports medicine. A deflection point in the heart rate-work rate relationship has been introduced to determine the anaerobic threshold non-invasively. However, some researchers have consistently reported a heart rate deflection at higher work rates, while others have not. The present study was designed to investigate whether the heart rate deflection point accurately predicts the anaerobic threshold under acute hypoxia. Eight untrained males performed two incremental exercise tests using an electromagnetically braked cycle ergometer: one breathing room air and one breathing 12% O2. The anaerobic threshold was estimated using the V-slope method and determined from the increase in blood lactate and the decrease in standard bicarbonate concentration. This threshold was also estimated from the deflection point in the heart rate-work rate relationship. Not all subjects exhibited a heart rate deflection: only two subjects in the control condition and four subjects in the hypoxia condition did. Additionally, the heart rate deflection point overestimated the anaerobic threshold. In conclusion, the heart rate deflection point was not an accurate predictor of the anaerobic threshold, and acute hypoxia did not systematically affect the heart rate-work rate relationship.

  6. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate a particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with the experimental data of uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data of nonuniform sediments.

  7. I. RENAL THRESHOLDS FOR HEMOGLOBIN IN DOGS

    PubMed Central

    Lichty, John A.; Havill, William H.; Whipple, George H.

    1932-01-01

    We use the term "renal threshold for hemoglobin" to indicate the smallest amount of hemoglobin which given intravenously will effect the appearance of recognizable hemoglobin in the urine. The initial renal threshold level for dog hemoglobin is established by the methods employed at an average value of 155 mg. hemoglobin per kilo body weight with maximal values of 210 and minimal of 124. Repeated daily injections of hemoglobin will depress this initial renal threshold level on the average 46 per cent with maximal values of 110 and minimal values of 60 mg. hemoglobin per kilo body weight. This minimal or depression threshold is relatively constant if the injections are continued. Rest periods without injections cause a return of the renal threshold for hemoglobin toward the initial threshold levels—recovery threshold level. Injections of hemoglobin below the initial threshold level but above the minimal or depression threshold will eventually reduce the renal threshold for hemoglobin to its depression threshold level. We believe the depression threshold or minimal renal threshold level due to repeated hemoglobin injections is a little above the glomerular threshold which we assume is the base line threshold for hemoglobin. Our reasons for this belief in the glomerular threshold are given above and in the other papers of this series. PMID:19870016

  8. Estimation of the anaerobic threshold from heart rate variability in an incremental swimming test.

    PubMed

    Di Michele, Rocco; Gatta, Giorgio; Di Leo, Antonino; Cortesi, Matteo; Andina, Francesca; Tam, Enrico; Da Boit, Mariasole; Merni, Franco

    2012-11-01

    This study aimed to evaluate, in swimming, the agreement between the anaerobic threshold (AT) as determined from the analysis of blood lactate concentration ([La]) and from a new method based on heart rate (HR) variability (HRV). Fourteen high-level swimmers completed an incremental 7 × 200-m front crawl test, during which the HRV was measured continuously and [La] was collected after each step. To locate the AT, the trends of the high-frequency HRV spectral power (HFPOW) and of the fraction of HFPOW relative to the respiratory sinus arrhythmia (HFPOW-RSA) were analyzed. In all subjects, an abrupt increase of both HFPOW and HFPOW-RSA was observed and associated with the AT. The AT parameters determined, respectively, from [La] and HFPOW-RSA were similar (p > 0.05) and highly correlated (HR: 182.0 ± 8.1 vs. 181.1 ± 8.2 b·min^-1, r = 0.93, 95% limits of agreement [LoA]: -6.7 to 4.9 b·min^-1; velocity: 1.47 ± 0.11 vs. 1.47 ± 0.11 m·s^-1, r = 0.98, 95% LoA: -0.05 to 0.05 m·s^-1). Instead, the AT HR and velocity obtained from HFPOW (179.2 ± 8.4 b·min^-1; 1.45 ± 0.11 m·s^-1) were correlated with the corresponding parameters determined from [La] (HR: r = 0.84; velocity: r = 0.94) but underestimated them slightly (95% LoA: -11.9 to 6.3 b·min^-1 and -0.11 to 0.05 m·s^-1). These results demonstrate that the AT can be assessed from the HRV in swimming, providing an important testing tool for coaches. Furthermore, using the actual respiratory spectral component, rather than the total HF spectral power, allows a more accurate estimate of the AT parameters.
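    The 95% limits of agreement quoted above come from a Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 standard deviations of those differences. A minimal sketch on toy paired measurements (not the study data):

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements
    from two methods; returns (lower, upper) bounds on a - b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired velocity estimates from two methods (m/s)
method_a = [1.40, 1.45, 1.50, 1.55, 1.60]
method_b = [1.42, 1.44, 1.52, 1.54, 1.61]
lo, hi = limits_of_agreement(method_a, method_b)
```

    Narrow limits that bracket zero, as in the HFPOW-RSA comparison above, indicate that the two methods can be used interchangeably within that tolerance.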

  9. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  10. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  11. Poverty dynamics, poverty thresholds and mortality: An age-stage Markovian model

    PubMed Central

    Rehkopf, David; Tuljapurkar, Shripad; Horvitz, Carol C.

    2018-01-01

Recent studies have examined the risk of poverty throughout the life course, but few have considered how transitioning in and out of poverty shapes the dynamic heterogeneity and mortality disparities of a cohort at each age. Here we use state-by-age modeling to capture individual heterogeneity in crossing one of three different poverty thresholds (defined as 1×, 2× or 3× the "official" poverty threshold) at each age. We examine age-specific state structure, the remaining life expectancy, its variance, and cohort simulations for those above and below each threshold. Survival and transitioning probabilities are statistically estimated by regression analyses of data from the Health and Retirement Survey RAND data-set, and the National Longitudinal Survey of Youth. Using the results of these regression analyses, we parameterize discrete state, discrete age matrix models. We found that individuals above all three thresholds have higher annual survival than those in poverty, especially from mid-ages to about age 80. The advantage is greatest when we classify individuals based on 1× the "official" poverty threshold. The greatest discrepancy in average remaining life expectancy and its variance between those above and in poverty occurs at mid-ages for all three thresholds. And fewer individuals are in poverty between ages 40-60 for all three thresholds. Our findings are consistent with results based on other data sets, but also suggest that dynamic heterogeneity in poverty and the transience of the poverty state are associated with income-related mortality disparities (less transience, especially of those above poverty, more disparities). This paper applies the approach of age-by-stage matrix models to human demography and individual poverty dynamics. In so doing we extend the literature on individual poverty dynamics across the life course. PMID:29768416

  12. Objective classification of historical tropical cyclone intensity

    NASA Astrophysics Data System (ADS)

    Chenoweth, Michael

    2007-03-01

    Preinstrumental records of historical tropical cyclone activity require objective methods for accurately categorizing tropical cyclone intensity. Here wind force terms and damage reports from newspaper accounts in the Lesser Antilles and Jamaica for the period 1795-1879 are compared with wind speed estimates calculated from barometric pressure data. A total of 95 separate barometric pressure readings and colocated simultaneous wind force descriptors and wind-induced damage reports are compared. The wind speed estimates from barometric pressure data are taken as the most reliable and serve as a standard to compare against other data. Wind-induced damage reports are used to produce an estimated wind speed range using a modified Fujita scale. Wind force terms are compared with the barometric pressure data to determine if a gale, as used in the contemporary newspapers, is consistent with the modern definition of a gale. Results indicate that the modern definition of a gale (the threshold point separating the classification of a tropical depression from a tropical storm) is equivalent to that in contemporary newspaper accounts. Barometric pressure values are consistent with both reported wind force terms and wind damage on land when the location, speed and direction of movement of the tropical cyclone are determined. Damage reports and derived wind force estimates are consistent with other published results. Biases in ships' logbooks are confirmed and wind force terms of gale strength or greater are identified. These results offer a bridge between the earlier noninstrumental records of tropical cyclones and modern records thereby offering a method of consistently classifying storms in the Caribbean region into tropical depressions, tropical storms, nonmajor and major hurricanes.

  13. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  14. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618
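The threshold (liability) model underlying this approach observes a latent Gaussian trait only through fixed cutpoints that partition it into ordinal categories. A minimal sketch with hypothetical cutpoints, checking the simulated category frequencies against the standard normal CDF:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

# Threshold (liability) model: a latent Gaussian trait is observed only as
# ordinal categories defined by fixed cutpoints (hypothetical values here).
cutpoints = np.array([-0.5, 0.8])            # defines 3 ordinal categories
liability = rng.normal(size=10000)           # latent trait
ordinal = np.digitize(liability, cutpoints)  # observed category: 0, 1, or 2

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / np.sqrt(2)))

# Expected category frequencies implied by the cutpoints.
expected = np.array([phi(-0.5), phi(0.8) - phi(-0.5), 1 - phi(0.8)])
observed = np.bincount(ordinal, minlength=3) / len(ordinal)
```

Estimation then inverts this mapping: given the ordinal observations, the model infers the latent liabilities (and marker effects on them) rather than treating the categories as Gaussian data directly.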

  15. Derivation of critical rainfall thresholds for landslide in Sicily

    NASA Astrophysics Data System (ADS)

    Caracciolo, Domenico; Arnone, Elisa; Noto, Leonardo V.

    2015-04-01

Rainfall is the primary trigger of shallow landslides that can cause fatalities, damage to properties and economic losses in many areas of the world. For this reason, determining the rainfall amount/intensity responsible for landslide occurrence is important, and may contribute to mitigating the related risk and saving lives. Efforts have been made in different countries to investigate triggering conditions in order to define landslide-triggering rainfall thresholds. The rainfall thresholds are generally described by a power-law relationship between rainfall duration and cumulated (or intensity of) event rainfall, whose parameters are estimated empirically from the analysis of historical rainfall events that triggered landslides. The aim of this paper is the derivation of critical rainfall thresholds for landslide occurrence in Sicily, southern Italy, focusing particularly on the role of antecedent wet conditions. The creation of an appropriate landslide-rainfall database likely represents one of the main efforts in this type of analysis. For this work, historical landslide events that occurred in Sicily from 1919 to 2001 were selected from the archive of the Sistema Informativo sulle Catastrofi Idrogeologiche, developed under the project Aree Vulnerabili Italiane. The corresponding triggering precipitations were screened from the raingauge network in Sicily, maintained by the Osservatorio delle Acque - Agenzia Regionale per i Rifiuti e le Acque. In particular, a detailed analysis was carried out to identify and reconstruct the hourly rainfall events that caused the selected landslides. A bootstrapping statistical technique has been used to determine the uncertainties associated with the threshold parameters. The rainfall thresholds at different exceedance probability levels, from 1% to 10%, were defined in terms of cumulated event rainfall, E, and rainfall duration, D. The role of rainfall prior to the damaging events was taken into account by including in the analysis
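The power-law E-D threshold fit and the bootstrap uncertainty analysis described above can be sketched as follows; the event data and the 1000-resample count are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical landslide-triggering events: duration D (hours) and
# cumulated event rainfall E (mm); values are illustrative only.
D = np.array([2.0, 6.0, 12.0, 24.0, 48.0, 96.0])
E = np.array([30.0, 55.0, 80.0, 120.0, 190.0, 300.0])

def fit_power_law(D, E):
    """Fit E = alpha * D**beta by least squares in log-log space."""
    beta, log_alpha = np.polyfit(np.log(D), np.log(E), 1)
    return np.exp(log_alpha), beta

alpha, beta = fit_power_law(D, E)

# Bootstrap resampling of events to quantify threshold-parameter uncertainty.
betas = []
for _ in range(1000):
    idx = rng.integers(0, len(D), len(D))
    if len(np.unique(D[idx])) < 2:   # skip degenerate resamples
        continue
    betas.append(fit_power_law(D[idx], E[idx])[1])
beta_lo, beta_hi = np.percentile(betas, [5, 95])
```

Thresholds at a given exceedance probability can then be drawn as percentile curves of the bootstrapped fits rather than a single best-fit line.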

  16. Solar Radiation Pressure Estimation and Analysis of a GEO Class of High Area-to-Mass Ratio Debris Objects

    NASA Technical Reports Server (NTRS)

    Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene

    2007-01-01

A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. The typical AMR values have been observed to range anywhere from 1's to 10's of m²/kg, and hence, higher than average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the orientation-dependent dynamics of the debris also result in time-varying solar radiation forces about the average, which complicate short-term orbit determination processing. The orbit determination results are presented for several of these debris objects, and highlight their unique and varied dynamic attributes. Estimation of the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence, the orbit predictions needed to conduct follow-up observations.

  17. Sensory function assessment. A pilot comparison study of touch pressure threshold with texture and tactile discrimination.

    PubMed

    King, P M

    1997-01-01

The purpose of this study was to determine if a correlation exists between touch-pressure threshold testing and sensory discrimination function, specifically tactile gnosis for texture and object recognition. Twenty-nine patients diagnosed with carpal tunnel syndrome (CTS), as confirmed by electromyography or nerve conduction velocity tests, were administered three sensibility tests: the Semmes-Weinstein monofilament test, a texture discrimination test, and an object identification test. Norms were established for the texture and object recognition tests using 100 subjects (50 females and 50 males) with normal touch-pressure thresholds as assessed by the Semmes-Weinstein monofilament test. The CTS patients were grouped into three categories of sensibility as determined by their performance on the Semmes-Weinstein monofilament test: normal, diminished light touch, and diminished protective sensation. Through an independent t test statistical procedure, the mean response times of each of the three categories for identification of textures and objects were compared with the normed response times. Accurate responses were given for identification of all textures and objects. No significant difference (p < .05) was noted in the mean response times of the CTS patients with normal touch-pressure thresholds. A significant difference (p < .05) in response times by those CTS patients with diminished light touch was detected in identification of four out of six objects. Subjects with diminished protective sensation had significantly longer response times (p < .05) for identification of the textures of cork, coarse and fine sandpaper, and rubber. Significantly longer response times were recorded by the same subjects for identification of such objects as a screw and a button, and for the shapes of a square, triangle, and oval.

  18. The effect of condoms on penile vibrotactile sensitivity thresholds in young, heterosexual men

    PubMed Central

    Hill, Brandon J.; Janssen, Erick; Kvam, Peter; Amick, Erick E.; Sanders, Stephanie A.

    2013-01-01

    Introduction Investigating the ways in which barrier methods such as condoms may affect penile sensory thresholds has potential relevance to the development of interventions in men who experience negative effects of condoms on sexual response and sensation. A quantitative, psychophysiological investigation examining the degree to which sensations are altered by condoms has, to date, not been conducted. Aim The objective of this study was to examine penile vibrotactile sensitivity thresholds in both flaccid and erect penises with and without a condom, while comparing men who do and those who do not report condom-associated erection problems (CAEP). Methods Penile vibrotactile sensitivity thresholds were assessed among a total of 141 young, heterosexual men using biothesiometry. An incremental two-step staircase method was used and repeated three times for each of four conditions. Intra-class correlation coefficients (ICC) were calculated for all vibratory assessments. Penile vibratory thresholds were compared using a mixed-model Analysis of Variance (ANOVA). Main Outcome Measures Penile vibrotactile sensitivity thresholds with and without a condom, erectile function measured by International Index of Erectile Function Questionnaire (IIEF), and self-reported degree of erection. Results Significant main effects of condoms (yes/no) and erection (yes/no) were found. No main or interaction effects of CAEP were found. Condoms were associated with higher penile vibrotactile sensitivity thresholds (F(1, 124)=17.11, p<.001). Penile vibrotactile thresholds were higher with an erect than with a flaccid penis (F(1, 124)=4.21, p=.042). Conclusion The current study demonstrates the feasibility of measuring penile vibratory thresholds with and without a condom in both erect and flaccid experimental conditions. As might be expected, condoms increased penile vibrotactile sensitivity thresholds. Interestingly, erections were associated with the highest thresholds. Thus, this study

  19. Ultra-low threshold polariton condensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steger, Mark; Fluegel, Brian; Alberi, Kirstin

Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a pump intensity two orders of magnitude lower than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing', has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.

  20. Ultra-low threshold polariton condensation

    DOE PAGES

    Steger, Mark; Fluegel, Brian; Alberi, Kirstin; ...

    2017-03-13

Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a pump intensity two orders of magnitude lower than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing', has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.

  1. Threshold-free method for three-dimensional segmentation of organelles

    NASA Astrophysics Data System (ADS)

    Chan, Yee-Hung M.; Marshall, Wallace F.

    2012-03-01

An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated method for measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length-scales.
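The ray-casting idea, where intensity maxima along random rays from a center point mark the membrane crossing, can be sketched on a synthetic spherical shell; this is a simplified stand-in for the basis-function surface fit, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def radial_profile_radius(intensity_fn, center, direction, r_max=20.0, n=200):
    """Sample intensity along one ray from `center` and return the radius of
    the intensity maximum, taken as the membrane crossing."""
    r = np.linspace(0.1, r_max, n)
    pts = center[None, :] + r[:, None] * direction[None, :]
    return r[np.argmax(intensity_fn(pts))]

def estimate_radius(intensity_fn, center, n_rays=500):
    """Cast rays in random directions and average the crossing radii."""
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([radial_profile_radius(intensity_fn, center, d)
                    for d in dirs])

# Synthetic "labeled membrane": a bright Gaussian shell at radius 5.
def shell_intensity(pts, r0=5.0, sigma=0.5):
    r = np.linalg.norm(pts, axis=1)
    return np.exp(-((r - r0) ** 2) / (2 * sigma ** 2))

r_est = estimate_radius(shell_intensity, np.array([0.0, 0.0, 0.0]))
volume = 4.0 / 3.0 * np.pi * r_est ** 3
```

For non-spherical organelles, the per-ray radii would instead be fit with the weighted basis-function expansion described in the abstract before integrating volume and surface area.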

  2. Evaluation of Sensibility Threshold for Interocclusal Thickness of Patients Wearing Complete Dentures

    PubMed Central

    Shala, Kujtim Sh.; Ahmedi, Enis F.; Tmava-Dragusha, Arlinda

    2017-01-01

Objective The aim of this study was to evaluate the sensibility threshold for interocclusal thickness in experienced and nonexperienced denture wearers after the insertion of new complete dentures. Materials and Methods A total of 88 patients with complete dentures participated in this study. The patients were divided into two experimental groups according to previous experience with prosthetic dental treatment. The sensibility threshold for interocclusal thickness was measured with a metal foil 8 μm thick and 8 mm wide, placed between the upper and lower incisors. Statistical analysis was performed using the standard software package BMDP (biomedical statistical package). Results The results suggest that the time of measurement affects the average values of the sensibility threshold for interocclusal thickness (F = 242.68, p = 0.0000). Gender appeared to be a significant factor when it interacted with time of measurement, resulting in differences in the sensibility threshold for interocclusal thickness (gender: F = 9.84, p = 0.018; F = 4.83, p = 0.0003). Conclusion The sensibility threshold for interocclusal thickness was the most important functional adaptation in patients with complete dentures. A unique trait of this indicator is the progressive reduction of initial values and a tendency to reestablish a stationary state in the fifteenth week after the dentures are taken off. PMID:28702055

  3. Nut crop yield records show that budbreak-based chilling requirements may not reflect yield decline chill thresholds

    NASA Astrophysics Data System (ADS)

    Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.

    2015-06-01

    Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond ( Prunus dulcis), pistachio ( Pistacia vera), and walnut ( Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32 % higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.

  4. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    PubMed

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.

  5. Comparison of epicardial adipose tissue radiodensity threshold between contrast and non-contrast enhanced computed tomography scans: A cohort study of derivation and validation.

    PubMed

    Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig

    2018-05-11

Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize the two EAT volume estimates. The gold standard threshold (-190HU, -30HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190HU, -30HU), (-190HU, -15HU), (-175HU, -15HU), and were analyzed with semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190HU, -30HU) based on the mean difference in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on the EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume relative to the NC-dataset standard by 8.2%-19.1%. Using our corrected threshold (-190HU, -3HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of -190HU, -3HU provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of -190HU, -30HU. Copyright © 2018. Published by Elsevier B.V.
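The correction itself, shifting the non-contrast upper fat cutpoint by the mean CE-NC radiodensity difference, is simple to express. A sketch with made-up voxel samples (the ~27 HU shift is chosen only so the example reproduces the paper's -3HU cutpoint):

```python
import numpy as np

def corrected_upper_threshold(nc_voxels, ce_voxels, nc_upper=-30.0):
    """Shift the standard NC upper fat cutpoint by the mean radiodensity
    difference between paired CE and NC images (delta = CE mean - NC mean)."""
    delta = np.mean(ce_voxels) - np.mean(nc_voxels)
    return nc_upper + delta

# Hypothetical paired EAT radiodensity samples (HU).
nc = np.array([-95.0, -88.0, -102.0, -90.0])
ce = nc + 27.0   # contrast raises measured radiodensity by ~27 HU on average
upper = corrected_upper_threshold(nc, ce)   # -30 + 27 = -3 HU
```

Volumes are then recomputed on the CE dataset with the window (-190HU, `upper`) instead of the NC standard (-190HU, -30HU).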

  6. An objective rationale for the choice of regularisation parameter with application to global multiple-frequency S-wave tomography

    NASA Astrophysics Data System (ADS)

    Zaroli, C.; Sambridge, M.; Lévêque, J.-J.; Debayle, E.; Nolet, G.

    2013-06-01

    In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global Multiple-Frequency Tomography (MFT), using a data set of 287 078 S-wave delay-times measured in five frequency bands (10, 15, 22, 34, 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model ℓ∞-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models (correlation superior to 98% up to spherical harmonic degree 80). The obtained tomographic model is displayed in mid lower-mantle (660-1910 km depth), and is shown to be compatible with three other recent global shear-velocity models. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of the Earth's mantle.

  7. Estimating relative values for multiple objectives on private forests

    Treesearch

    Donald F. Dennis; Thomas H. Stevens; David B. Kittredge; Mark G. Rickenbach

    2001-01-01

    Conjoint and other techniques were used to examine private forest-land owner's willingness to manage for timber and nontimber objectives. The objectives were to: maintain apple trees to benefit wildlife, protect rare ferns to enhance aesthetics and biodiversity, improve recreational trails, and harvest timber. Ecological objectives were found to be more important...

  8. Determination of minimal steady-state plasma level of diazepam causing seizure threshold elevation in rats.

    PubMed

    Dhir, Ashish; Rogawski, Michael A

    2018-05-01

Diazepam, administered by the intravenous, oral, or rectal routes, is widely used for the management of acute seizures. Dosage forms for delivery of diazepam by other routes of administration, including intranasal, intramuscular, and transbuccal, are under investigation. In predicting what dosages are necessary to terminate seizures, the minimal exposure required to confer seizure protection must be known. Here we administered diazepam by continuous intravenous infusion to obtain near-steady-state levels, which allowed an assessment of the minimal levels that elevate seizure threshold. The thresholds for various behavioral seizure signs (myoclonic jerk, clonus, and tonus) were determined with the timed intravenous pentylenetetrazol seizure threshold test in rats. Diazepam was administered to freely moving animals by continuous intravenous infusion via an indwelling jugular vein cannula. Blood samples for assay of plasma levels of diazepam and metabolites were recovered via an indwelling cannula in the contralateral jugular vein. The pharmacokinetic parameters of diazepam following a single 80-μg/kg intravenous bolus injection were determined using a noncompartmental pharmacokinetic approach. The derived parameters Vd, CL, t1/2α (distribution half-life), and t1/2β (terminal half-life) for diazepam were 608 mL, 22.1 mL/min, 13.7 minutes, and 76.8 minutes, respectively. Various doses of diazepam were continuously infused without or with an initial loading dose. At the end of the infusions, the thresholds for various behavioral seizure signs were determined. The minimal plasma diazepam concentration associated with threshold elevations was estimated at approximately 70 ng/mL. The active metabolites nordiazepam, oxazepam, and temazepam achieved levels that are expected to make only minor contributions to the threshold elevations. Diazepam elevates seizure threshold at steady-state plasma concentrations lower than previously recognized. The
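A terminal half-life such as the 76.8 minutes above follows from the terminal elimination rate constant, t1/2 = ln 2 / λz, which noncompartmental analysis estimates from the slope of ln(C) versus t in the terminal phase. A sketch with synthetic concentrations generated from that half-life (the sampling times and initial concentration are invented):

```python
import numpy as np

# Hypothetical plasma concentration-time data in the terminal phase,
# generated from a 76.8-minute half-life (100 ng/mL notional intercept).
t = np.array([60.0, 90.0, 120.0, 150.0, 180.0])   # minutes post-dose
c = 100.0 * np.exp(-np.log(2) / 76.8 * t)          # ng/mL

# Terminal rate constant lambda_z is minus the slope of ln(C) vs t;
# the half-life is then t1/2 = ln 2 / lambda_z.
lambda_z = -np.polyfit(t, np.log(c), 1)[0]
t_half = np.log(2) / lambda_z
```

With noisy real data, the same regression is applied to the last few log-linear points rather than an exact exponential.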

  9. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    PubMed

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size decreased with increasing age. Intrasubject and intersubject variability were inversely related to age. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
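
    The up/down staircase procedure described above can be sketched as follows; the deterministic observer model, starting size, and step are illustrative assumptions, not the study's parameters:

```python
def staircase_threshold(respond, start, step, n_reversals=8):
    """1-up/1-down staircase: shrink the stimulus after a 'seen' response,
    grow it after a miss; the threshold estimate is the mean of the
    recorded reversal levels."""
    level, last_dir, reversals = start, None, []
    while len(reversals) < n_reversals:
        direction = -1 if respond(level) else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)
        last_dir = direction
        level = max(level + direction * step, step)
    return sum(reversals) / len(reversals)

# hypothetical deterministic observer whose true size threshold is 1.3
# (arbitrary units); it 'sees' any stimulus at least that large
est = staircase_threshold(lambda s: s >= 1.3, start=4.0, step=0.25)
```

    A 1-up/1-down rule converges on the level where the response flips, so the reversal mean brackets the observer's threshold to within one step.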

  10. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in effective understanding of digital images. However, research on a general-purpose segmentation algorithm that suits a variety of applications is still very much active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity, primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge for a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method for detecting visual objects in color volumetric images using an efficient threshold. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  11. Robust w-Estimators for Cryo-EM Class Means

    PubMed Central

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and its influence function, are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
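
    The general idea of a robust, threshold-free average can be illustrated with an iteratively reweighted mean; this toy 1-D sketch uses Tukey's biweight and is a generic stand-in, not the paper's w-estimator:

```python
import statistics

def robust_mean(xs, c=4.685, iters=20):
    """Iteratively reweighted mean with Tukey's biweight: samples far from
    the current estimate (relative to a MAD-based scale) get weight ~0,
    so no explicit rejection threshold is needed."""
    mu = statistics.median(xs)
    for _ in range(iters):
        s = statistics.median([abs(x - mu) for x in xs]) * 1.4826 or 1.0
        weights = []
        for x in xs:
            u = (x - mu) / (c * s)
            weights.append((1 - u * u) ** 2 if abs(u) < 1 else 0.0)
        mu = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
    return mu

# toy 1-D stand-in for image averaging: five consistent 'class' values
# plus one gross outlier (e.g. an ice contaminant)
vals = [10.1, 9.9, 10.0, 10.2, 9.8, 50.0]
mu_robust = robust_mean(vals)
```

    The plain mean of these values is pulled well above 16 by the outlier, while the reweighted estimate stays near 10.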

  12. Sugar Detection Threshold After Laparoscopic Sleeve Gastrectomy in Adolescents.

    PubMed

    Abdeen, Ghalia N; Miras, Alexander D; Alqhatani, Aayed R; le Roux, Carel W

    2018-05-01

    Obesity in young people is one of the most serious public health problems worldwide. Moreover, the mechanisms preventing obese adolescents from losing and maintaining weight loss have been elusive. Laparoscopic sleeve gastrectomy (LSG) is successful at achieving long-term weight loss in patients across all age groups, including children and adolescents. Anecdotal clinical observation as well as evidence in rodents suggests that LSG induces a shift in preference for sugary foods. However, it is not known whether this shift is due to a change in the threshold for gustatory detection of sucrose, or whether LSG induces behavioral change without affecting the gustatory threshold for sugar. The objective of this study was to determine whether adolescents who undergo LSG experience a change in their threshold for detecting sweet taste. We studied the sucrose detection threshold of 14 obese adolescents (age 15.3 ± 0.5 years, range 12-18) who underwent LSG, testing them 2 weeks before surgery and at 12 and 52 weeks after surgery. Matched non-surgical subjects were tested on two occasions 12 weeks apart to control for potential learning of the test that may have confounded the results. Seven sucrose concentrations were used and were tested in eight blocks, each block consisting of seven sucrose and seven water stimuli presented in random order. The subjects were asked to report whether the sample contained water or not after tasting 15 ml of the fluid for 10 s. The bodyweight of the LSG group decreased from 136.7 ± 5.4 to 109.6 ± 5.1 and 86.5 ± 4.0 kg after 12 and 52 weeks, respectively (p < 0.001). There was no significant change in the taste detection threshold after LSG (p = 0.60), and no difference was observed when comparing the taste detection threshold of the LSG group with that of the non-surgical controls (p = 0.38). LSG did not affect the taste detection threshold for sucrose, suggesting that the shift in preference for sugary foods may be due to

  13. Objective estimation of tropical cyclone innercore surface wind structure using infrared satellite images

    NASA Astrophysics Data System (ADS)

    Zhang, Changjiang; Dai, Lijie; Ma, Leiming; Qian, Jinfang; Yang, Bo

    2017-10-01

    An objective technique is presented for estimating tropical cyclone (TC) innercore two-dimensional (2-D) surface wind field structure using infrared satellite imagery and machine learning. For a TC with an eye, the eye contour is first segmented by a geodesic active contour model, from which the eye circumference is obtained as the TC eye size. A mathematical model is then established between the eye size and the radius of maximum wind obtained from past official TC reports to derive the 2-D surface wind field within the TC eye. Meanwhile, composite information about the latitude of the TC center, surface maximum wind speed, TC age, and critical wind radii of 34- and 50-kt winds can be combined to build another mathematical model for deriving the innercore wind structure. After that, least squares support vector machine (LSSVM), radial basis function neural network (RBFNN), and linear regression are introduced, respectively, in the two mathematical models, which are then tested with sensitivity experiments on real TC cases. Verification shows that the innercore 2-D surface wind field structure estimated by LSSVM is better than that of RBFNN and linear regression.

  14. Evaluating time dynamics of topographic threshold relations for gully initiation

    NASA Astrophysics Data System (ADS)

    Hayas, Antonio; Vanwalleghem, Tom; Poesen, Jean

    2016-04-01

    Gully erosion is one of the most important soil degradation processes at the global scale. However, modelling of gully erosion is still difficult. Despite advances in the modelling of gully headcut retreat rates and incision rates, it remains difficult to predict the location of gully initiation points and trajectories. Different studies have demonstrated that a good method of predicting gully initiation is a slope (S) - area (A) threshold. Such an S-A relation is a simple way of estimating the critical discharges needed to generate a critical shear stress that can incise a particular soil and initiate a gully. As such, the S-A threshold will vary if the rainfall-runoff behaviour of the soil changes or if the soil's erodibility changes. Over the past decades, important agronomic changes have produced significant changes in soil use and soil management in SW Spain. The objective of this research is to evaluate how S-A relations for gully initiation have changed over time and for two different land uses, cereal and olive. Data were collected for a gully network in the Cordoba Province, SW Spain. From photo-interpretation of historical air photos between 1956 and 2013, the gully network and initiation points were derived. In total 10 different time steps are available (1956; 1977; 1984; 1998; 2001; 2004; 2006; 2008; 2010; 2013). Topographical thresholds were extracted by combining the digitized gully network with the DEM. Due to small differences in the alignment of orthophotos and DEM, an optimization technique was developed in GIS to extract the correct S-A value for each point. With the S-A values for each year, their dynamics were evaluated as a function of land use (olive or cereal) and of the following variables in each of the periods considered:
    • soil management
    • soil cover by weeds, where weed growth was modeled from the daily soil water balance
    • rainfall intensity
    • root cohesion, where root growth was modeled from
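
    S-A thresholds of this kind are commonly expressed as a power law, S = a·A^(-b). A minimal sketch of fitting one from gully-head observations, using ordinary least squares in log-log space (published work often fits a lower envelope instead) and hypothetical coefficients rather than values from the study area:

```python
import math

def fit_sa_threshold(areas, slopes):
    """Fit S = a * A**(-b) by ordinary least squares in log-log space:
    ln S = ln a - b * ln A."""
    x = [math.log(a) for a in areas]
    y = [math.log(s) for s in slopes]
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    b = -slope
    a = math.exp(ybar + b * xbar)
    return a, b

# synthetic gully-head data lying exactly on S = 0.05 * A**-0.3
# (hypothetical coefficients for illustration)
areas = [0.5, 1.0, 2.0, 5.0, 10.0]
slopes = [0.05 * A ** -0.3 for A in areas]
a_hat, b_hat = fit_sa_threshold(areas, slopes)
```

    Tracking how the fitted a and b drift across the ten photo epochs is one way to quantify the time dynamics the abstract describes.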

  15. Quantifying ecological thresholds from response surfaces

    Treesearch

    Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh

    2011-01-01

    Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...

  16. Oil-in-Water Emulsion Exhibits Bitterness-Suppressing Effects in a Sensory Threshold Study.

    PubMed

    Torrico, Damir Dennis; Sae-Eaw, Amporn; Sriwattana, Sujinda; Boeneke, Charles; Prinyawiwatkul, Witoon

    2015-06-01

    Little is known about how emulsion characteristics affect saltiness/bitterness perception. Sensory detection and recognition thresholds of NaCl, caffeine, and KCl in aqueous solution compared with oil-in-water emulsion systems were evaluated. For emulsions, NaCl, KCl, or caffeine were dissolved in water + emulsifier and mixed with canola oil (20% by weight). Two emulsions were prepared: emulsion 1 (viscosity = 257 cP) and emulsion 2 (viscosity = 59 cP). The forced-choice ascending concentration series method of limits (ASTM E-679-04) was used to determine detection and/or recognition thresholds at 25 °C. Group best estimate threshold (GBET) geometric means were expressed as g/100 mL. Comparing NaCl with KCl, there were no significant differences in detection GBET values for all systems (0.0197 - 0.0354). For saltiness recognition thresholds, KCl GBET values were higher compared with NaCl GBET (0.0822 - 0.1070 compared with 0.0471 - 0.0501). For NaCl and KCl, emulsion 1 and/or emulsion 2 did not significantly affect the saltiness recognition threshold compared with that of the aqueous solution. However, the bitterness recognition thresholds of caffeine and KCl in solution were significantly lower than in the emulsions (0.0242 - 0.0586 compared with 0.0754 - 0.1025). Gender generally had a marginal effect on threshold values. This study showed that, compared with the aqueous solutions, emulsions did not significantly affect the saltiness recognition threshold of NaCl and KCl, but exhibited bitterness-suppressing effects on KCl and/or caffeine. © 2015 Institute of Food Technologists®
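
    The group best estimate threshold used above follows the ASTM E-679 convention; a sketch of the computation, with hypothetical concentration steps (g/100 mL) rather than the study's data:

```python
import math

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# ASTM E-679-style computation: each individual best-estimate threshold
# (BET) is the geometric mean of the highest concentration missed and
# the lowest concentration correctly detected in the ascending series;
# the group BET (GBET) is the geometric mean of the individual BETs.
steps = [(0.02, 0.04), (0.04, 0.08), (0.01, 0.02)]
individual_bets = [geometric_mean([lo, hi]) for lo, hi in steps]
gbet = geometric_mean(individual_bets)
```

    Geometric rather than arithmetic means are used because the concentration series ascends by a constant factor, so thresholds are treated on a log scale.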

  17. A common microstructure in behavioral hearing thresholds and stimulus-frequency otoacoustic emissions.

    PubMed

    Dewey, James B; Dhar, Sumitrajit

    2017-11-01

    Behavioral hearing thresholds and otoacoustic emission (OAE) spectra often exhibit quasiperiodic fluctuations with frequency. For behavioral and OAE responses to single tones (the latter referred to as stimulus-frequency otoacoustic emissions, SFOAEs), this microstructure has been attributed to intracochlear reflections of SFOAE energy between its region of generation and the middle ear boundary. However, the relationship between behavioral and SFOAE microstructures, as well as their presumed dependence on the properties of the SFOAE-generation mechanism, has yet to be adequately examined. To address this, behavioral thresholds and SFOAEs evoked by near-threshold tones were compared in 12 normal-hearing female subjects. The microstructures observed in thresholds and in both SFOAE amplitudes and delays were found to be strikingly similar. SFOAE phase accumulated an integer number of cycles between the frequencies of microstructure maxima, consistent with a dependence of microstructure periodicity on SFOAE propagation delays. Additionally, microstructure depth was correlated with SFOAE magnitude in a manner resembling that predicted by the intracochlear reflection framework, after assuming reasonable values for parameters related to middle ear transmission. Further exploration of this framework may yield more precise estimates of such parameters and provide insight into their frequency dependence.

  18. Treatment thresholds for osteoporosis and reimbursability criteria: perspectives associated with fracture risk-assessment tools.

    PubMed

    Adami, Silvano; Bertoldo, Francesco; Gatti, Davide; Minisola, Giovanni; Rossini, Maurizio; Sinigaglia, Luigi; Varenna, Massimo

    2013-09-01

    The definition of osteoporosis was based for several years on bone mineral density values, which were used by most guidelines for defining treatment thresholds. The availability of tools for the estimation of fracture risk, such as FRAX™ or its adapted Italian version, DeFRA, is providing a way to grade osteoporosis severity. By applying these new tools, the criteria identified in Italy for treatment reimbursability (e.g., "Nota 79") are confirmed as extremely conservative. The new fracture risk-assessment tools provide continuous risk values that can be used by health authorities (or "payers") for identifying treatment thresholds. FRAX estimates the risk for "major osteoporotic fractures," which are not counted in registered fracture trials. Here, we elaborate an algorithm to convert vertebral and nonvertebral fractures to the "major fractures" of FRAX, and this allows a cost-effectiveness assessment for each drug.

  19. Technology Thresholds for Microgravity: Status and Prospects

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1996-01-01

    The technological and economic thresholds for microgravity space research are estimated in materials science and biotechnology. In the 1990s, the improvement of materials processing was identified as a national scientific priority, particularly for stimulating entrepreneurship. The substantial US investment at stake in these critical technologies includes six broad categories: aerospace, transportation, health care, information, energy, and the environment. Microgravity space research addresses key technologies in each area. The viability of selected space-related industries is critically evaluated, and a market-share philosophy is developed, namely that incremental improvement in a large market's efficiency is a tangible reward from space-based research.

  20. Stages in third molar development and eruption to estimate the 18-year threshold Malay juvenile.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Martens, Luc

    2015-10-01

    Age 18 years is considered the age of majority by most countries. To ascertain the age of interest, both third molar development (TMD) and eruption (TME) staging scores are beneficial without the need for multiple imaging modalities. This study aimed to assess the chronological course of TMD and TME in a Malay sub-adult population and to evaluate predictions when specific stages of TMD and TME have been attained that are pertinent to the age group of interest (<18 years or ≥18 years). A sample of 714 digital panoramic images of subjects stratified by age between 14.1 and 23.9 years was retrospectively collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. A binary logistic regression was performed to predict the 18-year threshold with staging scores as predictors. Stages 4-6 (TMD) and A-B (TME) for males, and stages 4 (TMD) and A (TME) for females, were found to discriminate the <18-year group. For both genders, stages 9-10 (TMD) and D (TME) can be used as reference stages to estimate whether a subject is likely to be ≥18 years, with 94.74-100% and 85.88-96.38% correct predictions, respectively. Stages 4 (TMD) and A (TME) can also be used to identify juveniles (<18 years) with a high degree of correct predictions (100%). The juvenility of an individual is easily anticipated by using the specific staging scores of both third molar variables (TMD and TME) without complex calculations. Copyright © 2015 Elsevier Ltd. All rights reserved.
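
    The binary logistic regression step can be sketched as below. The data and the fitted coefficients are toy inventions for illustration, not the 714-image Malay sample:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """One-predictor logistic regression, P(adult) = sigmoid(b0 + b1*stage),
    fitted by batch gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + b1 * x)
            g0 += err
            g1 += err * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# toy data: third molar stage (4-10) vs. whether the subject was >= 18
stages = [4, 4, 5, 6, 6, 7, 8, 8, 9, 9, 10, 10]
adult  = [0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(stages, adult)
p_stage9 = sigmoid(b0 + b1 * 9)
```

    The fitted curve maps each attained stage to a probability of being at least 18, which is how a staging score alone can serve as a reference for the 18-year threshold.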

  1. The effects of visual scenes on roll and pitch thresholds in pilots versus nonpilots.

    PubMed

    Otakeno, Shinji; Matthews, Roger S J; Folio, Les; Previc, Fred H; Lessard, Charles S

    2002-02-01

    Previous studies have indicated that, compared with nonpilots, pilots rely more on vision than "seat-of-the-pants" sensations when presented with visual-vestibular conflict. The objective of this study was to evaluate whether pilots and nonpilots differ in their thresholds for tilt perception while viewing visual scenes depicting simulated flight. This study was conducted in the Advanced Spatial Disorientation Demonstrator (ASDD) at Brooks AFB, TX. Fourteen subjects (7 pilots and 7 nonpilots) recorded tilt detection thresholds in pitch and roll while exposed to sub-threshold movement in each axis. During each test run, subjects were presented with computer-generated visual scenes depicting accelerating forward flight by day or night, and a blank (control) condition. The only significant effect detected by an analysis of variance (ANOVA) was that all subjects were more sensitive to tilt in roll than in pitch [F (2,24) = 18.96, p < 0.001]. Overall, pilots had marginally higher tilt detection thresholds than nonpilots (p = 0.055), but the type of visual scene had no significant effect on thresholds. In this study, pilots did not demonstrate greater visual dominance over vestibular and proprioceptive cues than nonpilots, but appeared to have higher pitch and roll thresholds overall. The significantly lower detection thresholds in the roll axis vs. the pitch axis were an incidental finding for both subject groups.

  2. Computational gestalts and perception thresholds.

    PubMed

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalts", from the atomic retinal input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. We then explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are in a position to build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, in which we compared the gestalt detection performance of several subjects with the predicted detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.

  3. Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters

    NASA Astrophysics Data System (ADS)

    Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.

    2016-06-01

    Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent
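
    A minimal sketch of the threshold-sweep idea, using toy synthetic traces and a simple two-condition separation score as a stand-in for the information measures used in the paper:

```python
import random
import statistics

random.seed(0)

def make_trace(spike_rate, n=2000, spike_amp=-6.0):
    """Toy extracellular trace: unit-variance noise plus sparse negative
    deflections standing in for spikes."""
    trace = [random.gauss(0.0, 1.0) for _ in range(n)]
    return [v + spike_amp if random.random() < spike_rate else v for v in trace]

def crossings(trace, thr):
    """Count downward threshold crossings."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a > thr >= b)

def best_threshold(traces_a, traces_b, candidates):
    """Sweep candidate thresholds; keep the one whose crossing counts best
    separate the two stimulus conditions (mean difference over pooled SD,
    a crude stand-in for a decoding or information criterion)."""
    def score(thr):
        ca = [crossings(t, thr) for t in traces_a]
        cb = [crossings(t, thr) for t in traces_b]
        spread = statistics.pstdev(ca + cb) or 1.0
        return abs(statistics.mean(ca) - statistics.mean(cb)) / spread
    return max(candidates, key=score)

traces_a = [make_trace(0.01) for _ in range(8)]  # condition A: low rate
traces_b = [make_trace(0.03) for _ in range(8)]  # condition B: high rate
thr = best_threshold(traces_a, traces_b, [-5.0, -4.0, -3.0, -2.0, -1.0])
```

    The point, as in the paper, is that the winning threshold depends on the quantity one wants to decode, not on single-unit isolation quality.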

  4. Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters

    PubMed Central

    Oby, Emily R; Perel, Sagi; Sadtler, Patrick T; Ruff, Douglas A; Mischel, Jessica L; Montez, David F; Cohen, Marlene R; Batista, Aaron P; Chase, Steven M

    2018-01-01

    Objective A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain–computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and

  5. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  6. A robust threshold-based cloud mask for the HRV channel of MSG SEVIRI

    NASA Astrophysics Data System (ADS)

    Bley, S.; Deneke, H.

    2013-03-01

    A robust threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km²) of the METEOSAT SEVIRI instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km²), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures which cannot be detected by the low-resolution channels. We find that it is advantageous to apply thresholds relative to clear-sky reflectance composites, and to adapt the thresholds regionally. Furthermore, the suitability of the different spectral channels, including the HRV channel, for threshold-based cloud detection is investigated. Case studies covering various surface and cloud conditions demonstrate the behaviour of the mask. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test dataset, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction.
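
    The core of thresholding against a clear-sky composite can be sketched in a few lines; the 0.08 offset and the tiny scene are illustrative assumptions (the paper adapts its thresholds regionally rather than using one fixed value):

```python
def cloud_mask(reflectance, clear_sky, offset=0.08):
    """Flag a pixel as cloudy when its HRV reflectance exceeds the
    clear-sky composite value for that pixel by more than an offset."""
    return [[r > c + offset for r, c in zip(r_row, c_row)]
            for r_row, c_row in zip(reflectance, clear_sky)]

# tiny 2x2 scene: observed reflectances vs. a clear-sky composite
refl  = [[0.10, 0.45], [0.30, 0.12]]
clear = [[0.08, 0.10], [0.09, 0.11]]
mask = cloud_mask(refl, clear)
```

    Referencing each pixel to its own clear-sky value removes most of the surface-albedo variability that defeats a single global reflectance threshold.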

  7. Foreign object detection and removal to improve automated analysis of chest radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a per-pixel probability estimate of belonging to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields, and an Az value of 0.949 is achieved. Free-response receiver operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to that of images with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
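
    The detect-then-threshold step can be sketched as follows; the two-feature pixels, training set, and 0.5 cutoff are hypothetical stand-ins, not the paper's feature set or tuning:

```python
def knn_prob(train, labels, x, k=3):
    """kNN posterior sketch: the probability that pixel x belongs to a
    foreign object is the fraction of its k nearest training pixels
    (Euclidean distance in feature space) labeled foreign."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    return sum(labels[i] for i in order[:k]) / k

def segment(pixels, train, labels, thr=0.5, k=3):
    """Keep the indices of pixels whose foreign-object probability
    exceeds the threshold (grouping/post-processing omitted)."""
    return [i for i, x in enumerate(pixels) if knn_prob(train, labels, x, k) > thr]

# hypothetical 2-feature pixels (intensity, local gradient): foreign
# objects are bright with strong edges, lung tissue darker and smoother
train  = [(0.90, 0.80), (0.85, 0.90), (0.95, 0.70),
          (0.20, 0.10), (0.30, 0.20), (0.10, 0.15)]
labels = [1, 1, 1, 0, 0, 0]
pixels = [(0.90, 0.85), (0.25, 0.10)]
found = segment(pixels, train, labels)
```

    In the full method, the pixels selected this way are grouped into objects and then replaced by texture inpainting before downstream analysis.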

  8. Relationship between pastoralists' evaluation of rangeland state and vegetation threshold changes in Mongolian rangelands.

    PubMed

    Kakinuma, Kaoru; Sasaki, Takehiro; Jamsran, Undarmaa; Okuro, Toshiya; Takeuchi, Kazuhiko

    2014-10-01

    Applying the threshold concept to rangeland management is an important challenge in semi-arid and arid regions. Threshold recognition and prediction are necessary to enable local pastoralists to prevent the occurrence of an undesirable state that would result from unsustainable grazing pressure, but this requires a better understanding of pastoralists' perception of vegetation threshold changes. We estimated plant species cover in survey plots along grazing gradients in steppe and desert-steppe areas of Mongolia. We also conducted interviews with local pastoralists and asked them to evaluate whether the plots were suitable for grazing. Floristic composition changed nonlinearly along the grazing gradient in both the desert-steppe and steppe areas. Pastoralists observed the floristic composition changes along the grazing gradients, but their evaluations of grazing suitability did not always decrease along the gradients, both of which included areas in a post-threshold state. These results indicate that local pastoralists and scientists may have different perceptions of vegetation states, even though both groups used plant species and coverage as indicators in their evaluations. Therefore, in future studies of rangeland management, researchers and pastoralists should exchange their knowledge and perceptions to successfully apply the threshold concept.

  9. FUNDAMENTALS OF THRESHOLD LOGIC.

    DTIC Science & Technology

    These notes on threshold logic are intended as intermediary material between a completely geometric, heuristic presentation and the more formal...source material available in the literature. Basic definitions and simple properties of threshold functions are developed, followed by a complete treatment

  10. Representation of Vegetation and Other Nonerodible Elements in Aeolian Shear Stress Partitioning Models for Predicting Transport Threshold

    NASA Technical Reports Server (NTRS)

    King, James; Nickling, William G.; Gillies, John A.

    2005-01-01

    The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
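
    The Raupach et al. (1993) partitioning approach referenced above is commonly written as a threshold friction-velocity ratio R_t = [(1 - m*sigma*lam)(1 + m*beta*lam)]^(-1/2). A sketch follows; the default parameter values (beta ~ 90, m ~ 0.5) are typical literature values assumed here for illustration, not values from this abstract.

```python
def threshold_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
    """Raupach et al. (1993) ratio of threshold friction velocities
    (bare surface / roughened surface):
        R_t = [(1 - m*sigma*lam) * (1 + m*beta*lam)] ** -0.5
    lam   : roughness density (frontal area per unit ground area)
    beta  : element-to-surface drag coefficient ratio (assumed value)
    sigma : basal-to-frontal area ratio of the elements
    m     : empirical constant for surface stress inhomogeneity
    R_t < 1 means the roughened surface needs stronger wind to erode."""
    return ((1 - m * sigma * lam) * (1 + m * beta * lam)) ** -0.5

print(threshold_ratio(0.0))        # 1.0: bare surface, no sheltering
print(threshold_ratio(0.01) < 1)   # True: roughness shelters the bed
```

    The ratio falls monotonically as roughness density increases over the usual range, which is the sheltering effect the comparison in the abstract evaluates.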

  11. Radiation degradation prediction for InGaP solar cells by using appropriate estimation method for displacement threshold energy

    NASA Astrophysics Data System (ADS)

    Okuno, Y.; Okuda, S.; Akiyoshi, M.; Oka, T.; Harumoto, M.; Omura, K.; Kawakita, S.; Imaizumi, M.; Messenger, S. R.; Lee, K. H.; Yamaguchi, M.

    2017-09-01

    According to non-ionizing energy loss (NIEL) calculations, InGaP solar cells are not predicted to be susceptible to displacement damage from electron irradiation at energies below 100 keV. Recently, however, InGaP solar cells have been observed to degrade under irradiation with 60 keV electrons. This degradation is thought to be caused by radiation defects, but the mechanism is not clear. In this study, the defects generated by electrons at energies below 100 keV were identified by deep-level transient spectroscopy (DLTS). The DLTS results indicate that the prediction of primary knock-on atoms from the radiation damage model differs from the experiment. To explain the generation mechanism of the radiation defects, we propose a new displacement threshold energy (Ed), estimated by a technique that combines NIEL with the introduction rate of radiation defects. The degradation prediction using the estimated Ed agrees well with the degradation of electric power of InGaP solar cells irradiated by low-energy electrons. Based on the theory of radiation defects, we propose a new procedure for obtaining suitable degradation predictions with the displacement damage dose method.

  12. Do Shale Pore Throats Have a Threshold Diameter for Oil Storage?

    PubMed Central

    Zou, Caineng; Jin, Xu; Zhu, Rukai; Gong, Guangming; Sun, Liang; Dai, Jinxing; Meng, Depeng; Wang, Xiaoqi; Li, Jianming; Wu, Songtao; Liu, Xiaodan; Wu, Juntao; Jiang, Lei

    2015-01-01

    In this work, a nanoporous template with a controllable channel diameter was used to simulate the oil storage ability of shale pore throats. On the basis of the wetting behaviours at the nanoscale solid-liquid interfaces, the seepage of oil in nano-channels of different diameters was examined to accurately and systematically determine the effect of the pore diameter on the oil storage capacity. The results indicated that the lower threshold for oil storage was a pore throat of 20 nm, under certain conditions. This proposed pore size threshold provides novel, evidence-based criteria for estimating the geological reserves, recoverable reserves and economically recoverable reserves of shale oil. This new understanding of shale oil processes could revolutionize the related industries. PMID:26314637

  13. Do Shale Pore Throats Have a Threshold Diameter for Oil Storage?

    PubMed

    Zou, Caineng; Jin, Xu; Zhu, Rukai; Gong, Guangming; Sun, Liang; Dai, Jinxing; Meng, Depeng; Wang, Xiaoqi; Li, Jianming; Wu, Songtao; Liu, Xiaodan; Wu, Juntao; Jiang, Lei

    2015-08-28

    In this work, a nanoporous template with a controllable channel diameter was used to simulate the oil storage ability of shale pore throats. On the basis of the wetting behaviours at the nanoscale solid-liquid interfaces, the seepage of oil in nano-channels of different diameters was examined to accurately and systematically determine the effect of the pore diameter on the oil storage capacity. The results indicated that the lower threshold for oil storage was a pore throat of 20 nm, under certain conditions. This proposed pore size threshold provides novel, evidence-based criteria for estimating the geological reserves, recoverable reserves and economically recoverable reserves of shale oil. This new understanding of shale oil processes could revolutionize the related industries.

  14. [RS estimation of inventory parameters and carbon storage of moso bamboo forest based on synergistic use of object-based image analysis and decision tree].

    PubMed

    Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie

    2017-10-01

    By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, inventory indexes (diameter at breast height, tree height, and crown closure), and aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating multi-scale image segmentation in the OBIA technique with CART, which connected the image objects at various scales, achieving a producer's accuracy of 89.1%. The indexes estimated by the regression tree model constructed from features extracted from the image objects reached moderate or better accuracy; the crown closure model achieved the best estimation accuracy, 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, consistent with the conclusion that estimating these variables from optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, with accuracy above 80% in the high-value region.

  15. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.

  16. 2 CFR 200.88 - Simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Simplified acquisition threshold. 200.88... acquisition threshold. Simplified acquisition threshold means the dollar amount below which a non-Federal... threshold. The simplified acquisition threshold is set by the Federal Acquisition Regulation at 48 CFR...

  17. Reference guide to odor thresholds for hazardous air pollutants listed in the Clean Air Act amendments of 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.

    1992-03-01

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
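
    The final aggregation step, a geometric mean of the acceptable-quality threshold values, can be sketched as follows; the input values below are hypothetical, not taken from the document.

```python
import math

def geometric_mean(values):
    """Geometric mean, the conventional summary for odor thresholds,
    which span orders of magnitude and are roughly log-normally
    distributed across studies."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# hypothetical accepted odor-threshold values for one chemical, in ppm
accepted = [0.1, 1.0, 10.0]
print(round(geometric_mean(accepted), 6))  # 1.0
```

    Averaging in log space keeps a single very high or very low reported value from dominating the best estimate, which is why the geometric rather than arithmetic mean is used.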

  18. Rainfall thresholds and susceptibility mapping for shallow landslides and debris flows in Scotland

    NASA Astrophysics Data System (ADS)

    Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil

    2017-04-01

    Shallow translational slides and debris flows (hereafter 'landslides') pose a significant threat to life and cause substantial annual economic impacts (e.g., through damage and disruption of infrastructure). The focus of this research is the definition of objective rainfall thresholds using a weather radar system, and landslide susceptibility mapping. In the study area, Scotland, an inventory of 75 known landslides was used for the period 2003 to 2016. First, the effect of using different rain records (i.e., time series lengths) on two threshold selection techniques in receiver operating characteristic (ROC) analysis was evaluated. The results show that thresholds selected by 'Threat Score' (minimising false alarms) are sensitive to rain record length, which is not routinely considered, whereas thresholds selected using 'Optimal Point' (minimising failed alarms) are not; the latter may therefore be suited to establishing lower-limit thresholds and be of interest to those developing early warning systems. Robust thresholds are found for combinations of normalised rain duration and accumulation at 1 and 12 days' antecedence, respectively; these are normalised using the rainy-day normal and an equivalent measure for rain intensity. This research indicates that, in Scotland, rain accumulation provides a better indicator than rain intensity and that landslides may be generated by threshold conditions lower than previously thought. Second, a landslide susceptibility map is constructed using a cross-validated logistic regression model. A novel element of the approach is that landslide susceptibility is calculated for individual hillslope sections. The developed thresholds and susceptibility map are combined to assess potential hazards and impacts posed to the national highway network in Scotland.
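
    The two ROC-based selection rules compared above can be sketched generically (toy data, not the study's inventory or its code): for each candidate rainfall threshold, count hits, misses, and false alarms against the landslide record, then pick either the threshold maximising the Threat Score or the one closest to the perfect corner of ROC space.

```python
import math

def confusion(rain, events, thr):
    """Hits / misses / false alarms / correct negatives for one threshold."""
    hits   = sum(1 for r, e in zip(rain, events) if r >= thr and e)
    misses = sum(1 for r, e in zip(rain, events) if r <  thr and e)
    fas    = sum(1 for r, e in zip(rain, events) if r >= thr and not e)
    cns    = sum(1 for r, e in zip(rain, events) if r <  thr and not e)
    return hits, misses, fas, cns

def threat_score(h, m, f, c):
    """Threat Score (critical success index): penalises false alarms."""
    return h / (h + m + f) if (h + m + f) else 0.0

def optimal_point_distance(h, m, f, c):
    """Distance to the perfect corner (FPR=0, TPR=1) of ROC space."""
    tpr = h / (h + m) if (h + m) else 0.0
    fpr = f / (f + c) if (f + c) else 0.0
    return math.hypot(fpr, 1.0 - tpr)

# toy daily rain totals (mm) and landslide occurrence flags
rain   = [5, 40, 12, 55, 30, 8, 60, 25]
events = [0, 1,  0,  1,  0, 0, 1,  1]
cands  = sorted(set(rain))
best_ts = max(cands, key=lambda t: threat_score(*confusion(rain, events, t)))
best_op = min(cands, key=lambda t: optimal_point_distance(*confusion(rain, events, t)))
print(best_ts, best_op)  # on this toy sample both rules choose 25 mm
```

    On real records the two rules diverge: Threat Score trades missed events for fewer false alarms, while Optimal Point favours catching events, which is why the study associates it with lower-limit thresholds for early warning.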

  19. Response, thermal regulatory threshold and thermal breakdown threshold of restrained RF-exposed mice at 905 MHz

    NASA Astrophysics Data System (ADS)

    Ebert, S.; Eom, S. J.; Schuderer, J.; Apostel, U.; Tillmann, T.; Dasenbrock, C.; Kuster, N.

    2005-11-01

    The objective of this study was the determination of the thermal regulatory and the thermal breakdown thresholds for in-tube restrained B6C3F1 and NMRI mice exposed to radiofrequency electromagnetic fields at 905 MHz. Different levels of the whole-body averaged specific absorption rate (SAR = 0, 2, 5, 7.2, 10, 12.6 and 20 W kg-1) have been applied to the mice inside the 'Ferris Wheel' exposure setup at 22 ± 2 °C and 30-70% humidity. The thermal responses were assessed by measurement of the rectal temperature prior, during and after the 2 h exposure session. For B6C3F1 mice, the thermal response was examined for three different weight groups (20 g, 24 g, 29 g), both genders and for pregnant mice. Additionally, NMRI mice with a weight of 36 g were investigated for an interstrain comparison. The thermal regulatory threshold of in-tube restrained mice was found at SAR levels between 2 W kg-1 and 5 W kg-1, whereas the breakdown of regulation was determined at 10.1 ± 4.0 W kg-1(K = 2) for B6C3F1 mice and 7.7 ± 1.6 W kg-1(K = 2) for NMRI mice. Based on a simplified power balance equation, the thresholds show a clear dependence upon the metabolic rate and weight. NMRI mice were more sensitive to thermal stress and respond at lower SAR values with regulation and breakdown. The presented data suggest that the thermal breakdown for in-tube restrained mice, whole-body exposed to radiofrequency fields, may occur at SAR levels of 6 W kg-1(K = 2) at laboratory conditions.

  20. Rationality, practice variation and person-centred health policy: a threshold hypothesis.

    PubMed

    Djulbegovic, Benjamin; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Van den Ende, Jef

    2015-12-01

    Variation in practice of medicine is one of the major health policy issues of today. Ultimately, it is related to physicians' decision making. Similar patients with similar likelihood of having disease are often managed by different doctors differently: some doctors may elect to observe the patient, others decide to act based on diagnostic testing and yet others may elect to treat without testing. We explain these differences in practice by differences in disease probability thresholds at which physicians decide to act: contextual social and clinical factors and emotions such as regret affect the threshold by influencing the way doctors integrate objective data related to treatment and testing. However, depending on the theoretical construct, each of these physician behaviours can be considered rational. In fact, we showed that the current regulatory policies lead to predictably low thresholds for most decisions in contemporary practice. As a result, we may expect continuing motivation for overuse of treatment and diagnostic tests. We argue that rationality should take into account both formal principles of rationality and human intuitions about good decisions along the lines of Rawls' 'reflective equilibrium/considered judgment'. In turn, this can help define a threshold model that is empirically testable. © 2015 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
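
    The threshold model discussed above builds on the classic expected-utility action threshold of Pauker and Kassirer: treat when the probability of disease exceeds harm / (harm + benefit). A minimal sketch, with illustrative numbers:

```python
def treatment_threshold(benefit, harm):
    """Classic expected-utility action threshold: treat when the
    probability of disease exceeds harm / (harm + benefit), where
    'benefit' is the net gain from treating the diseased and 'harm'
    the net loss from treating the disease-free."""
    return harm / (harm + benefit)

# illustrative numbers only: a large benefit relative to harm pushes
# the action threshold down, encouraging intervention
print(treatment_threshold(benefit=9, harm=1))  # 0.1
```

    Contextual factors and regret, as the abstract argues, effectively shift the perceived benefit and harm, and with them the threshold at which a given physician acts.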

  1. Rationality, practice variation and person‐centred health policy: a threshold hypothesis

    PubMed Central

    Hamm, Robert M.; Mayrhofer, Thomas; Hozo, Iztok; Van den Ende, Jef

    2015-01-01

    Variation in practice of medicine is one of the major health policy issues of today. Ultimately, it is related to physicians' decision making. Similar patients with similar likelihood of having disease are often managed by different doctors differently: some doctors may elect to observe the patient, others decide to act based on diagnostic testing and yet others may elect to treat without testing. We explain these differences in practice by differences in disease probability thresholds at which physicians decide to act: contextual social and clinical factors and emotions such as regret affect the threshold by influencing the way doctors integrate objective data related to treatment and testing. However, depending on the theoretical construct, each of these physician behaviours can be considered rational. In fact, we showed that the current regulatory policies lead to predictably low thresholds for most decisions in contemporary practice. As a result, we may expect continuing motivation for overuse of treatment and diagnostic tests. We argue that rationality should take into account both formal principles of rationality and human intuitions about good decisions along the lines of Rawls' ‘reflective equilibrium/considered judgment’. In turn, this can help define a threshold model that is empirically testable. PMID:26639018

  2. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole-mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods. PMID:23525856

  3. A Systematic Review of Studies Eliciting Willingness-to-Pay per Quality-Adjusted Life Year: Does It Justify CE Threshold?

    PubMed Central

    Nimdet, Khachapon; Chaiyakunapruk, Nathorn; Vichansavakul, Kittaya; Ngorsuraches, Surachat

    2015-01-01

    Background A number of studies have been conducted to estimate willingness to pay (WTP) per quality-adjusted life year (QALY) in patients or the general population for various diseases. However, there has not been any systematic review summarizing the relationship between WTP per QALY and the cost-effectiveness (CE) threshold based on the World Health Organization (WHO) recommendation. Objective To systematically review the WTP per QALY literature, to compare WTP per QALY with the CE threshold recommended by WHO, and to determine potential influencing factors. Methods We searched MEDLINE, EMBASE, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Centre for Reviews and Dissemination (CRD), and EconLit from inception through 15 July 2014. To be included, studies had to estimate WTP per QALY for health-related issues using a stated preference method. Two investigators independently reviewed each abstract, completed full-text reviews, and extracted information for included studies. We compared WTP per QALY to GDP per capita, and analyzed and summarized potential influencing factors. Results Out of 3,914 articles found, 14 studies were included. Most studies (92.85%) used the contingent valuation method, while only one study used discrete choice experiments. Sample sizes varied from 104 to 21,896 persons. The ratio between WTP per QALY and GDP per capita varied widely from 0.05 to 5.40, depending on scenario outcomes (e.g., whether it extended/saved life or improved quality of life), severity of hypothetical scenarios, duration of scenario, and source of funding. The average ratio of WTP per QALY to GDP per capita for extending or saving life (2.03) was significantly higher than the average for improving quality of life (0.59), with a mean difference of 1.43 (95% CI, 1.06 to 1.81). Conclusion This systematic review provides an overview of all studies estimating WTP per QALY.

  4. Canal–Otolith Interactions and Detection Thresholds of Linear and Angular Components During Curved-Path Self-Motion

    PubMed Central

    MacNeilage, Paul R.; Turner, Amanda H.

    2010-01-01

    Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843

  5. The influence of thresholds on the risk assessment of carcinogens in food.

    PubMed

    Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian

    2009-08-01

    The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low as Reasonably Achievable) principle, however ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment. 
However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer

  6. A Ratiometric Threshold for Determining Presence of Cancer During Fluorescence-guided Surgery

    PubMed Central

    Warram, Jason M; de Boer, Esther; Moore, Lindsay S.; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L

    2015-01-01

    Background & Objective Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Methods Open-field and closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n=11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using mean fluorescence intensity (MFI) from punch biopsies normalized by the MFI of patient-matched tissues. Ratios were compared to pathological assessment, and a ratiometric threshold was established to predict the presence of cancer. Results During open-field imaging with an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and specificity of 78.6% for delineating diseased tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Conclusion Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. PMID:26074273
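
    The ratiometric classification rule can be sketched as follows. The 2.7 muscle-normalized threshold is from the abstract; the MFI values and sample set below are hypothetical, for illustration only.

```python
def is_tumor(mfi_biopsy, mfi_reference, threshold=2.7):
    """Call a biopsy positive when its mean fluorescence intensity (MFI),
    normalized by patient-matched reference tissue, exceeds the
    ratiometric threshold (2.7 reported for muscle normalization)."""
    return (mfi_biopsy / mfi_reference) > threshold

# hypothetical (biopsy MFI, muscle MFI, pathology-confirmed tumor?) samples
samples = [(540, 100, True), (300, 100, True), (220, 100, False), (150, 100, False)]
calls = [is_tumor(b, m) for b, m, _ in samples]
truth = [t for _, _, t in samples]
tp = sum(1 for c, t in zip(calls, truth) if c and t)
tn = sum(1 for c, t in zip(calls, truth) if not c and not t)
sensitivity = tp / truth.count(True)
specificity = tn / truth.count(False)
print(sensitivity, specificity)  # 1.0 1.0 on this toy set
```

    Normalizing by patient-matched tissue is what makes the threshold transferable between patients and devices, since raw MFI depends on dose, optics, and exposure.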

  7. Double Photoionization Near Threshold

    NASA Technical Reports Server (NTRS)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  8. Rejection Thresholds in Chocolate Milk: Evidence for Segmentation

    PubMed Central

    Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.

    2012-01-01

    Bitterness is generally considered a negative attribute in food, yet many individuals enjoy some bitterness in products like coffee or chocolate. In chocolate, bitterness arises from naturally occurring alkaloids and phenolics found in cacao. Fermentation and roasting help develop typical chocolate flavor and reduce the intense bitterness of raw cacao by modifying these bitter compounds. As it becomes increasingly common to fortify chocolate with 'raw' cacao to increase the amount of healthful phytonutrients, it is important to identify the point at which the concentration of bitter compounds becomes objectionable, even to those who enjoy some bitterness. Classical threshold methods focus on the presence or absence of a sensation rather than acceptability or hedonics. A new alternative, the rejection threshold, was recently described in the literature. Here, we sought to quantify and compare differences in Rejection Thresholds (RjT) and Detection Thresholds (DT) in chocolate milk spiked with a food safe bitterant (sucrose octaacetate). In experiment 1, a series of paired preference tests was used to estimate the RjT for bitterness in chocolate milk. In a new group of participants (experiment 2), we determined the RjT and DT using the forced choice ascending method of limits. In both studies, participants were segmented on the basis of self-declared preference for milk or dark solid chocolate. Based on sigmoid fits of the indifference-preference function, the RjT was ~2.3 times higher for those preferring dark chocolate than the RjT for those preferring milk chocolate in both experiments. In contrast, the DT for both groups was functionally identical, suggesting that differential effects of bitterness on liking of chocolate products are not based on the ability to detect bitterness in these products. PMID:22754143
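
    The threshold estimate from a fitted sigmoid can be sketched by inverting a logistic indifference-preference function at a chosen criterion. The parameters and criterion below are hypothetical illustrations, not the study's fitted values.

```python
import math

def preference_for_control(c, c50, slope):
    """Logistic indifference-preference function: proportion of paired
    tests in which the unspiked control is preferred at bitterant
    concentration c."""
    return 1.0 / (1.0 + math.exp(-(c - c50) / slope))

def rejection_threshold(c50, slope, criterion=0.75):
    """Invert the logistic at a criterion proportion (0.75 = midway
    between indifference, 0.5, and certain rejection, 1.0)."""
    return c50 + slope * math.log(criterion / (1.0 - criterion))

# hypothetical fitted parameters for the two preference segments
rjt_dark = rejection_threshold(c50=2.3, slope=0.8)
rjt_milk = rejection_threshold(c50=1.0, slope=0.8)
print(round(rjt_dark / rjt_milk, 2))  # dark-chocolate group tolerates more
```

    The same inversion applied to two segment-specific fits yields the ratio of rejection thresholds the study reports; only the fitted parameters differ between groups.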

  9. Rejection Thresholds in Chocolate Milk: Evidence for Segmentation.

    PubMed

    Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E

    2012-10-01

    Bitterness is generally considered a negative attribute in food, yet many individuals enjoy some bitterness in products like coffee or chocolate. In chocolate, bitterness arises from naturally occurring alkaloids and phenolics found in cacao. Fermentation and roasting help develop typical chocolate flavor and reduce the intense bitterness of raw cacao by modifying these bitter compounds. As it becomes increasingly common to fortify chocolate with 'raw' cacao to increase the amount of healthful phytonutrients, it is important to identify the point at which the concentration of bitter compounds becomes objectionable, even to those who enjoy some bitterness. Classical threshold methods focus on the presence or absence of a sensation rather than acceptability or hedonics. A new alternative, the rejection threshold, was recently described in the literature. Here, we sought to quantify and compare differences in Rejection Thresholds (RjT) and Detection Thresholds (DT) in chocolate milk spiked with a food safe bitterant (sucrose octaacetate). In experiment 1, a series of paired preference tests was used to estimate the RjT for bitterness in chocolate milk. In a new group of participants (experiment 2), we determined the RjT and DT using the forced choice ascending method of limits. In both studies, participants were segmented on the basis of self-declared preference for milk or dark solid chocolate. Based on sigmoid fits of the indifference-preference function, the RjT was ~2.3 times higher for those preferring dark chocolate than the RjT for those preferring milk chocolate in both experiments. In contrast, the DT for both groups was functionally identical, suggesting that differential effects of bitterness on liking of chocolate products are not based on the ability to detect bitterness in these products.

  10. How does precipitation become runoff? Comparison of hydrologic thresholds across hillslope and catchment scales

    NASA Astrophysics Data System (ADS)

    Ross, C.; Ali, G.; Oswald, C. J.; McMillan, H. K.; Walter, K.

    2017-12-01

    A hydrologic threshold is a critical point in time when runoff behavior rapidly changes, often in response to the activation of specific storage-driven or intensity-driven processes. Hydrologic thresholds can be viewed as characteristic signatures of hydrosystems, which makes them useful for site comparison as long as their presence (or lack thereof) can be evaluated in a standard manner across a range of environments. While several previous studies have successfully identified thresholds at a variety of individual sites, only a limited number have compared dynamics prevailing at the hillslope versus catchment scale, or distinguished the role of storage versus intensity thresholds. The objective of this study was therefore to examine precipitation input thresholds as well as "precipitation minus evapotranspiration" thresholds in environments with contrasting climatic and geographic characteristics. Historical climate and hydrometric datasets were consolidated for one hillslope site located at the Panola Mountain Research Watershed (Southeastern USA) and catchments located in the H.J. Andrews Experimental Forest (Northwestern USA), the Catfish Creek Watershed (Canadian prairies), the Experimental Lakes Area (Canadian boreal ecozone), the Tarrawarra catchment (Australia) and the Mahurangi catchment (New Zealand). Individual precipitation-runoff events were delineated using the newly introduced software HydRun to derive event-specific hydrograph parameters as well as surrogate measures of antecedent moisture conditions and evapotranspiration in an automated and consistent manner. Various hydrograph parameters were then plotted against those surrogate measures to detect and evaluate site-specific threshold dynamics. Preliminary results show that a range of threshold shapes (e.g., "hockey stick", Heaviside and Dirac) were observed across sites. The influence of antecedent precipitation on threshold magnitude and shape also appeared stronger at sites with lower topographic

  11. Estimating health benefits and cost-savings for achieving the Healthy People 2020 objective of reducing invasive colorectal cancer.

    PubMed

    Hung, Mei-Chuan; Ekwueme, Donatus U; White, Arica; Rim, Sun Hee; King, Jessica B; Wang, Jung-Der; Chang, Su-Hsin

    2018-01-01

    This study aims to quantify the aggregate potential life-years (LYs) saved and healthcare cost-savings if the Healthy People 2020 objective were met to reduce invasive colorectal cancer (CRC) incidence by 15%. We identified patients (n=886,380) diagnosed with invasive CRC between 2001 and 2011 from a nationally representative cancer dataset. We stratified these patients by sex, race/ethnicity, and age. Using these data and data from the 2001-2011 U.S. life tables, we estimated a survival function for each CRC group and the corresponding reference group and computed per-person LYs saved. We estimated per-person annual healthcare cost-savings using the 2008-2012 Medical Expenditure Panel Survey. We calculated aggregate LYs saved and cost-savings by multiplying the reduced number of CRC patients by the per-person LYs saved and lifetime healthcare cost-savings, respectively. We estimated an aggregate of 84,569 and 64,924 LYs saved for men and women, respectively, accounting for healthcare cost-savings of $329.3 and $294.2 million (in 2013$), respectively. Per person who developed CRC, we estimated 6.3 potential LYs saved for both men and women, and healthcare cost-savings of $24,000 for men and $28,000 for women. Non-Hispanic whites and those aged 60-64 had the highest aggregate potential LYs saved and cost-savings. Achieving the HP2020 objective of reducing invasive CRC incidence by 15% by year 2020 would potentially save nearly 150,000 life-years and $624 million in healthcare costs. Copyright © 2017. Published by Elsevier Inc.
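The aggregation step described above is simple arithmetic: the number of averted cases multiplied by the per-person figures. A hedged sketch with illustrative inputs (the function name and example numbers are not from the study):

```python
def aggregate_impact(n_cases, reduction, ly_per_person, savings_per_person):
    """Aggregate life-years saved and healthcare cost-savings when
    incidence falls by `reduction` (0.15 for the HP2020 15% target):
    the averted case count times the per-person figures."""
    averted = n_cases * reduction
    return averted * ly_per_person, averted * savings_per_person

# Hypothetical cohort of 1,000 cases, using the per-person estimates
# reported for men (6.3 LYs, $24,000):
lys_saved, cost_saved = aggregate_impact(1000, 0.15, 6.3, 24000.0)
```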

  12. Double-adjustment in propensity score matching analysis: choosing a threshold for considering residual imbalance.

    PubMed

    Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick

    2017-04-28

    Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
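The balance diagnostic used above, the standardized mean difference across matched groups, can be sketched as follows. This is a simplified illustration, not the authors' simulation code; the variable names and data are assumptions:

```python
import numpy as np

def standardized_mean_difference(treated, control):
    """Absolute standardized difference in means for one covariate,
    using the pooled standard deviation of the two matched groups."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2.0)
    return abs(treated.mean() - control.mean()) / pooled_sd

def covariates_to_adjust(treated, control, threshold=0.10):
    """Names of covariates whose residual imbalance after matching
    exceeds the threshold; only these enter the outcome regression."""
    return [name for name in treated
            if standardized_mean_difference(treated[name], control[name]) > threshold]

# Hypothetical matched samples: "age" remains imbalanced, "bmi" does not.
treated = {"age": np.array([60., 62., 64., 66.]), "bmi": np.array([25., 26., 27., 28.])}
control = {"age": np.array([50., 52., 54., 56.]), "bmi": np.array([25.1, 26.1, 27.1, 27.9])}
unbalanced = covariates_to_adjust(treated, control)   # ["age"] here
```

With the 0.10 threshold recommended in the abstract, only covariates still showing meaningful imbalance after matching would be entered into the double-adjustment regression.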

  13. Cost-effectiveness of the faecal immunochemical test at a range of positivity thresholds compared with the guaiac faecal occult blood test in the NHS Bowel Cancer Screening Programme in England

    PubMed Central

    Halloran, Stephen

    2017-01-01

    Objectives Through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP), men and women in England aged between 60 and 74 years are invited for colorectal cancer (CRC) screening every 2 years using the guaiac faecal occult blood test (gFOBT). The aim of this analysis was to estimate the cost–utility of the faecal immunochemical test for haemoglobin (FIT) compared with gFOBT for a cohort beginning screening aged 60 years at a range of FIT positivity thresholds. Design We constructed a cohort-based Markov state transition model of CRC disease progression and screening. Screening uptake, detection, adverse event, mortality and cost data were taken from BCSP data and national sources, including a recent large pilot study of FIT screening in the BCSP. Results Our results suggest that FIT is cost-effective compared with gFOBT at all thresholds, resulting in cost savings and quality-adjusted life years (QALYs) gained over a lifetime time horizon. FIT was cost-saving (p<0.001) and resulted in QALY gains of 0.014 (95% CI 0.012 to 0.017) at the base case threshold of 180 µg Hb/g faeces. Greater health gains and cost savings were achieved as the FIT threshold was decreased due to savings in cancer management costs. However, at lower thresholds, FIT was also associated with more colonoscopies (increasing from 32 additional colonoscopies per 1000 people invited for screening for FIT 180 µg Hb/g faeces to 421 additional colonoscopies per 1000 people invited for screening for FIT 20 µg Hb/g faeces over a 40-year time horizon). Parameter uncertainty had limited impact on the conclusions. Conclusions This is the first published economic analysis of FIT screening in England using data directly comparing FIT with gFOBT in the NHS BCSP. These results for a cohort starting screening aged 60 years suggest that FIT is highly cost-effective at all thresholds considered. Further modelling is needed to estimate economic outcomes for screening across all age

  14. An approach to derive groundwater and stream threshold values for total nitrogen and ensure good ecological status of associated aquatic ecosystems - example from a coastal catchment to a vulnerable Danish estuary.

    NASA Astrophysics Data System (ADS)

    Hinsby, Klaus; Markager, Stiig; Kronvang, Brian; Windolf, Jørgen; Sonnenborg, Torben; Sørensen, Lærke

    2015-04-01

    Nitrate typically makes up the major part (~>90%) of dissolved inorganic nitrogen in groundwater and surface water, and it is the pollutant most frequently responsible for European groundwater bodies failing to meet the good-status objectives of the European Water Framework Directive when groundwater monitoring data are compared with the nitrate quality standard of the Groundwater Directive (50 mg/l, the WHO drinking water standard). Still, while more than 50 % of European surface water bodies do not meet the objective of good ecological status, "only" 25 % of groundwater bodies fail the objective of good chemical status according to the river basin management plans reported by the EU member states. However, based on a study of interactions between groundwater, streams and a Danish estuary, we argue that nitrate threshold values for aerobic groundwater often need to be significantly below the nitrate quality standard to ensure good ecological status of associated surface water bodies, and hence that the chemical status of European groundwater is worse than indicated by the present assessments. Here we suggest a methodology for deriving groundwater and stream threshold values for total nitrogen ("nitrate") in a coastal catchment, based on an assessment of the maximum acceptable nitrogen loadings (thresholds) to the associated vulnerable estuary. The method uses existing information on agricultural practices and point-source emissions in the catchment, together with groundwater and stream quantity and quality monitoring data, all of which feed an integrated groundwater and surface water modelling tool that enables us to assess total nitrogen loads and to derive threshold concentrations that ensure/restore good ecological status of the investigated estuary. For the catchment to the Horsens estuary in Denmark we estimate the stream and groundwater thresholds for total nitrogen to be about 13 and 27 mg/l (~ 12 and 25 mg/l of nitrate).
The shown example of

  15. Between-airport heterogeneity in air toxics emissions associated with individual cancer risk thresholds and population risks

    PubMed Central

    2009-01-01

    Background Airports represent a complex source type of increasing importance contributing to air toxics risks. Comprehensive atmospheric dispersion models are beyond the scope of many applications, so it would be valuable to rapidly but accurately characterize the risk-relevant exposure implications of emissions at an airport. Methods In this study, we apply a high resolution atmospheric dispersion model (AERMOD) to 32 airports across the United States, focusing on benzene, 1,3-butadiene, and benzo [a]pyrene. We estimate the emission rates required at these airports to exceed a 10-6 lifetime cancer risk for the maximally exposed individual (emission thresholds) and estimate the total population risk at these emission rates. Results The emission thresholds vary by two orders of magnitude across airports, with variability predicted by proximity of populations to the airport and mixing height (R2 = 0.74–0.75 across pollutants). At these emission thresholds, the population risk within 50 km of the airport varies by two orders of magnitude across airports, driven by substantial heterogeneity in total population exposure per unit emissions that is related to population density and uncorrelated with emission thresholds. Conclusion Our findings indicate that site characteristics can be used to accurately predict maximum individual risk and total population risk at a given level of emissions, but that optimizing on one endpoint will be non-optimal for the other. PMID:19426510

  16. NETWORK SYNTHESIS OF CASCADED THRESHOLD ELEMENTS.

    DTIC Science & Technology

    A threshold function is a switching function which can be simulated by a single, simplified, idealized neuron, or threshold element. In this report, threshold functions are examined in the context of abstract set theory and linear algebra for the purpose of obtaining practical synthesis procedures for networks of threshold elements. A procedure is described by which, for any given switching function, a cascade network of these elements can be

  17. Estimating the impact of adopting the revised United Kingdom acetaminophen treatment nomogram in the U.S. population.

    PubMed

    Levine, Michael; Stellpflug, Sam; Pizon, Anthony F; Traub, Stephen; Vohra, Rais; Wiegand, Timothy; Traub, Nicole; Tashman, David; Desai, Shoma; Chang, Jamie; Nathwani, Dhruv; Thomas, Stephen

    2017-07-01

    Acetaminophen toxicity is common in clinical practice. In recent years, several European countries have lowered the treatment threshold, which has resulted in an increased number of patients being treated at questionable clinical benefit. The primary objective of this study is to estimate the cost and associated burden to the United States (U.S.) healthcare system if such a change were adopted in the U.S. This study is a retrospective review of all patients aged 14 years or older who were admitted to one of eight different hospitals located throughout the U.S. with acetaminophen exposures over a five-and-a-half-year span, from 1 January 2008 to 30 June 2013. Those patients who would be treated with the revised nomogram, but not the current nomogram, were included. The cost of such treatment was extrapolated to a national level. 139 subjects were identified who would be treated with the revised nomogram but not the current nomogram. Extrapolating these numbers nationally, an additional 4507 (95%CI 3641-8751) Americans would be treated annually for acetaminophen toxicity. The cost of lowering the treatment threshold is estimated to be $45 million (95%CI 36,400,000-87,500,000) annually. Adopting the revised treatment threshold in the U.S. would result in a significant cost, yet provide an unclear clinical benefit.

  18. Implications of lower risk thresholds for statin treatment in primary prevention: analysis of CPRD and simulation modelling of annual cholesterol monitoring.

    PubMed

    McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael

    2015-01-01

    To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by 5 years and 71% of men and 73% of women by 10 years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by 5 years and 92% of men and 98% of women by 10 years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and whether there are harms. Copyright © 2014. Published by Elsevier Inc.

  19. Estimation of the tumor size at cure threshold among aggressive non-small cell lung cancers (NSCLCs): evidence from the surveillance, epidemiology, and end results (SEER) program and the national lung screening trial (NLST).

    PubMed

    Goldwasser, Deborah L

    2017-03-15

    The National Lung Screening Trial (NLST) demonstrated that non-small cell lung cancer (NSCLC) mortality can be reduced by a program of annual CT screening in high-risk individuals. However, CT screening regimens and adherence vary, potentially impacting the lung cancer mortality benefit. We defined the NSCLC cure threshold as the maximum tumor size at which a given NSCLC would be curable due to early detection. We obtained data from 518,234 NSCLCs documented in the U.S. SEER cancer registry between 1988 and 2012 and 1769 NSCLCs detected in the NLST. We demonstrated mathematically that the distribution function governing the cure threshold for the most aggressive NSCLCs, G(x|Φ = 1), was embedded in the probability function governing detection of SEER-documented NSCLCs. We determined the resulting probability functions governing detection over a range of G(x|Φ = 1) scenarios and compared them with their expected functional forms. We constructed a simulation framework to determine the cure threshold models most consistent with tumor sizes and outcomes documented in SEER and the NLST. Whereas the median tumor size for lethal NSCLCs documented in SEER is 43 mm (males) and 40 mm (females), a simulation model in which the median cure threshold for the most aggressive NSCLCs is 10 mm (males) and 15 mm (females) best fit the SEER and NLST data. The majority of NSCLCs in the NLST were treated at sizes greater than our median cure threshold estimates. New technology is needed to better distinguish and treat the most aggressive NSCLCs when they are small (i.e., 5-15 mm). © 2016 UICC.

  20. Determining lower threshold concentrations for synergistic effects.

    PubMed

    Bjergager, Maj-Britt Andersen; Dalhoff, Kristoffer; Kretschmann, Andreas; Nørgaard, Katrine Banke; Mayer, Philipp; Cedergreen, Nina

    2017-01-01

    Though they occur only rarely, synergistic interactions between chemicals in mixtures have long been a point of focus. Most studies analyzing synergistic interactions used unrealistically high chemical concentrations. The aim of the present study is to determine the threshold concentration below which proven synergists cease to act as synergists towards the aquatic crustacean Daphnia magna. To do this, we compared several approaches and test setups to evaluate which approach gives the most conservative estimate of the lower threshold for synergy for three known azole synergists. We focus on synergistic interactions between the pyrethroid insecticide alpha-cypermethrin and one of the three azole fungicides prochloraz, propiconazole or epoxiconazole, measured on Daphnia magna immobilization. Three different experimental setups were applied: a standard 48 h acute toxicity test, an adapted 48 h test using passive dosing for constant chemical exposure concentrations, and a 14-day test. Synergy was defined as occurring in mixtures where either EC 50 values decreased more than two-fold below what was predicted by concentration addition (horizontal assessment) or the fraction of immobile organisms increased more than two-fold above what was predicted by independent action (vertical assessment). All three tests confirmed the hypothesis of the existence of a lower azole threshold concentration below which no synergistic interaction was observed. The lower threshold concentration, however, decreased with increasing test duration, from 0.026±0.013 μM (9.794±4.897 μg L-1), 0.425±0.089 μM (145.435±30.46 μg L-1) and 0.757±0.253 μM (249.659±83.44 μg L-1) for prochloraz, propiconazole and epoxiconazole in standard 48 h toxicity tests to 0.015±0.004 μM (5.651±1.507 μg L-1), 0.145±0.025 μM (49.619±8.555 μg L-1) and 0.122±0.042 μM (40.236±13.75 μg L-1), respectively, in the 14-day tests. Testing synergy in relation to concentration addition provided
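The horizontal (EC50-based) assessment can be illustrated with the standard concentration-addition prediction, 1/EC50_mix = Σ p_i/EC50_i, together with the two-fold criterion described above. A minimal sketch with hypothetical values (not the study's data):

```python
def ca_predicted_ec50(fractions, ec50s):
    """EC50 predicted by concentration addition (CA) for a mixture whose
    components are present at the given concentration fractions."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def is_synergistic(observed_ec50, fractions, ec50s, fold=2.0):
    """Horizontal assessment: synergy when the observed mixture EC50 is
    more than `fold` times lower than the CA prediction."""
    return observed_ec50 < ca_predicted_ec50(fractions, ec50s) / fold

# Hypothetical 50:50 mixture of a potent pyrethroid and a weakly toxic
# azole (EC50 values in uM, invented for illustration):
predicted = ca_predicted_ec50([0.5, 0.5], [0.004, 2.0])
synergy = is_synergistic(0.002, [0.5, 0.5], [0.004, 2.0])
```

Because concentration addition weights each component by its potency, the weakly toxic azole barely lowers the predicted mixture EC50; a much lower observed EC50 therefore flags synergy.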

  1. Effect of Age and Severity of Facial Palsy on Taste Thresholds in Bell's Palsy Patients

    PubMed Central

    Park, Jung Min; Kim, Myung Gu; Jung, Junyang; Kim, Sung Su; Jung, A Ra; Kim, Sang Hoon

    2017-01-01

    Background and Objectives To investigate whether taste thresholds, as determined by electrogustometry (EGM) and chemical taste tests, differ by age and the severity of facial palsy in patients with Bell's palsy. Subjects and Methods This study included 29 patients diagnosed with Bell's palsy between January 2014 and May 2015 in our hospital. Patients were grouped by age and by severity of facial palsy, as determined by the House-Brackmann scale, and their taste thresholds were assessed by EGM and chemical taste tests. Results EGM showed that taste thresholds at four locations on the tongue and one location on the central soft palate, 1 cm from the palatine uvula, were significantly higher in Bell's palsy patients than in controls (p<0.05). In contrast, chemical taste tests showed no significant differences in taste thresholds between the two groups (p>0.05). The severity of facial palsy did not affect taste thresholds, as determined by both EGM and chemical taste tests (p>0.05). The overall mean electrical taste thresholds on EGM were higher in younger Bell's palsy patients than in healthy subjects, with the difference at the back-right area of the tongue differing significantly (p<0.05). In older individuals, however, no significant differences in taste thresholds were observed between Bell's palsy patients and healthy subjects (p>0.05). Conclusions Electrical taste thresholds were higher in Bell's palsy patients than in controls. These differences were observed in younger, but not in older, individuals. PMID:28417103

  2. Molecular Signaling Network Motifs Provide a Mechanistic Basis for Cellular Threshold Responses

    PubMed Central

    Bhattacharya, Sudin; Conolly, Rory B.; Clewell, Harvey J.; Kaminski, Norbert E.; Andersen, Melvin E.

    2014-01-01

    Background: Increasingly, there is a move toward using in vitro toxicity testing to assess human health risk due to chemical exposure. As with in vivo toxicity testing, an important question for in vitro results is whether there are thresholds for adverse cellular responses. Empirical evaluations may show consistency with thresholds, but the main evidence has to come from mechanistic considerations. Objectives: Cellular response behaviors depend on the molecular pathway and circuitry in the cell and the manner in which chemicals perturb these circuits. Understanding circuit structures that are inherently capable of resisting small perturbations and producing threshold responses is an important step towards mechanistically interpreting in vitro testing data. Methods: Here we have examined dose–response characteristics for several biochemical network motifs. These network motifs are basic building blocks of molecular circuits underpinning a variety of cellular functions, including adaptation, homeostasis, proliferation, differentiation, and apoptosis. For each motif, we present biological examples and models to illustrate how thresholds arise from specific network structures. Discussion and Conclusion: Integral feedback, feedforward, and transcritical bifurcation motifs can generate thresholds. Other motifs (e.g., proportional feedback and ultrasensitivity) produce responses where the slope in the low-dose region is small and stays close to the baseline. Feedforward control may lead to nonmonotonic or hormetic responses. We conclude that network motifs provide a basis for understanding thresholds for cellular responses. Computational pathway modeling of these motifs and their combinations occurring in molecular signaling networks will be a key element in new risk assessment approaches based on in vitro cellular assays. Citation: Zhang Q, Bhattacharya S, Conolly RB, Clewell HJ III, Kaminski NE, Andersen ME. 2014. Molecular signaling network motifs provide a

  3. Pausing at the Threshold

    ERIC Educational Resources Information Center

    Morgan, Patrick K.

    2015-01-01

    Since about 2003, the notion of threshold concepts--the central ideas in any field that change how learners think about other ideas--has become difficult to escape at library conferences and in general information literacy discourse. Their visibility will likely only increase because threshold concepts figure prominently in the Framework for…

  4. Neurology objective structured clinical examination reliability using generalizability theory.

    PubMed

    Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R

    2015-11-03

    This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.
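The projection step of a G-theory decision study can be sketched for the simplest one-facet case, where absolute error variance shrinks as 1/n over cases. This is an illustrative simplification of the multi-facet design used in the study; the variance values are assumptions, not the paper's estimates:

```python
def phi_coefficient(var_person, var_abs_error, n_cases):
    """Phi (absolute) coefficient for a design averaging over n_cases,
    with absolute error variance shrinking as 1/n_cases."""
    return var_person / (var_person + var_abs_error / n_cases)

def cases_needed(var_person, var_abs_error, target=0.70):
    """Smallest number of cases projecting reliability past the target."""
    n = 1
    while phi_coefficient(var_person, var_abs_error, n) < target:
        n += 1
    return n

# Toy variance components: person and error variance equal.
n = cases_needed(1.0, 1.0)   # 3 cases push Phi past 0.70 in this example
```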

  5. On thermal corrections to near-threshold annihilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Laine, M., E-mail: skim@sejong.ac.kr, E-mail: laine@itp.unibe.ch

    2017-01-01

    We consider non-relativistic "dark" particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated annihilating TeV scale particles. Weakly bound states are shown to 'melt' below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the annihilation rate by a factor 4 ... 80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.

  6. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    PubMed Central

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.; Roland, Per E.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients transform the trajectories to be outward directed, fast and crossing the threshold to become evoked. Although the speeds of the evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes. PMID:27582693

  7. Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.

    PubMed

    Louthain, James A; Schmidt, Jason D

    2010-04-26

    Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly, adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off-keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10(-5) and 10(-3), consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest Rytov number scenario.

  8. Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size

    NASA Technical Reports Server (NTRS)

    Xu, Y. -l; Horstman, M.; Krisko, P. H.; Liou, J. -C; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.

    2008-01-01

    Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes from 10 microns to greater than 1 meter. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (greater than or equal to 1 m and greater than or equal to 10 cm) populations by SSN catalog objects and the validation of the statistical approach. Also, it presents results for the populations with sizes of greater than or equal to 3.3 cm, greater than or equal to 1 cm, greater than or equal to 100 micrometers, and greater than or equal to 10 micrometers. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size-range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.

  9. The hearing threshold of a harbor porpoise (Phocoena phocoena) for impulsive sounds (L).

    PubMed

    Kastelein, Ronald A; Gransier, Robin; Hoek, Lean; de Jong, Christ A F

    2012-08-01

    The distance at which harbor porpoises can hear underwater detonation sounds is unknown, but depends, among other factors, on the hearing threshold of the species for impulsive sounds. Therefore, the underwater hearing threshold of a young harbor porpoise for an impulsive sound, designed to mimic a detonation pulse, was quantified by using a psychophysical technique. The synthetic exponential pulse with a 5 ms time constant was produced and transmitted by an underwater projector in a pool. The resulting underwater sound, though modified by the response of the projection system and by the pool, exhibited the characteristic features of detonation sounds: a zero-to-peak sound pressure level at least 30 dB (re 1 s(-1)) higher than the sound exposure level, and a short duration (34 ms). The animal's 50% detection threshold for this impulsive sound occurred at a received unweighted broadband sound exposure level of 60 dB re 1 μPa(2)s. It is shown that the porpoise's audiogram for short-duration tonal signals [Kastelein et al., J. Acoust. Soc. Am. 128, 3211-3222 (2010)] can be used to estimate its hearing threshold for impulsive sounds.
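The sound exposure level (SEL) used above integrates squared pressure over the pulse duration, expressed in dB re 1 µPa²·s. A minimal sketch for a sampled pressure series follows (illustrative only, not the study's analysis code):

```python
import math

def sound_exposure_level(pressures_pa, dt, p_ref=1e-6, t_ref=1.0):
    """Unweighted broadband SEL (dB re 1 uPa^2*s) from a pressure time
    series sampled in Pa at interval dt seconds: 10*log10 of the time
    integral of squared pressure over the reference p_ref^2 * t_ref."""
    exposure = sum(p * p for p in pressures_pa) * dt
    return 10.0 * math.log10(exposure / (p_ref ** 2 * t_ref))

# A constant 1 Pa pressure held for 1 s gives 1 Pa^2*s of exposure,
# i.e. 120 dB re 1 uPa^2*s:
sel = sound_exposure_level([1.0] * 1000, 0.001)
```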

  10. [Correlation of perceptive temperature threshold of oral mucosa and sympathetic skin response].

    PubMed

    Wang, Z G; Dong, T Z; Li, J; Chen, G

    2018-02-09

    Objectives: To explore the critical values of temperature perception at various mucosal sites of the oral cavity, to draw perceptive temperature threshold maps in healthy volunteers, and to observe the interrelationship between subjective cognitive perception and the sympathetic skin response (SSR) under various levels of thermal stimuli. Methods: Forty-two healthy volunteers (recruited from the students of Tianjin Medical University, 16 females and 26 males) were enrolled in the present study. The whole oral mucosa of each subject was divided into multiple partitions according to mucosa type as well as tooth position. A Peltier (trade name) semiconductor chip was placed in the central part of each mucosal subarea. The stimulus was increased or decreased in 1 ℃ steps from a baseline temperature of 37 ℃, and warm (WT) and cold (CT) perception thresholds were measured. A topographic temperature map of the oral mucosa was drawn for each subject. Furthermore, the SSR was elicited and recorded at three temperature levels (50 ℃, 55 ℃ and 60 ℃). Analog tests with the visual analogue scale (VAS) and McGill scales were also performed. Data were statistically analyzed with analysis of variance and generalized estimating equations. Results: The tip of the tongue was the most sensitive area for both WT [(38.8±2.1) ℃, P< 0.05] and CT [(23.5±4.2) ℃, P< 0.05]. The highest heat threshold of the gingival mucosa was in the left lower posterior teeth area [(49.9±3.7) ℃, P< 0.05], and the highest cold threshold of the gingival mucosa was in the left upper posterior teeth area [(15.9±5.5) ℃, P< 0.05]. Perceptive temperature thresholds increased gradually from the midline toward the left and right sides, symmetrically and bilaterally. There were no statistically significant differences in temperature perception threshold between males and females [WT, male (44.8±3.1) ℃, female (44.8±3.2) ℃, OR= 1.100, P= 0.930; CT, Male (18.4

  11. Is the Functional Threshold Power (FTP) a Valid Surrogate of the Lactate Threshold?

    PubMed

    Valenzuela, Pedro L; Morales, Javier S; Foster, Carl; Lucia, Alejandro; de la Villa, Pedro

    2018-05-10

This study aimed to analyze the relationship between the Functional Threshold Power (FTP) and the Lactate Threshold (LT). 20 male cyclists performed an incremental test in which the LT was determined. At least 48 h later, they performed a 20-minute time trial, and 95% of the mean power output (P20) was defined as the FTP. Participants were divided into recreational cyclists (RC; Peak Power Output [PPO]<4.5 W∙kg-1, n=11) or trained cyclists (TC; PPO>4.5 W∙kg-1, n=9) according to their fitness status. The FTP (240±35 W) was overall not significantly different (effect size [ES]=0.20, limits of agreement [LoA]=-2.4±11.5%) from the LT (246±24 W), and the two markers were strongly correlated (r=0.95, p<0.0001). Accounting for the participants' fitness status, no significant differences were found between the FTP and the LT (ES=0.22; LoA=2.1±7.8%) in TC, but the FTP was significantly lower than the LT (p=0.0004, ES=0.81; LoA=-6.5±8.3%) in RC. A significant relationship was found between relative PPO and the bias between the FTP and the LT (r=0.77, p<0.0001). The FTP is a valid field test-based marker for the assessment of endurance fitness. However, caution should be taken when using the FTP interchangeably with the LT, as the bias between markers seems to depend on the athletes' fitness status. Whereas the FTP provides a good estimate of the LT in trained cyclists, in recreational cyclists it may underestimate the LT.
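The FTP definition used in this record (95% of the 20-minute mean power) is simple enough to sketch directly. The power values below are hypothetical, not the study's data:

```python
def functional_threshold_power(power_samples_watts):
    """Estimate FTP as 95% of the mean power output (P20) of a 20-min time trial."""
    p20 = sum(power_samples_watts) / len(power_samples_watts)
    return 0.95 * p20

# Hypothetical per-interval mean power values (W) recorded over the trial
samples = [250, 255, 252, 256, 251]
ftp = functional_threshold_power(samples)  # 95% of P20 = 252.8 W
```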

  12. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

The aim of this research is to compare several lung image segmentation methods based on the performance evaluation parameters Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to have good quality if it yields the smallest MSE and the highest PSNR. The results show that connected threshold performed best for four of the five sample images, while threshold level set segmentation performed best for the remaining one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
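The study computed MSE and PSNR in MATLAB; an equivalent sketch of the two metrics in Python/NumPy (an illustrative substitute, not the authors' code) is:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized grayscale images."""
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the segmentation
    is closer to the reference image."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)
```

A method is then ranked by the smallest MSE and the highest PSNR across the sample images.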

  13. LOGIC OF CONTROLLED THRESHOLD DEVICES.

    DTIC Science & Technology

    The synthesis of threshold logic circuits from several points of view is presented. The first approach is applicable to resistor-transistor networks...in which the outputs are tied to a common collector resistor. In general, fewer threshold logic gates than NOR gates connected to a common collector...network to realize a specified function such that the failure of any but the output gate can be compensated for by a change in the threshold level (and

  14. THRESHOLD LOGIC SYNTHESIS OF SEQUENTIAL MACHINES.

    DTIC Science & Technology

    The application of threshold logic to the design of sequential machines is the subject of this research. A single layer of threshold logic units in...advantages of fewer components because of the use of threshold logic, along with very high-speed operation resulting from the use of only a single layer of...logic. In some instances, namely for asynchronous machines, the only delay need be the natural delay of the single layer of threshold elements. It is

  15. Threshold Concepts in Economics

    ERIC Educational Resources Information Center

    Shanahan, Martin

    2016-01-01

Purpose: The purpose of this paper is to examine threshold concepts in the context of teaching and learning first-year university economics. It outlines some of the arguments for using threshold concepts and provides examples using opportunity cost as an exemplar in economics. Design/Methodology/Approach: The paper provides an overview of the…

  16. Object-based landslide detection in different geographic regions

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

Landslides occur in almost all mountainous regions of the world and rank among the most severe natural hazards. In the last decade - according to the World Disasters Report 2014 published by the International Federation of Red Cross and Red Crescent Societies (IFRC) - more than 9,000 people were killed by mass movements, more than 3.2 million people were affected, and the total estimated disaster damage amounted to more than 1,700 million US dollars. The application of remote sensing data for mapping landslides can contribute to post-disaster reconstruction or hazard mitigation, either by providing rapid information about the spatial distribution and location of landslides in the aftermath of triggering events or by creating and updating landslide inventories. This is especially valuable for remote and inaccessible areas, where information on landslides is often lacking. However, reliable methods are needed for extracting timely and relevant information about landslides from remote sensing data. In recent years, novel methods such as object-based image analysis (OBIA) have been successfully employed for semi-automated landslide mapping. Several studies revealed that OBIA frequently outperforms pixel-based approaches, as a range of image object properties (spectral, spatial, morphometric, contextual) can be exploited during the analysis. However, object-based methods are often tailored to specific study areas, and thus their transferability to regions with different geological settings is often limited. The present case study evaluates the transferability and applicability of an OBIA approach for landslide detection in two distinct regions, i.e. the island of Taiwan and Austria. In Taiwan, sub-areas in the Baichi catchment in the North and in the Huaguoshan catchment in the southern-central part of the island are selected; in Austria, landslide-affected sites in the Upper Salzach catchment in the federal state of Salzburg are investigated. For both regions

  17. Assessment of the Anticonvulsant Potency of Ursolic Acid in Seizure Threshold Tests in Mice.

    PubMed

    Nieoczym, Dorota; Socała, Katarzyna; Wlaź, Piotr

    2018-05-01

Ursolic acid (UA) is a plant-derived compound which is also a component of the standard human diet. It possesses a wide range of pharmacological properties, e.g., antioxidant, anti-inflammatory, antimicrobial and antitumor, and has been used in folk medicine for centuries. Moreover, effects of UA on central nervous system-related processes, i.e., pain, anxiety and depression, have been demonstrated in experimental studies. UA has also revealed anticonvulsant properties in animal models of epilepsy and seizures. The aim of the present study was to investigate the influence of UA on seizure thresholds in three acute seizure models in mice, i.e., the 6 Hz-induced psychomotor seizure threshold test, the maximal electroshock threshold (MEST) test and the timed intravenous pentylenetetrazole (iv PTZ) infusion test. We also examined its effect on muscular strength (assessed in the grip strength test) and motor coordination (estimated in the chimney test) in mice. UA at doses of 50 and 100 mg/kg significantly increased the seizure thresholds in the 6 Hz and MEST tests. The studied compound did not influence the seizure thresholds in the iv PTZ test. Moreover, UA did not affect motor coordination or muscular strength in mice. UA displays only a weak anticonvulsant potential which is dependent on the seizure model used.

  18. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
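The two mechanisms in this record (drift-compensated threshold recomputation and a qualification width counter) can be sketched as follows. The names and structure are illustrative assumptions, not the patented implementation:

```python
def adjust_thresholds(upper, lower, baseline_quiescent, current_quiescent):
    """Shift both trigger thresholds by the measured drift in the
    quiescent signal level, preserving the original trigger margins."""
    drift = current_quiescent - baseline_quiescent
    return upper + drift, lower + drift

class QualifiedTrigger:
    """Fire only after the threshold criterion has been met for `width`
    consecutive samples, reducing false triggers from transient noise."""
    def __init__(self, threshold, width):
        self.threshold = threshold
        self.width = width
        self._count = 0

    def sample(self, value):
        self._count = self._count + 1 if value > self.threshold else 0
        return self._count >= self.width
```

In use, `adjust_thresholds` would be re-run periodically (on a time or counter basis, as the record describes) while `QualifiedTrigger.sample` screens each incoming value.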

  19. Estimating soil moisture exceedance probability from antecedent rainfall

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.

    2016-12-01

    The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
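Once missing rainfall records have been imputed, the exceedance probability itself is an empirical conditional frequency. A simplified sketch (variable names and the simple conditioning are assumptions; the study's stochastic imputation model is not reproduced here):

```python
def vwc_exceedance_probability(antecedent_rain, vwc, rain_level, vwc_threshold):
    """Empirical probability that volumetric water content (VWC) exceeds
    a threshold among observations with antecedent rainfall >= rain_level."""
    selected = [v for r, v in zip(antecedent_rain, vwc) if r >= rain_level]
    if not selected:
        return float("nan")  # no observations at this rainfall level
    return sum(v > vwc_threshold for v in selected) / len(selected)

# Hypothetical paired observations: seasonal antecedent rain (mm) and VWC
rain = [10, 20, 30, 40]
water_content = [0.10, 0.20, 0.35, 0.40]
p = vwc_exceedance_probability(rain, water_content,
                               rain_level=20, vwc_threshold=0.30)
```

Evaluating this over many imputed rainfall realizations would give a spread of exceedance curves, i.e. the uncertainty estimate the record describes.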

  20. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    PubMed

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L

    2013-01-01

    Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that