NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
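As a rough illustration of the single-pass threshold-and-count idea summarized above (not the HRMS hardware design), the following Python sketch assumes square-law-detected Gaussian noise, so the power samples are exponentially distributed and the exceedance fraction at a fixed threshold determines the noise power; running several thresholds in parallel widens the usable dynamic range. All names and threshold values are illustrative.

```python
import numpy as np

def threshold_and_count_power(samples, thresholds):
    """Estimate noise power from exceedance counts at several fixed thresholds.

    Single-pass sketch of the 'parallel threshold-and-count' idea: for
    square-law-detected Gaussian noise the power samples are exponential,
    so P(x > T) = exp(-T / p) and the power p can be recovered from the
    exceedance fraction at any threshold T.  Several thresholds are run in
    parallel and the best-conditioned count (neither ~0 nor ~100 %) is used,
    which widens the usable dynamic range.  Illustrative only.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    best = None
    for T in thresholds:
        k = np.count_nonzero(samples > T)          # one counter per threshold
        frac = k / n
        if 0.05 < frac < 0.95:                     # well-conditioned count
            est = T / np.log(1.0 / frac)           # invert exp(-T/p) = frac
            score = abs(frac - 0.5)                # prefer counts near 50 %
            if best is None or score < best[0]:
                best = (score, est)
    return None if best is None else best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_power = 3.7
    x = rng.exponential(true_power, size=20000)
    thresholds = [0.1, 1.0, 10.0, 100.0]           # spaced to cover a wide dynamic range
    print("estimated noise power:", threshold_and_count_power(x, thresholds))
```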
Detection of exudates in fundus imagery using a constant false-alarm rate (CFAR) detector
NASA Astrophysics Data System (ADS)
Khanna, Manish; Kapoor, Elina
2014-05-01
Diabetic retinopathy is the leading cause of blindness in adults in the United States. The presence of exudates in fundus imagery is an early sign of diabetic retinopathy, so detection of these lesions is essential to preventing further ocular damage. In this paper we present a novel technique to automatically detect exudates in fundus imagery that is robust against spatial and temporal variations of background noise. The detection threshold is adjusted dynamically, based on the local noise statistics around the pixel under test, in order to maintain a pre-determined, constant false alarm rate (CFAR). The CFAR detector is often used to detect bright targets in radar imagery, where the background clutter can vary considerably from scene to scene and with angle to the scene. Similarly, the CFAR detector addresses the challenge of detecting exudate lesions in RGB and multispectral fundus imagery, where the background clutter often exhibits variations in brightness and texture. These variations present a challenge to common global-thresholding detection algorithms and other methods. Performance of the CFAR algorithm is tested against a publicly available, annotated diabetic retinopathy database, and preliminary testing suggests that the CFAR detector is superior to techniques such as Otsu thresholding.
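For readers unfamiliar with the CFAR mechanism described above, here is a minimal 2-D cell-averaging CFAR sketch in Python. It is not the authors' detector: it assumes exponentially distributed (square-law) clutter for the scale factor, and the window, guard and false-alarm parameters are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_2d(img, win=15, guard=5, pfa=1e-4):
    """Minimal 2-D cell-averaging CFAR sketch.

    The background level around each pixel is estimated from a (win x win)
    window with a central (guard x guard) region excluded, and the pixel is
    declared a detection when it exceeds alpha * background.  The scale
    factor alpha = N * (pfa**(-1/N) - 1) is the classical CA-CFAR value for
    exponentially distributed (square-law) clutter; real fundus or SAR data
    may call for a different clutter model, so treat this as illustrative.
    """
    img = img.astype(float)
    big_sum = uniform_filter(img, win) * win * win        # window sums via mean filters
    guard_sum = uniform_filter(img, guard) * guard * guard
    n_ref = win * win - guard * guard                     # number of reference cells
    background = (big_sum - guard_sum) / n_ref
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)
    return img > alpha * background

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clutter = rng.exponential(1.0, size=(200, 200))
    clutter[100, 100] += 40.0                             # bright point target
    hits = ca_cfar_2d(clutter)
    print("detections:", np.argwhere(hits))
```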
Intelligent Use of CFAR Algorithms
1993-05-01
... the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that ... (Kaman Sciences Corporation, P. Antonik et al.; interim report RL-TR-93-75, May 1993, contract F30602-91-C-0017.)
Ku-band radar threshold analysis
NASA Technical Reports Server (NTRS)
Weber, C. L.; Polydoros, A.
1979-01-01
The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and the standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter that deserves additional attention.
Nonhomogeneity Detection in CFAR Reference Windows Using the Mean-to-Mean Ratio Test
2012-01-01
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; and then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target blocks adaptively and efficiently, where parallel detection is available, and statistical parameters of G0 distribution fitting local sea clutter well can be quickly estimated based on an integral image operator. Experimental results of TerraSAR-X images demonstrate the effectiveness of the proposed technique.
Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety
Jiang, Wen; Huang, Yulin; Yang, Jianyu
2016-01-01
Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense the surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as a multiple target situation and a clutter edge environment, can dramatically affect the radar signal detection performance. In order to improve the radar signal detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment and uses the hypothesis test of the first-order difference (FOD) result of ordered data to reject the unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thus getting better radar signal detection performance. The simulation results show that the FOD-CFAR detector provides low loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, the measured results of a low-flying helicopter validate the basic performance of the proposed method. PMID:27399714
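A simplified sketch of the censoring idea (ordered reference cells, a jump test on the first-order differences, then cell averaging over the surviving cells) is given below. The jump test here is a crude stand-in for the paper's FOD hypothesis test, and all parameter values are illustrative.

```python
import numpy as np

def fod_censoring_cfar(cut, reference, pfa=1e-3, jump_factor=5.0):
    """Simplified automatic-censoring CFAR in the spirit of the FOD idea.

    The reference cells are sorted and the first-order differences of the
    ordered data are scanned from the bottom up; once a difference greatly
    exceeds the running average of the differences seen so far (factor
    'jump_factor', a stand-in for the paper's statistical test), the
    remaining (larger) cells are censored as interferers.  The uncensored
    cells form the cell-averaging noise estimate.  Illustrative only.
    """
    x = np.sort(np.asarray(reference, dtype=float))
    d = np.diff(x)
    keep = len(x)
    for i in range(1, len(d)):
        mean_so_far = np.mean(d[:i])
        if mean_so_far > 0 and d[i] > jump_factor * mean_so_far:
            keep = i + 1                        # censor cells above the detected jump
            break
    clean = x[:keep]
    n = clean.size
    alpha = n * (pfa ** (-1.0 / n) - 1.0)       # CA-CFAR scale for exponential clutter
    threshold = alpha * clean.mean()
    return cut > threshold, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.exponential(1.0, size=32)
    ref[:3] += 30.0                             # three interfering targets in the window
    print(fod_censoring_cfar(cut=25.0, reference=ref))
```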
Automatic Threshold Detector Techniques
1976-07-15
Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hachiya, Hiroyuki
1998-05-01
In the B-mode image of the liver obtained by an ultrasonic imaging system, the speckle pattern changes with the progression of diseases such as liver cirrhosis. In this paper we present the statistical characteristics of the echo envelope of the liver, and a technique to extract information about the scatterer distribution from normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processed signal exceeds the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
On Adaptive Cell-Averaging CFAR (Constant False-Alarm Rate) Radar Signal Detection
1987-10-01
(Syracuse University, final technical report, October 1987.) One approach to adaptive detection in a nonstationary noise and clutter background is to compare the processed target signal to an adaptive ...
Performance of Distributed CFAR Processors in Pearson Distributed Clutter
NASA Astrophysics Data System (ADS)
Messali, Zoubeida; Soltani, Faouzi
2006-12-01
This paper deals with distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean-level detector (CMLD) CFAR processors operating on positive alpha-stable (P&S) random variables to more general situations, specifically to the presence of interfering targets and to distributed CFAR detectors. The receiver operating characteristics of the greatest-of (GO) and smallest-of (SO) CFAR processors are also determined. The performance characteristics of the distributed systems are presented and compared both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that when the clutter is modelled as a positive alpha-stable distribution, the distributed systems offer robustness against multiple-target situations, especially when using the "OR" fusion rule.
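The "OR" fusion rule mentioned above is easy to state in code: each local CFAR detector makes its own decision and the fusion centre declares a target if any of them does. The sketch below uses a generic OS-CFAR at each sensor with an arbitrary scale factor; it does not reproduce the Pearson/alpha-stable analysis of the paper.

```python
import numpy as np

def local_os_cfar(cut, reference, k=None, scale=6.0):
    """Single-sensor OS-CFAR sketch: threshold = scale * (k-th ordered reference cell)."""
    x = np.sort(np.asarray(reference, dtype=float))
    if k is None:
        k = int(0.75 * len(x))                  # typical OS rank around 3N/4
    return cut > scale * x[k]

def distributed_or_fusion(cuts, references, **kw):
    """'OR' fusion of local CFAR decisions.

    Each sensor makes its own CFAR decision from its own reference window;
    the fusion centre declares a target if any sensor does.  The scale
    factor is an arbitrary placeholder -- in the paper it would be set per
    sensor to hit a design false-alarm rate under the assumed clutter model.
    """
    return any(local_os_cfar(c, r, **kw) for c, r in zip(cuts, references))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    refs = [rng.exponential(1.0, 24) for _ in range(3)]   # three sensors
    cuts = [1.2, 0.8, 20.0]                               # only sensor 3 sees the target
    print("global decision:", distributed_or_fusion(cuts, refs))
```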
Combatting Inherent Vulnerabilities of CFAR Algorithms and a New Robust CFAR Design
1993-09-01
... elements of any automatic radar system. Unfortunately, CFAR systems are inherently vulnerable to degradation caused by large clutter edges, multiple targets, and electronic countermeasures (ECM) environments. This thesis presents eight popular and studied ...
Knowledge-based tracking algorithm
NASA Astrophysics Data System (ADS)
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.
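The final M-out-of-N (M-association out of N-scan) declaration step can be illustrated with a small binary-integration routine; this is the generic rule only, not RADC's association logic.

```python
import numpy as np

def m_of_n_track_confirm(scan_hits, m=3, n=5):
    """Scan-to-scan integration with an M-out-of-N rule.

    'scan_hits' is a boolean sequence of per-scan threshold crossings
    associated with one tentative track; a detection is declared if at
    least m of any n consecutive scans contain a hit.
    """
    hits = np.asarray(scan_hits, dtype=int)
    for start in range(0, max(1, len(hits) - n + 1)):
        if hits[start:start + n].sum() >= m:
            return True
    return False

if __name__ == "__main__":
    print(m_of_n_track_confirm([1, 0, 1, 1, 0, 0], m=3, n=5))   # True
    print(m_of_n_track_confirm([1, 0, 0, 0, 1, 0], m=3, n=5))   # False
```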
The Censored Mean-Level Detector for Multiple Target Environments.
1984-03-01
... constant false-alarm rate (CFAR) detectors known as censored mean-level detectors (CMLD). The CMLD, a special case of which is the mean-level detector (or cell-averaged CFAR detector), is a generalization of the traditional mean-level detector (MLD) or cell-averaged CFAR ...
Calculation of Cumulative Distributions and Detection Probabilities in Communications and Optics.
1984-10-01
... the CMLD. As an example of a particular result, Figure 8.1 shows the additional SNR required (often called the CFAR loss) for the MLD, CMLD, and OSD in ... the background noise level is known. Notice that although the CFAR loss increases with INR for the MLD, the CMLD and OSD have a bounded loss as the INR increases ... Radar Detectors (J. A. Ritcey): Mean-level detectors (MLD) are commonly used in radar to maintain a constant false-alarm rate (CFAR) when the ...
Calculation of Cumulative Distributions and Detection Probabilities in Communications and Optics.
1986-03-31
... result, Figure 3.1 shows the additional SNR required (often called the CFAR loss) for the MLD, CMLD, and OSD in a multiple target environment to ... Notice that although the CFAR loss increases with INR for the MLD, the CMLD and OSD have a bounded loss as the INR → ∞. These results have been more ... false-alarm rate (CFAR) when the background noise level is unknown. In Section 2 we described the application of saddlepoint integration techniques to ...
Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming
NASA Astrophysics Data System (ADS)
Pawlowski, P. R.; Polydoros, A.
A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
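The "typical two-parameter CFAR" local test referred to above thresholds a pixel on its deviation from the local background mean in units of the local standard deviation. A minimal per-pixel sketch follows; it uses a Gaussian-style multiplier k and therefore does not include the paper's alpha-stable clutter model.

```python
import numpy as np

def two_parameter_cfar_pixel(img, row, col, win=21, guard=9, k=5.0):
    """Two-parameter CFAR test at a single pixel.

    The local background mean and standard deviation are estimated from the
    annulus between a (guard x guard) region and a (win x win) window, and
    the pixel is declared a ship candidate when (pixel - mean) / std > k.
    Under a Gaussian clutter assumption k is a normal quantile; the paper
    replaces that model with an alpha-stable one, which this sketch does not.
    """
    half_w, half_g = win // 2, guard // 2
    window = img[row - half_w: row + half_w + 1, col - half_w: col + half_w + 1].astype(float)
    mask = np.ones_like(window, dtype=bool)
    centre = half_w
    mask[centre - half_g: centre + half_g + 1, centre - half_g: centre + half_g + 1] = False
    ring = window[mask]
    return (img[row, col] - ring.mean()) / ring.std() > k

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    sea = rng.gamma(2.0, 1.0, size=(100, 100))
    sea[50, 50] = 60.0                          # bright ship-like pixel
    print(two_parameter_cfar_pixel(sea, 50, 50))   # likely True
    print(two_parameter_cfar_pixel(sea, 30, 30))   # likely False
```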
Sequential CFAR detectors using a dead-zone limiter
NASA Astrophysics Data System (ADS)
Tantaratana, Sawasd
1990-09-01
The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
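A minimal sketch of the dead-zone-limiter test with constant boundaries is shown below. The drift and boundary values are arbitrary placeholders; in the paper they are chosen to meet specified false-alarm and detection probabilities.

```python
import numpy as np

def dead_zone_limiter(x, c):
    """Quantise an observation to -1, 0 or +1 depending on where it falls relative to +/-c."""
    return np.where(x > c, 1, np.where(x < -c, -1, 0))

def sequential_dead_zone_test(samples, c=1.0, drift=0.3, upper=8.0, lower=-8.0):
    """Sequential test on dead-zone-limited data with constant boundaries.

    Each observation is quantised to {-1, 0, +1}; a drift term (chosen between
    the expected limiter output under noise, 0, and under signal) is subtracted
    so the running sum tends down under noise and up under signal.  Crossing
    the upper boundary declares 'signal', the lower boundary 'noise'.
    """
    s, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        s += float(dead_zone_limiter(np.asarray(x), c)) - drift
        if s >= upper:
            return "H1 (signal present)", n
        if s <= lower:
            return "H0 (noise only)", n
    return "undecided", n

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    print(sequential_dead_zone_test(rng.normal(0.0, 1.0, size=500)))   # noise only
    print(sequential_dead_zone_test(rng.normal(1.5, 1.0, size=500)))   # signal + noise
```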
Adaptive Digital Signature Design and Short-Data-Record Adaptive Filtering
2008-04-01
NASA Astrophysics Data System (ADS)
Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun
2018-03-01
Synthetic aperture radar (SAR) is an indispensable and useful tool for marine monitoring. With the increase in SAR sensors, high-resolution images can be acquired that contain more target structure information, such as finer spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method is based on the APT domain value. Firstly, the image is mapped into the new transform domain by the algorithm. Secondly, the false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels. The enhanced image is then processed with the Niblack algorithm to obtain the wake binary image. Finally, a normalized Hough transform (NHT) is used to detect wakes in the binary image, as verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm performs well in ship and ship-wake detection.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are formed from the first and second halves of the synthetic aperture. After two-look cancellation, the moving targets are preserved and the stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift caused by slant-range motion: the cross-range shift is estimated from the Doppler frequency center (DFC), and the Wigner-Ville Distribution (WVD) is used to estimate the DFC. Because the range position and cross-range position before correction are known, estimation of the DFC is easier and more efficient. Finally, experimental results show that the algorithms perform well; with them, the moving target parameters can be estimated accurately.
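The processing chain (two-look formation, cancellation, then CFAR on the cancelled image) can be illustrated on synthetic 1-D data as follows; the scene model, noise levels and CFAR parameters are invented for the example, and the CA-CFAR scale factor assumes exponentially distributed samples.

```python
import numpy as np

def two_look_cancellation(look1, look2):
    """Subtract two co-registered complex looks; stationary scatterers cancel, movers remain."""
    return np.abs(look1 - look2) ** 2            # power of the cancelled image

def ca_cfar_1d(x, win=16, guard=4, pfa=1e-3):
    """1-D cell-averaging CFAR (scale factor assumes exponentially distributed samples)."""
    x = np.asarray(x, dtype=float)
    hits = np.zeros(x.size, dtype=bool)
    for i in range(win, x.size - win):
        ref = np.r_[x[i - win:i - guard], x[i + guard + 1:i + win + 1]]
        alpha = ref.size * (pfa ** (-1.0 / ref.size) - 1.0)
        hits[i] = x[i] > alpha * ref.mean()
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 512
    scene = rng.normal(size=n) + 1j * rng.normal(size=n)      # stationary clutter (common to both looks)
    look1 = scene + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    look2 = scene + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    look1[200] += 8.0                                         # mover seen at cell 200 in look 1 ...
    look2[205] += 8.0                                         # ... and at cell 205 in look 2 (it moved)
    cancelled = two_look_cancellation(look1, look2)
    print("detections at cells:", np.flatnonzero(ca_cfar_1d(cancelled)))
```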
A comparative study on methods of improving SCR for ship detection in SAR image
NASA Astrophysics Data System (ADS)
Lang, Haitao; Shi, Hongji; Tao, Yunhong; Ma, Li
2017-10-01
Knowledge about ship positions plays a critical role in a wide range of maritime applications. To improve the performance of ship detectors in SAR images, an effective strategy is to improve the signal-to-clutter ratio (SCR) before conducting detection. In this paper, we present a comparative study of SCR-improvement methods, including power-law scaling (PLS), the max-mean and max-median filters (MMF1 and MMF2), a wavelet-transform method (TWT), the traditional SPAN detector, the reflection symmetric metric (RSM), and the scattering mechanism metric (SMM). The SCR improvement delivered by each method and the associated ship detection performance with a cell-averaging CFAR (CA-CFAR) detector are evaluated on two real SAR data sets.
NASA Astrophysics Data System (ADS)
Aiello, Martina; Gianinetto, Marco
2017-10-01
Marine routes represent a huge portion of commercial and human trades; therefore, surveillance, security and environmental protection themes are gaining increasing importance. Being able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted a renewed interest for continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions and complementarity in terms of geographic coverage and geometric detail. The method developed adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. Then, a spatial analysis retrieves the vessels' position, length and heading parameters and a speed range is associated. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurement when AIS data are not available. The estimation of length shows R2=0.85 and the estimation of heading R2=0.92, computed as the average of the R2 values obtained for both optical and radar images.
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions
Huang, Shiqi; Huang, Wenzhun; Zhang, Ting
2016-01-01
The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935
Constant False Alarm Rate (CFAR) Autotrend Evaluation Report
2011-12-01
... represent a level of uncertainty in the performance analysis. The performance analysis produced the following Key Performance Indicators (KPIs) ...
The Tale of Three Campuses: A Case Study in Outdoor Campus Assessment
ERIC Educational Resources Information Center
Eckert, Erica L.
2013-01-01
In a study for APPA's Center for Facilities Research (CFaR), Cain and Reynolds (2006a; 2006b) linked the quality of campus facilities and the attractiveness of campus to college choice among their study's participants but also noted that facilities may not always be the primary motivation. Further, the physical campus environment can impact…
An, Quanzhi; Pan, Zongxu; You, Hongjian
2018-01-24
Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land masking strategy, the appropriate model for sea clutter and a neural network as the discrimination scheme. Firstly, the fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of Constant False Alarm Rate (CFAR) detector from K-distribution, Gamma distribution and Rayleigh distribution based on a tradeoff between the sea clutter modeling accuracy and the computational complexity. Furthermore, in order to better implement CFAR detection, we also use truncated statistic (TS) as a preprocessing scheme and iterative censoring scheme (ICS) for boosting the performance of detector. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experiment results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.
NASA Astrophysics Data System (ADS)
He, G.; Xia, Z.; Chen, H.; Li, K.; Zhao, Z.; Guo, Y.; Feng, P.
2018-04-01
Real-time ship detection using synthetic aperture radar (SAR) plays a vital role in disaster emergency response and maritime security. In particular, high-resolution wide-swath (HRWS) SAR images provide high resolution and wide swath simultaneously, which significantly improves wide-area ocean surveillance performance. In this study, a novel method is developed for ship target detection using HRWS SAR images. Firstly, an adaptive sliding window is developed to propose suspected ship target areas, based upon analysis of the SAR backscattering intensity image. Then, backscattering intensity and texture features extracted from training samples of manually selected ship and non-ship slice images are used to train a support vector machine (SVM) to classify the proposed ship slice images. The approach is verified using Sentinel-1A data acquired in interferometric wide swath mode. The results demonstrate the improved performance of the proposed method over a constant false alarm rate (CFAR) method: the classification accuracy improved from 88.5 % to 96.4 % and the false alarm rate fell from 11.5 % to 3.6 %.
Introduction to Radar Signal and Data Processing: The Opportunity
2006-09-01
Keywords: radar signal processing, data processing, adaptivity, space-time adaptive processing, knowledge-based systems, CFAR. This paper introduces the lecture series dedicated to knowledge-based radar signal and data processing. Knowledge-based expert systems (KBS) are in the realm of ...
ILIR : SSC San Diego In-House Laboratory Independent Research 2001 Annual Report
2002-05-01
Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area
NASA Astrophysics Data System (ADS)
Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.
2018-04-01
Synthetic Aperture Radar (SAR) can provide all-day and all-night observation of the earth in all weather conditions with high resolution, and it is widely used in polar research on sea ice, ice shelves, and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it underpins estimates of the calving rate and flux of the glacier. In this abstract, an automatic algorithm for glacier frontal-line extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the amplitude imagery of Sentinel-1 SAR into a binary map using the SO-CFAR method, and then frontal points are extracted using a profile method that reduces the 2D binary map to 1D binary profiles; the final frontal position of a calving glacier is the optimal profile selected from the different averaged segmented profiles. The experiment shows that the detection algorithm can automatically extract the frontal position of the glacier from SAR data with high efficiency.
Discriminating Sea Spikes in Incoherent Radar Measurements of Sea Clutter
2008-03-01
... described by the composite surface theory (CST). This theory describes the sea surface as small Bragg-resonant capillary waves riding on top of ... (TNO report TNO-DV 2008 A067.)
High Grazing Angle and High Resolution Sea Clutter: Correlation and Polarisation Analyses
2007-03-01
... the azimuthal correlation. The correlation between the HH and VV sea clutter data is low. A CA-CFAR (cell-averaging constant false-alarm rate) ... to calculate the power spectra of correlation profiles. The frequency interval of the traditional Discrete Fourier Transform is 1/(NT) Hz, where N and T ... sea spikes; the Entropy-Alpha decomposition of sea spikes is shown in Figure 30. The process first locates spikes using a cell-averaging constant false-alarm rate ...
Deep belief networks for false alarm rejection in forward-looking ground-penetrating radar
NASA Astrophysics Data System (ADS)
Becker, John; Havens, Timothy C.; Pinar, Anthony; Schulz, Timothy J.
2015-05-01
Explosive hazards are one of the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is using a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have also been shown to perform exceptionally well at generalized anomaly detection. They combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We seek to take advantage of these two properties by training a DBN on the features of the CFAR prescreener's false alarms (FAs) and then use that DBN to separate FAs from true positives. Our analysis shows that this method improves the detection statistics significantly. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
Howe, Chanelle J; Dulin-Keita, Akilah; Cole, Stephen R; Hogan, Joseph W; Lau, Bryan; Moore, Richard D; Mathews, W Christopher; Crane, Heidi M; Drozd, Daniel R; Geng, Elvin; Boswell, Stephen L; Napravnik, Sonia; Eron, Joseph J; Mugavero, Michael J
2018-02-01
Reducing racial/ethnic disparities in human immunodeficiency virus (HIV) disease is a high priority. Reductions in HIV racial/ethnic disparities can potentially be achieved by intervening on important intermediate factors. The potential population impact of intervening on intermediates can be evaluated using observational data when certain conditions are met. However, using standard stratification-based approaches commonly employed in the observational HIV literature to estimate the potential population impact in this setting may yield results that do not accurately estimate quantities of interest. Here we describe a useful conceptual and methodological framework for using observational data to appropriately evaluate the impact on HIV racial/ethnic disparities of interventions. This framework reframes relevant scientific questions in terms of a controlled direct effect and estimates a corresponding proportion eliminated. We review methods and conditions sufficient for accurate estimation within the proposed framework. We use the framework to analyze data on 2,329 participants in the CFAR [Centers for AIDS Research] Network of Integrated Clinical Systems (2008-2014) to evaluate the potential impact of universal prescription of and ≥95% adherence to antiretroviral therapy on racial disparities in HIV virological suppression. We encourage the use of the described framework to appropriately evaluate the potential impact of targeted interventions in addressing HIV racial/ethnic disparities using observational data.
Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.
Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth
2018-03-26
When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
Is it valid to calculate the 3-kilohertz threshold by averaging 2 and 4 kilohertz?
Gurgel, Richard K; Popelka, Gerald R; Oghalai, John S; Blevins, Nikolas H; Chang, Kay W; Jackler, Robert K
2012-07-01
Many guidelines for reporting hearing results use the threshold at 3 kilohertz (kHz), a frequency not measured routinely. This study assessed the validity of estimating the missing 3-kHz threshold by averaging the measured thresholds at 2 and 4 kHz. The estimated threshold was compared to the measured threshold at 3 kHz individually and when used in the pure-tone average (PTA) of 0.5, 1, 2, and 3 kHz in audiometric data from 2170 patients. The difference between the estimated and measured thresholds for 3 kHz was within ± 5 dB in 72% of audiograms, ± 10 dB in 91%, and within ± 20 dB in 99% (correlation coefficient r = 0.965). The difference between the PTA threshold using the estimated threshold compared with using the measured threshold at 3 kHz was within ± 5 dB in 99% of audiograms (r = 0.997). The estimated threshold accurately approximates the measured threshold at 3 kHz, especially when incorporated into the PTA.
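The estimation rule itself is trivial arithmetic; the sketch below shows it together with the four-frequency PTA, using a hypothetical audiogram.

```python
def estimate_3khz(t2k, t4k):
    """Estimate the missing 3-kHz threshold as the mean of the 2- and 4-kHz thresholds (dB HL)."""
    return (t2k + t4k) / 2.0

def pure_tone_average(t500, t1k, t2k, t3k):
    """Four-frequency PTA of 0.5, 1, 2 and 3 kHz, as used in the reporting guidelines."""
    return (t500 + t1k + t2k + t3k) / 4.0

if __name__ == "__main__":
    # Hypothetical audiogram (dB HL): 0.5, 1, 2 and 4 kHz measured; 3 kHz not tested.
    t500, t1k, t2k, t4k = 20, 25, 35, 55
    t3k = estimate_3khz(t2k, t4k)              # 45 dB HL
    print("estimated 3 kHz threshold:", t3k)
    print("PTA(0.5, 1, 2, 3 kHz):", pure_tone_average(t500, t1k, t2k, t3k))
```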
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
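A much-simplified stand-in for the proposed procedure is sketched below: candidate thresholds are empirical quantiles, the excesses over each candidate are fitted with a Generalised Pareto distribution (scipy.stats.genpareto), and the candidate minimising the Anderson-Darling statistic is kept. The full method additionally uses the A-D goodness-of-fit test proper and bootstrapping for uncertainty, which are omitted here; the data series is synthetic.

```python
import numpy as np
from scipy.stats import genpareto

def anderson_darling(z, dist):
    """Anderson-Darling statistic of sample z against a fitted distribution."""
    u = np.clip(dist.cdf(np.sort(z)), 1e-12, 1 - 1e-12)
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def automatic_pot_threshold(x, candidate_quantiles=np.arange(0.70, 0.99, 0.01)):
    """Pick a POT threshold by minimising the A-D statistic of the GPD fit to the excesses.

    Each candidate threshold is an empirical quantile of the series; the
    excesses are fitted with a Generalised Pareto distribution and the
    candidate with the smallest A-D statistic is returned.  Illustrative
    simplification of the paper's A-D-test-based procedure.
    """
    x = np.asarray(x, dtype=float)
    best = None
    for q in candidate_quantiles:
        u = np.quantile(x, q)
        excess = x[x > u] - u
        if excess.size < 30:                       # need enough exceedances for a stable fit
            continue
        c, loc, scale = genpareto.fit(excess, floc=0.0)
        a2 = anderson_darling(excess, genpareto(c, loc=0.0, scale=scale))
        if best is None or a2 < best[0]:
            best = (a2, u)
    return best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    daily = rng.gamma(2.0, 10.0, size=5000)        # synthetic 'precipitation-like' series
    print("selected threshold:", automatic_pot_threshold(daily))
```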
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unresolved the reliability of those methods in setting TMS doses. The present work aims at filling this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures.
Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C
2002-08-01
To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest ( approximately 3 dB) for midrange sensitivities ( approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly
2016-12-01
To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation of HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
Burr, Tom; Hamada, Michael S.; Howell, John; ...
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
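Two of the simplest threshold estimators this discussion starts from (an empirical quantile and a fitted-normal quantile of historical residuals) are sketched below; the paper's point is precisely that the choice among such data-generating models matters at small false alarm rates, so these are illustrative baselines only.

```python
import numpy as np
from scipy import stats

def alarm_threshold(residuals, far=1e-3, model="empirical"):
    """Estimate an alarm threshold for PM residuals at a small false-alarm rate.

    'empirical' uses the (1 - far) quantile of historical residuals;
    'normal' uses the same quantile under a fitted normal model.  More
    realistic residual models (mixtures, autocorrelation, ...) would give
    different answers, which is the model-selection issue the paper raises.
    """
    r = np.asarray(residuals, dtype=float)
    if model == "empirical":
        return np.quantile(r, 1.0 - far)
    if model == "normal":
        mu, sigma = r.mean(), r.std(ddof=1)
        return stats.norm.ppf(1.0 - far, loc=mu, scale=sigma)
    raise ValueError(model)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    hist = rng.normal(0.0, 1.0, size=100_000)          # historical in-control residuals
    for m in ("empirical", "normal"):
        print(m, "threshold at FAR=1e-3:", round(alarm_threshold(hist, 1e-3, m), 3))
```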
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation
Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin
2012-01-01
Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721
Marine Targets Classification in PolInSAR Data
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Jingsong; Ren, Lin
2014-11-01
In this paper, marine stationary targets and moving targets are studied using Pol-InSAR data from Radarsat-2. A new method for stationary target detection is proposed. The method computes the correlation coefficient image of the InSAR data and uses its histogram. Then, a Constant False Alarm Rate (CFAR) algorithm and a Probabilistic Neural Network model are applied to detect stationary targets. To find the moving targets, azimuth ambiguity is shown to be an important feature; the length of the azimuth ambiguity is used to obtain the target's moving direction and speed. Going further, target classification is studied by reconstructing the surface elevation of the marine targets.
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminary estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, has the threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
Electroconvulsive therapy stimulus titration: Not all it seems.
Rosenman, Stephen J
2018-05-01
To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.
Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Pang; Yu, Yue
2017-05-01
This paper presents a position estimation model for a switched reluctance motor based on a single threshold angle. In view of the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (7.5° threshold angle, 12/8 SRM). The sensorless model is built in Matlab/Simulink, simulations are carried out under different steady-state and transient conditions, and the validity and feasibility of the method are verified.
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
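As a rough illustration of the a:b model described above (a constant infection probability a below the threshold and a different constant probability b at or above it), the sketch below performs a profile-likelihood fit over candidate thresholds. The data layout, variable names and simulated titres are assumptions for the example, not the authors' code or data.

```python
# Illustrative profile-likelihood fit of an a:b threshold model (not the authors' code).
import numpy as np

def fit_ab_model(titer, infected):
    """titer: assay values; infected: 0/1 outcomes. Returns (threshold, a, b, loglik)."""
    titer = np.asarray(titer, float)
    infected = np.asarray(infected, int)
    best = None
    for tau in np.unique(titer)[1:]:          # candidate thresholds between observed values
        below = titer < tau
        a = infected[below].mean()            # MLE of infection probability below tau
        b = infected[~below].mean()           # MLE of infection probability at/above tau
        p = np.where(below, a, b).clip(1e-9, 1 - 1e-9)
        ll = np.sum(infected * np.log(p) + (1 - infected) * np.log(1 - p))
        if best is None or ll > best[3]:
            best = (tau, a, b, ll)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    titer = rng.lognormal(0.0, 1.0, 300)                          # invented assay values
    infected = rng.binomial(1, np.where(titer < 1.5, 0.4, 0.05))  # protection above ~1.5
    tau, a, b, ll = fit_ab_model(titer, infected)
    print(f"threshold={tau:.2f}, a={a:.2f}, b={b:.2f}, relative risk={b / a:.2f}")
```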
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
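The abstract does not give the functional form of the improved threshold function, so the sketch below substitutes one common compromise between hard and soft thresholding (soft-like near the threshold, approaching hard thresholding for large coefficients) and the standard median-absolute-deviation noise estimate in place of the paper's Gaussian-mixture step. It assumes the PyWavelets package and is purely illustrative.

```python
# Illustrative compromise threshold function (not the paper's exact function).
import numpy as np
import pywt

def compromise_threshold(coeffs, lam, k=2.0):
    """Zero coefficients below lam; smoothly approach the identity above it."""
    c = np.asarray(coeffs, float)
    mag = np.abs(c)
    shrunk = np.sign(c) * (mag - lam * np.exp(-k * (mag - lam)))
    return np.where(mag >= lam, shrunk, 0.0)

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise standard deviation estimated from the finest-scale detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    new_coeffs = [coeffs[0]] + [compromise_threshold(d, lam) for d in coeffs[1:]]
    return pywt.waverec(new_coeffs, wavelet)[: len(signal)]

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(np.mean((denoise(noisy) - clean) ** 2))
```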
Pfiffner, Flurin; Kompis, Martin; Stieger, Christof
2009-10-01
To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review. Tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds as well as speech understanding using German 2-digit numbers and monosyllabic words were measured and analyzed. Correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with BC threshold of the better ear (correlation coefficients, r² = 0.237 to 0.419, p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
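A hedged sketch of the threshold-selection idea referred to above: once an EM step has assigned each candidate pair an estimated probability of being a true match, an expected F-measure can be computed at every candidate score cut-off and the cut-off with the highest expected F-measure chosen. This is not the authors' implementation; the scores, probabilities and names are illustrative.

```python
# Illustrative expected-F-measure threshold selection (not the authors' extension to EM).
import numpy as np

def select_threshold(scores, match_prob):
    """scores: pairwise comparison weights; match_prob: estimated P(true match) per pair."""
    scores = np.asarray(scores, float)
    match_prob = np.asarray(match_prob, float)
    order = np.argsort(-scores)
    scores, match_prob = scores[order], match_prob[order]
    total_matches = match_prob.sum()
    best_t, best_f = None, -1.0
    for i, t in enumerate(scores):
        accepted = match_prob[: i + 1]
        precision = accepted.sum() / (i + 1)
        recall = accepted.sum() / total_matches
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scores = np.concatenate([rng.normal(8, 2, 200), rng.normal(0, 2, 2000)])  # invented weights
    probs = 1 / (1 + np.exp(-(scores - 4)))                                   # invented EM output
    print(select_threshold(scores, probs))
```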
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than the likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method on a Dutch growth data and a baseball pitcher salary data reveals interesting insights. The proposed method is implemented in the R package cthreshER .
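The following is a minimal grid-search sketch of a continuous (bent-line) threshold expectile regression of the kind described above, assuming the model y = b0 + b1·x + b2·(x − t)+. It illustrates the estimation idea only and is not the cthreshER package; the asymmetric least-squares fit is a simple iteratively reweighted scheme.

```python
# Illustrative bent-line threshold expectile regression via grid search (not cthreshER).
import numpy as np

def expectile_fit(X, y, tau=0.5, n_iter=50):
    """Asymmetric least-squares (expectile) fit by iteratively reweighted least squares."""
    w = np.full(len(y), 0.5)
    for _ in range(n_iter):
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1 - tau)
    return beta, np.sum(w * resid ** 2)

def threshold_expectile(x, y, tau=0.5, grid=None):
    """Grid search over the threshold t in y = b0 + b1*x + b2*(x - t)_+ ."""
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 41))
    best = None
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - t, 0.0, None)])
        beta, loss = expectile_fit(X, y, tau)
        if best is None or loss < best[2]:
            best = (t, beta, loss)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, 500)
    y = 1 + 0.5 * x + 2.0 * np.clip(x - 6, 0, None) + rng.normal(0, 1, 500)  # true threshold at 6
    t_hat, beta_hat, _ = threshold_expectile(x, y)
    print(t_hat, beta_hat)
```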
Estimating the epidemic threshold on networks by deterministic connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
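As a generic illustration of the spectral idea mentioned above (not the paper's specific bounds), the epidemic threshold of an SIS-type process on a network is commonly approximated by the inverse of the largest eigenvalue of the adjacency matrix, which can be computed from the deterministic connections alone; the small example network below is invented.

```python
# Illustrative spectral estimate of an epidemic threshold (generic SIS approximation).
import numpy as np

def epidemic_threshold_estimate(adjacency):
    """Return 1 / lambda_max for a symmetric 0-1 adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adjacency, float))
    return 1.0 / eigenvalues[-1]

if __name__ == "__main__":
    # Small deterministic backbone: a ring of 6 nodes plus one shortcut.
    a = np.zeros((6, 6))
    for i in range(6):
        a[i, (i + 1) % 6] = a[(i + 1) % 6, i] = 1
    a[0, 3] = a[3, 0] = 1
    print(epidemic_threshold_estimate(a))
```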
Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J
2018-03-28
To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data, with ST estimated using the waist-mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to ActiGraph 100 count·min⁻¹. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min⁻¹. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominately used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol-American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
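A hedged sketch of the fixed-slope approach described above: a group psychometric function is fitted first, then each judge's function is refitted with the slope held at the group value and only the intercept free. The logistic form, the 1/3 guessing rate (as in a 3-AFC task) and all variable names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative fixed-slope psychometric-function fit (assumed logistic, 3-AFC guess rate).
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def p_correct(logc, intercept, slope, guess=1 / 3):
    core = 1.0 / (1.0 + np.exp(-(slope * logc + intercept)))
    return guess + (1 - guess) * core

def nll(params, logc, correct, slope=None):
    if slope is None:
        intercept, slope = params
    else:
        intercept = params
    p = np.clip(p_correct(logc, intercept, slope), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

def fit_group_then_judges(logc, correct, judge_ids):
    group = minimize(nll, x0=[0.0, 1.0], args=(logc, correct)).x
    slope = group[1]
    thresholds = {}
    for j in np.unique(judge_ids):
        m = judge_ids == j
        res = minimize_scalar(lambda b: nll(b, logc[m], correct[m], slope=slope))
        thresholds[j] = -res.x / slope      # log concentration at the logistic midpoint
    return slope, thresholds

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    judges = np.repeat(np.arange(10), 6)                 # 10 judges, 6 concentrations each
    logc = np.tile(np.linspace(-2, 3, 6), 10)
    correct = rng.binomial(1, p_correct(logc, intercept=-judges * 0.1, slope=1.5))
    print(fit_group_then_judges(logc, correct, judges))
```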
Qu, Cheng; Wang, Lin-Yan; Jin, Wen-Tao; Tang, Yu-Ping; Jin, Yi; Shi, Xu-Qin; Shang, Li-Li; Shang, Er-Xin; Duan, Jin-Ao
2016-11-06
The flower of Carthamus tinctorius L. (Carthami Flos, safflower), important in traditional Chinese medicine (TCM), is known for treating blood stasis, coronary heart disease, hypertension, and cerebrovascular disease in clinical and experimental studies. It is widely accepted that hydroxysafflor yellow A (HSYA) and anhydrosafflor yellow B (ASYB) are the major bioactive components of many formulae comprised of safflower. In this study, selective knock-out of target components such as HSYA and ASYB by using preparative high performance liquid chromatography (prep-HPLC) followed by antiplatelet and anticoagulation activities evaluation was used to investigate the roles of bioactive ingredients in safflower series of herb pairs. The results showed that both HSYA and ASYB not only played a direct role in activating blood circulation, but also indirectly made a contribution to the total bioactivity of safflower series of herb pairs. The degree of contribution of HSYA in the safflower and its series herb pairs was as follows: Carthami Flos-Ginseng Radix et Rhizoma Rubra (CF-GR) > Carthami Flos-Sappan Lignum (CF-SL) > Carthami Flos-Angelicae Sinensis Radix (CF-AS) > Carthami Flos-Astragali Radix (CF-AR) > Carthami Flos-Angelicae Sinensis Radix (CF-AS) > Carthami Flos-Glycyrrhizae Radix et Rhizoma (CF-GL) > Carthami Flos-Salviae Miltiorrhizae Radix et Rhizoma (CF-SM) > Carthami Flos (CF), and the contribution degree of ASYB in the safflower and its series herb pairs: CF-GL > CF-PS > CF-AS > CF-SL > CF-SM > CF-AR > CF-GR > CF. So, this study provided a significant and effective approach to elucidate the contribution of different herbal components to the bioactivity of the herb pair, and clarification of the variation of herb-pair compatibilities. In addition, this study provides guidance for investigating the relationship between herbal compounds and the bioactivities of herb pairs. It also provides a scientific basis for reasonable clinical applications and new drug development on the basis of the safflower series of herb pairs.
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Generalised form of a power law threshold function for rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders
2010-05-01
The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1·An^α2·D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hours or n-days antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A7d^(-1)·D^(-0.43) and I = 28534·A150d^(-1)·D^(-0.43) for debris flows having volumes greater than 3000 m³. Thresholds are also derived for debris flows greater than 200 000 m³ and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m³ thresholds and from 6 to 0 false alarms for the 200 000 m³ thresholds.
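A small worked check of the combined threshold model above, using the two fitted functions reported for debris flows larger than 3000 m³: a storm is flagged only when it lies at or above both functions simultaneously. The storm values in the example are invented for illustration.

```python
# Combined rainfall-threshold check; coefficients are the >3000 m3 debris-flow
# values quoted in the abstract, the example storm is invented.
def threshold_intensity(alpha1, antecedent, alpha2, duration, beta):
    """I = alpha1 * An**alpha2 * D**beta, in mm/h."""
    return alpha1 * antecedent ** alpha2 * duration ** beta

def storm_exceeds(intensity, duration, a7d, a150d):
    """A storm exceeds the combined model only if it lies above both functions."""
    t_short = threshold_intensity(2322.0, a7d, -1.0, duration, -0.43)
    t_long = threshold_intensity(28534.0, a150d, -1.0, duration, -0.43)
    return intensity >= t_short and intensity >= t_long

if __name__ == "__main__":
    # Hypothetical storm: 15 mm/h for 12 h, with 120 mm of 7-day and
    # 900 mm of 150-day antecedent precipitation.
    print(storm_exceeds(intensity=15.0, duration=12.0, a7d=120.0, a150d=900.0))
```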
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using the Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the accuracy benefit of the presented methods for the genomic estimated breeding values (GEBVs) of threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h²=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458
Zhou, Ning
2017-03-01
The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.
Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark
2015-02-01
Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability that it is < £30,000 per QALY is 0.97. Additional 'structural' uncertainty suggests, on balance, that the central or best estimate is, if anything, likely to be an overestimate. The health effects of changes in expenditure are greater when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. 
Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. The National Institute for Health Research-Medical Research Council Methodology Research Programme.
The impact of cochlear fine structure on hearing thresholds and DPOAE levels
NASA Astrophysics Data System (ADS)
Lee, Jungmee; Long, Glenis; Talmadge, Carrick L.
2004-05-01
Although otoacoustic emissions (OAE) are used as clinical and research tools, the correlation between OAEs and behavioral estimates of hearing status is not large. In normal-hearing individuals, the level of OAEs can vary by as much as 30 dB when the frequency is changed by less than 5%. These pseudoperiodic variations of OAE level with frequency are known as fine structure. Hearing thresholds measured with high-frequency resolution reveal a similar (up to 15 dB) fine structure. We examine the impact of OAE and threshold fine structures on the prediction of auditory thresholds from OAE levels. Distortion product otoacoustic emissions (DPOAEs) were measured with sweeping primary tones. Psychoacoustic detection thresholds were measured using pure tones, sweep tones, FM tones, and narrow-band noise. Sweep DPOAE and narrow-band threshold measurements provide estimates that are less influenced by cochlear fine structure and should lead to a higher correlation between OAE levels and psychoacoustic thresholds. [Research supported by PSC CUNY, NIDCD, the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education, and the Ministry of Education in Korea.]
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • Estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
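A small worked example of applying the reported model: the estimated 95th percentile APE (in percent) serves as a nodule-specific growth threshold in place of the uniform 25% volume-change cut-off. The coefficients are those given in the abstract; the example nodule values are invented.

```python
# Individualized nodule-growth threshold from the reported regression model;
# SVP/AP values and volumes below are hypothetical.
def individualized_threshold(svp, ap):
    """Estimated 95th percentile absolute percentage error, in percent."""
    return 37.82 * svp + 48.60 * ap - 10.87

def nodule_grew(volume_baseline, volume_followup, svp, ap):
    percent_change = 100.0 * (volume_followup - volume_baseline) / volume_baseline
    return percent_change > individualized_threshold(svp, ap)

if __name__ == "__main__":
    # Hypothetical small nodule with many surface voxels and slight attachment.
    print(individualized_threshold(svp=0.60, ap=0.10))      # ~16.7% threshold
    print(nodule_grew(100.0, 120.0, svp=0.60, ap=0.10))     # 20% growth -> True
```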
On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?
Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro
2016-01-01
Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Dental age estimation: the role of probability estimates at the 10 year threshold.
Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham
2014-08-01
The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that the subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age were compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below the threshold on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
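A hedged sketch of the probability calculation described above (the original work used the NORMDIST function in Excel): for each developing tooth, a reference mean age and standard deviation for the observed stage give the probability that the subject is older than 10 years, and the per-tooth probabilities are averaged. The reference values below are invented placeholders, not data from the study.

```python
# Illustrative per-tooth probability averaging at the 10-year threshold;
# the (mean, sd) reference values are invented placeholders.
from statistics import NormalDist

def prob_over_threshold(stage_refs, threshold_age=10.0):
    """stage_refs: list of (mean_age, sd_age) for each tooth's observed stage."""
    probs = [1.0 - NormalDist(mu, sd).cdf(threshold_age) for mu, sd in stage_refs]
    return sum(probs) / len(probs)

if __name__ == "__main__":
    # Hypothetical subject with four assessable teeth.
    teeth = [(9.4, 0.8), (10.2, 0.9), (9.8, 0.7), (10.6, 1.0)]
    p = prob_over_threshold(teeth)
    print(f"P(age > 10 y) = {p:.2f}; classified as {'over' if p > 0.5 else 'under'} 10")
```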
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function, which is twice continuously differentiable, is constructed based on Stein's unbiased risk estimate. Then, using the new threshold function, a thresholding procedure based on the minimum mean square error is implemented, and the optimal estimate of the threshold for each layer of the SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of the chaotic signal well, and the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
Estimating phonation threshold pressure.
Fisher, K V; Swank, P R
1997-10-01
Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine if nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
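A minimal sketch of the Rubin's rules step referred to above: the pooled estimate and its variance from each of the m imputed meta-analyses are combined into a single estimate whose total variance reflects both within- and between-imputation uncertainty. The numbers in the example are invented.

```python
# Rubin's rules combination of per-imputation pooled results (generic sketch).
import numpy as np

def rubins_rules(estimates, variances):
    """estimates, variances: length-m arrays of per-imputation pooled results."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    q_bar = estimates.mean()                       # combined point estimate
    u_bar = variances.mean()                       # average within-imputation variance
    b = estimates.var(ddof=1)                      # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b
    return q_bar, total_var

if __name__ == "__main__":
    # Hypothetical pooled sensitivities (logit scale) from 5 imputed analyses.
    print(rubins_rules([0.82, 0.85, 0.80, 0.84, 0.83],
                       [0.004, 0.005, 0.004, 0.006, 0.005]))
```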
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportions of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
Influence of Spatial and Chromatic Noise on Luminance Discrimination.
Miquilini, Leticia; Walker, Natalie A; Odigie, Erika A; Guimarães, Diego Leite; Salomão, Railson Cruz; Lacerda, Eliza Maria Costa Brito; Cortes, Maria Izabel Tentes; de Lima Silveira, Luiz Carlos; Fitzgerald, Malinda E C; Ventura, Dora Fix; Souza, Givago Silva
2017-12-05
Pseudoisochromatic figures are designed to base discrimination of a chromatic target from a background solely on chromatic differences. This is accomplished by introducing luminance and spatial noise, thereby eliminating these two dimensions as cues. The inverse rationale can also be applied to luminance discrimination, if spatial and chromatic noise are used to mask those cues. In the current study, luminance contrast thresholds were estimated using a novel stimulus that relies on chromatic and spatial noise to mask those cues in a luminance discrimination task. This was accomplished by presenting stimuli composed of a mosaic of randomly colored circles, in which a Landolt-C target differed from the background only in luminance. Luminance contrast thresholds were estimated for different chromatic noise saturation conditions and compared to luminance contrast thresholds estimated using the same target in a non-mosaic stimulus. Moreover, the influence of the chromatic content of the noise on the luminance contrast threshold was also investigated. The luminance contrast threshold depended on the chromatic noise strength. It was 10-fold higher than thresholds estimated from the non-mosaic stimulus, but it was independent of the colour space location in which the noise was modulated. The present study introduces a new method to investigate luminance vision intended for both basic science and clinical applications.
Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.
Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew
2018-01-01
Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze data from an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
Finneran, James J; Houser, Dorian S
2006-05-01
Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to + 18 dB.
Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R
2010-01-01
Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
NASA Astrophysics Data System (ADS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-05-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
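As an illustration of the threshold-fitting step described above, the sketch below fits a power-law threshold E = αD^β to synthetic duration–rainfall pairs by ordinary least squares and by a 5% quantile regression implemented through the pinball loss. The data, the quantile level, and the optimizer are assumptions for the example, not the authors' procedure or the Umbria data set.

```python
# Minimal sketch: LS and low-quantile fits of a power-law rainfall threshold.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D = rng.uniform(1.0, 200.0, 300)                        # rainfall duration (h), synthetic
E = 5.0 * D**0.4 * rng.lognormal(0.0, 0.4, 300)         # cumulated event rainfall (mm), synthetic

x, y = np.log(D), np.log(E)

# Least-squares fit in log-log space: log E = a + b * log D.
b_ls, a_ls = np.polyfit(x, y, 1)

# Quantile regression at the 5th percentile via the pinball (check) loss,
# which approximates a lower envelope of the (D, E) cloud.
def pinball(params, tau=0.05):
    a, b = params
    r = y - (a + b * x)
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

a_qr, b_qr = minimize(pinball, x0=[a_ls, b_ls], method="Nelder-Mead").x

print(f"LS fit:          E = {np.exp(a_ls):.2f} * D^{b_ls:.2f}")
print(f"5% QR threshold: E = {np.exp(a_qr):.2f} * D^{b_qr:.2f}")
```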
NASA Technical Reports Server (NTRS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-01-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.
Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold
NASA Technical Reports Server (NTRS)
Srivastava, M. K.; Temkin, A.
1991-01-01
The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered to a form that contrasts with that of the previous angle-independent model. A second energy regime is also identified wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) = about 0.0003 and E(b) = about 0.25 eV.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-09-11
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions in which targets may exist are detected from the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map in favor of the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate.
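To make the final segmentation step concrete, here is a minimal sketch of a two-dimensional cell-averaging CFAR detector of the kind that could be applied to a saliency or intensity map. The window sizes, the scale factor, and the exponential clutter model are illustrative assumptions, not values or choices taken from the paper.

```python
# Minimal 2-D cell-averaging CFAR sketch: the local background is estimated from a
# training window with an inner guard window excluded, and a pixel is declared a
# detection when it exceeds a scaled version of that local estimate.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(image, train=15, guard=5, scale=10.0):
    """Return a boolean detection map for a 2-D intensity image."""
    img = image.astype(float)
    # Local means over the full (train x train) window and the inner guard window.
    full_mean = uniform_filter(img, size=train)
    guard_mean = uniform_filter(img, size=guard)
    n_full, n_guard = train**2, guard**2
    # Mean of the training annulus only (guard cells and cell under test excluded).
    background = (full_mean * n_full - guard_mean * n_guard) / (n_full - n_guard)
    return img > scale * background

# Toy example: a bright point target embedded in exponential (speckle-like) clutter.
rng = np.random.default_rng(1)
scene = rng.exponential(1.0, (128, 128))
scene[64, 64] += 50.0
det = ca_cfar(scene)
print("number of detections:", int(det.sum()))
print("target pixel detected:", bool(det[64, 64]))
```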
Froud, Robert; Abel, Gary
2014-01-01
Background: Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of (1 − sensitivity) and (1 − specificity). Results: There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion: Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of (1 − sensitivity) and (1 − specificity) is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
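The cut-point rule favored in the conclusion can be written in a few lines; the sketch below selects the candidate cut-point minimizing (1 − sensitivity)² + (1 − specificity)², i.e., the ROC point closest to the top-left corner. The change scores and improvement labels are synthetic stand-ins, not the trial data.

```python
# Minimal sketch of the sum-of-squares MIC cut-point rule.
import numpy as np

def closest_to_topleft(scores, improved):
    """Return the cut-point minimizing (1 - sens)^2 + (1 - spec)^2."""
    scores = np.asarray(scores, float)
    improved = np.asarray(improved, bool)
    best_cut, best_d2 = None, np.inf
    for c in np.unique(scores):
        pred = scores >= c                       # "improved" if change score >= c
        sens = np.mean(pred[improved])
        spec = np.mean(~pred[~improved])
        d2 = (1.0 - sens) ** 2 + (1.0 - spec) ** 2
        if d2 < best_d2:
            best_cut, best_d2 = c, d2
    return best_cut

rng = np.random.default_rng(2)
improved = rng.random(500) < 0.5                          # external anchor (synthetic)
change = np.where(improved, rng.normal(8, 5, 500), rng.normal(2, 5, 500))
print("MIC threshold estimate:", round(float(closest_to_topleft(change, improved)), 2))
```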
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
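A much-simplified sketch of the POET idea is given below: remove the top-K principal components from the sample covariance and soft-threshold the entries of the remaining (idiosyncratic) covariance while keeping its diagonal. The number of factors K and the threshold level τ are user-supplied here, whereas the paper derives data-driven choices; the simulated factor-model data are likewise only for illustration.

```python
# Simplified POET-style covariance estimate: low-rank part plus soft-thresholded residual.
import numpy as np

def poet(X, K=3, tau=0.1):
    """X: n-by-p data matrix; returns a regularized p-by-p covariance estimate."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)                 # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:K]               # indices of the K largest eigenvalues
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    resid = S - low_rank                           # idiosyncratic (residual) covariance
    off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)   # soft thresholding
    np.fill_diagonal(off, np.diag(resid))          # keep residual variances intact
    return low_rank + off

rng = np.random.default_rng(3)
factors = rng.normal(size=(400, 3))
loadings = rng.normal(size=(3, 50))
X = factors @ loadings + rng.normal(scale=0.5, size=(400, 50))
Sigma_hat = poet(X)
print("estimated covariance shape:", Sigma_hat.shape)
```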
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results found in the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
A novel approach to estimation of the time to biomarker threshold: applications to HIV.
Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc
2016-11-01
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
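The estimand can be illustrated with a small Monte Carlo sketch under a simplified model: a linear subject-specific trajectory plus independent measurement error and a fixed visit schedule. All parameter values below are assumed for illustration; the paper itself derives a closed-form expression rather than simulating.

```python
# Monte Carlo sketch: expected time until two consecutive sub-threshold measurements.
import numpy as np

def expected_time_to_threshold(b0, b1, resid_sd, threshold,
                               visit_gap=0.5, horizon=20.0, n_sim=5000, seed=0):
    """Mean time until two consecutive scheduled measurements fall below threshold."""
    rng = np.random.default_rng(seed)
    visits = np.arange(visit_gap, horizon + visit_gap, visit_gap)   # visit times (years)
    mean_traj = b0 + b1 * visits                                    # subject-specific mean
    times = np.full(n_sim, np.nan)
    for s in range(n_sim):
        obs = mean_traj + rng.normal(0.0, resid_sd, len(visits))    # noisy measurements
        below = obs < threshold
        hits = np.nonzero(below[:-1] & below[1:])[0]
        if hits.size:
            times[s] = visits[hits[0] + 1]     # time of the second consecutive low count
    return np.nanmean(times)

# Illustrative assumed values: baseline 600 cells/uL, decline 50 per year,
# residual SD 80, clinical threshold 350, six-monthly visits.
print(round(expected_time_to_threshold(600.0, -50.0, 80.0, 350.0), 2), "years (approx.)")
```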
Estimation of Effect Thresholds for the Development of Water Quality Criteria
Biological and ecological effect thresholds can be used for determining safe levels of nontraditional stressors. The U.S. EPA Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (WQC) [36] uses a risk assessment approach to estimate effect thre...
48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.
Code of Federal Regulations, 2012 CFR
2012-10-01
... simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System... Purchases at or under the simplified acquisition threshold. Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold, but not the...
Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar
2011-11-01
Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes have been relatively unexplored, although they are of potential interest for exploring concentration-response relationships. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan, and concentrations of the pollutants fine particulate matter (PM2.5), CO, NO2 and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg m⁻³ using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared to the standard linear relationship; e.g., in the time-series analysis, an interquartile range increase (9.2 μg m⁻³) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
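The profile-likelihood idea can be sketched as a grid search: for each candidate threshold c, fit a Poisson regression of daily counts on the hinge term max(x − c, 0) and keep the c with the largest profiled log-likelihood. The data below are synthetic and the model omits the confounder adjustment (season, temperature, and so on) that the real analyses include.

```python
# Profile-likelihood threshold search for a hinge-term Poisson regression (toy example).
import numpy as np
from scipy.optimize import minimize

def poisson_negloglik(beta, X, y):
    eta = np.clip(X @ beta, -30.0, 30.0)        # clipped for numerical stability
    return -(y @ eta - np.exp(eta).sum())       # negative log-likelihood (up to a constant)

def profile_threshold(x, y, candidates):
    best_c, best_ll = None, -np.inf
    for c in candidates:
        X = np.column_stack([np.ones_like(x), np.maximum(x - c, 0.0)])
        res = minimize(poisson_negloglik, x0=np.zeros(2), args=(X, y), method="BFGS")
        if -res.fun > best_ll:
            best_c, best_ll = c, -res.fun
    return best_c

rng = np.random.default_rng(4)
pm25 = rng.gamma(4.0, 3.0, 1000)                            # daily PM2.5 levels (synthetic)
rate = np.exp(1.0 + 0.04 * np.maximum(pm25 - 12.0, 0.0))    # true threshold at 12
counts = rng.poisson(rate)                                  # daily asthma event counts
grid = np.arange(5.0, 25.0, 0.5)
print("estimated PM2.5 threshold:", profile_threshold(pm25, counts.astype(float), grid))
```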
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, one estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of, or below, the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
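As a small illustration of the FWHM estimate mentioned above, the sketch below takes a single intensity profile across a synthetic Gaussian "vessel", finds the two half-maximum crossings above background, and interpolates between samples. Real use would average many profiles and account for the scanner point spread function; the profile here is purely synthetic.

```python
# Full-width-at-half-maximum diameter estimate from a 1-D intensity profile.
import numpy as np

def fwhm(positions, profile):
    """Full width at half maximum of a single-peaked intensity profile."""
    prof = np.asarray(profile, float)
    background = prof.min()
    half = background + 0.5 * (prof.max() - background)
    above = prof >= half
    i_first = np.argmax(above)                       # first sample at/above half maximum
    i_last = len(prof) - 1 - np.argmax(above[::-1])  # last sample at/above half maximum
    def crossing(i_lo, i_hi):
        # Linear interpolation of the half-maximum crossing between two samples.
        x0, x1, y0, y1 = positions[i_lo], positions[i_hi], prof[i_lo], prof[i_hi]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)
    return crossing(i_last, i_last + 1) - crossing(i_first - 1, i_first)

x = np.linspace(-5.0, 5.0, 201)                           # position across the vessel (mm)
profile = 100.0 * np.exp(-x**2 / (2 * 0.8**2)) + 10.0     # Gaussian "vessel", sigma = 0.8 mm
print("FWHM estimate (mm):", round(fwhm(x, profile), 3))  # expect about 2.355 * 0.8 = 1.88
```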
Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi
2018-06-01
Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 AACR.
Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.
2013-01-01
The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae), is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35°C) and to estimate the temperature thresholds and thermal constants needed for forecasting models based on heat accumulation units. The developmental periods of the different stages of the rice leaf folder decreased with increases in temperature from 18 to 34°C. Lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1°C, and thermal constants of 69, 270, 106, and 455 degree days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic non-linear optimSSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1 and 23.7°C, respectively. The upper and lower threshold temperatures were estimated as 36.4°C and 11.2°C for total development, indicating that the enzyme was half active and half inactive at these temperatures. These estimated thermal thresholds and degree days could be used to predict leaf folder activity in the field for effective management. PMID:24205891
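The linear degree-day estimates reported above follow from a simple regression of development rate on temperature: with rate = a + bT, the lower threshold is −a/b and the thermal constant is 1/b degree-days. The numbers in the sketch below are synthetic, not the leaf folder data.

```python
# Classical linear degree-day estimates from rearing-temperature data (synthetic values).
import numpy as np

temps = np.array([18.0, 20.0, 25.0, 30.0, 32.0, 34.0])   # rearing temperatures (degC)
days = np.array([62.0, 48.0, 33.0, 25.0, 22.5, 20.5])    # synthetic development times (days)
rate = 1.0 / days                                         # development rate (1/day)

b, a = np.polyfit(temps, rate, 1)    # linear model: rate = a + b * T
t_min = -a / b                       # lower developmental threshold (degC)
k = 1.0 / b                          # thermal constant (degree-days)
print(f"lower threshold ~ {t_min:.1f} degC, thermal constant ~ {k:.0f} degree-days")
```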
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared with the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
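A minimal sketch of the pixel-classification idea is given below: a logistic regression estimates the probability that the rain rate at a pixel exceeds the fixed threshold given a covariate, and the fractional rainy area is the mean of the predicted probabilities. The covariate and data are synthetic, and plain maximum likelihood is used instead of the partial-likelihood device described in the abstract.

```python
# Logistic regression for the exceedance probability of a rain-rate threshold (toy data).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, z):
    eta = X @ beta
    # Stable logistic negative log-likelihood: sum of log(1 + exp(eta)) - z * eta.
    return np.sum(np.logaddexp(0.0, eta) - z * eta)

rng = np.random.default_rng(5)
n = 2000
brightness = rng.normal(0.0, 1.0, n)                       # radiometer-like covariate (synthetic)
p_true = 1.0 / (1.0 + np.exp(-(-1.5 + 2.0 * brightness)))
exceeds = (rng.random(n) < p_true).astype(float)           # rain rate above threshold?

X = np.column_stack([np.ones(n), brightness])
beta_hat = minimize(neg_loglik, np.zeros(2), args=(X, exceeds), method="BFGS").x
p_hat = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
print("estimated coefficients:", np.round(beta_hat, 2))
print("estimated fractional area above threshold:", round(float(p_hat.mean()), 3))
```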
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, the accuracy and precision of parameter estimates increased with longer total monitoring periods and more-frequent observations. Given a fixed monitoring period and observation frequency, the accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon
2017-01-01
[Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok
2017-09-30
The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori
2016-10-01
To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P = 5.0 × 10⁻⁸, the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were P_sig = 3.24 × 10⁻⁸ (AFR), 9.26 × 10⁻⁸ (EUR), 1.83 × 10⁻⁷ (AMR), 1.61 × 10⁻⁷ (EAS) and 9.46 × 10⁻⁸ (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded P_sig = 3.25 × 10⁻⁸ (ALL) and 4.20 × 10⁻⁸ (ΔAFR). Our results indicate that the current threshold (P = 5.0 × 10⁻⁸) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples.
Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.
2018-01-01
Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016 and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture mark recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rates for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and was cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.
Laufenberg, Jared S; Clark, Joseph D; Chandler, Richard B
2018-01-01
Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016 and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture mark recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that annual apparent survival rates for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and was cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.
Determination of Cost-Effectiveness Threshold for Health Care Interventions in Malaysia.
Lim, Yen Wei; Shafie, Asrul Akmal; Chua, Gin Nie; Ahmad Hassali, Mohammed Azmi
2017-09-01
One major challenge in prioritizing health care using cost-effectiveness (CE) information is when alternatives are more expensive but more effective than existing technology. In such a situation, an external criterion in the form of a CE threshold that reflects the willingness to pay (WTP) per quality-adjusted life-year is necessary. To determine a CE threshold for health care interventions in Malaysia. A cross-sectional, contingent valuation study was conducted using a stratified multistage cluster random sampling technique in four states in Malaysia. One thousand thirteen respondents were interviewed in person for their socioeconomic background, quality of life, and WTP for a hypothetical scenario. The CE thresholds established using the nonparametric Turnbull method ranged from MYR12,810 to MYR22,840 (~US $4,000-US $7,000), whereas those estimated with the parametric interval regression model were between MYR19,929 and MYR28,470 (~US $6,200-US $8,900). Key factors that affected the CE thresholds were education level, estimated monthly household income, and the description of health state scenarios. These findings suggest that there is no single WTP value for a quality-adjusted life-year. The CE threshold estimated for Malaysia was found to be lower than the threshold value recommended by the World Health Organization. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger
2018-05-10
Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data, which results in poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., fewer than 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with less than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent collection of biased data. This could have important implications in assessment practice by shortening experiments and improving estimates, especially in clinical settings.
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by a shifted logarithm, ln(x − c) or ln(c − x), and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
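A rough sketch of the adjusted-gap idea is given below: standardize log-transformed values, order them, weight each gap between adjacent values by the standard-normal density at the gap midpoint (one way of adjusting for the expected frequency), and take the midpoint of the largest adjusted gap as the candidate threshold. The plain log transform, the synthetic data, and the omission of the critical-value significance test are all simplifications relative to the paper.

```python
# Density-weighted maximum-gap threshold estimate (simplified adjusted-gap sketch).
import numpy as np
from scipy.stats import norm

def adjusted_gap_threshold(values):
    """Midpoint of the largest density-weighted gap in the standardized ordered values."""
    values = np.asarray(values, float)
    mu, sd = values.mean(), values.std(ddof=1)
    z = np.sort((values - mu) / sd)
    gaps = np.diff(z)
    mids = 0.5 * (z[:-1] + z[1:])
    adjusted = gaps * norm.pdf(mids)     # weight each gap by the expected normal frequency
    i = int(np.argmax(adjusted))
    return mu + mids[i] * sd             # map the midpoint back to the input scale

rng = np.random.default_rng(6)
background = rng.lognormal(1.0, 0.3, 300)       # background population (synthetic)
anomalous = rng.lognormal(3.0, 0.2, 30)         # anomalous/mineralized population (synthetic)
logged = np.log(np.concatenate([background, anomalous]))
threshold = np.exp(adjusted_gap_threshold(logged))
print("estimated geochemical threshold:", round(float(threshold), 2))
```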
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps: data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Unlike previous fixed-threshold methods, which suffer from over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary from cluster to cluster. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The result indicates that the urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban areas in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.
Marine Targets Detection in Pol-SAR Data
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Jingsong
2016-08-01
In this poster, we present a new method of marine target detection in Pol-SAR data. A single-band SAR image, such as HH, VV or VH, can be used to find marine targets with a Constant False Alarm Rate (CFAR) algorithm. However, false detections may occur in a single-band SAR image because of antenna sidelobes, azimuth ambiguities, strong speckle noise, and so on. A Pol-SAR image provides more information about targets. After decomposition and false-color compositing, antenna sidelobes and azimuth ambiguities can be removed. The presented method therefore includes three steps: decomposition, false-color compositing, and supervised classification. Tests on a Radarsat-2 SAR image indicate good accuracy. The detection results were compared with Automatic Identification System (AIS) data; the rate of correct detection is above 95% and the false detection rate is below 5%.
Wen, Xuejiao; Qiu, Xiaolan; Han, Bing; Ding, Chibiao; Lei, Bin; Chen, Qi
2018-05-07
Range ambiguity is one of the factors that affect SAR image quality. Alternately transmitting up and down chirp modulation pulses is one of the methods used to suppress range ambiguity. However, the defocused range-ambiguous signal can still have stronger backscattering intensity than the mainlobe imaging area in some cases, which severely affects visual quality and subsequent applications. In this paper, a novel hybrid range ambiguity suppression method for up and down chirp modulation is proposed. The method obtains an image of the ambiguity area and appropriately reduces the ambiguity signal power by applying pulse compression with the opposite modulation rate and a CFAR detection method. The effectiveness and correctness of the approach are demonstrated by processing archive images acquired by the Chinese Gaofen-3 SAR sensor in full-polarization mode.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-01-01
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions in which targets may exist are detected from the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map in favor of the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate. PMID:26378543
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier, which estimates the most likely class of a given example. The second component consists of open set logic, which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes that were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
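The feed-forward structure described above can be sketched in a few lines: a base classifier proposes a candidate class, and the example is rejected as unknown when the calibrated posterior for that class falls below a per-class threshold. Here the base classifier, the hand-set thresholds, and the toy data are all assumptions; the cited work selects thresholds iteratively rather than by hand, and the unknown class is placed near the decision boundary so that the rejection is visible in this tiny demo.

```python
# Feed-forward open-set prediction: base classifier + posterior-threshold rejection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Two known training classes, centered at (0, 0) and (4, 4).
X_train = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(4.0, 1.0, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
base = LogisticRegression().fit(X_train, y_train)

thresholds = {0: 0.8, 1: 0.8}     # assumed per-class posterior thresholds (hand-set)

def open_set_predict(X):
    probs = base.predict_proba(X)
    labels = probs.argmax(axis=1)
    accept = probs.max(axis=1) >= np.array([thresholds[c] for c in labels])
    return np.where(accept, labels, -1)          # -1 denotes "unknown class"

# Test points: five from known class 0, five from an unseen class placed near the
# decision boundary so its posteriors are ambiguous. A known-class point that happens
# to fall near the boundary may also be rejected, which is the intended behavior.
X_test = np.vstack([rng.normal(0.0, 1.0, (5, 2)),
                    rng.normal([4.0, 0.0], 0.5, (5, 2))])
print(open_set_predict(X_test))
```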
Point estimation following two-stage adaptive threshold enrichment clinical trials.
Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel
2018-05-31
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
1997-01-01
One of the TRMM radar products of interest is the monthly-averaged rain rates over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so the negative bias of the high-resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
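The core of the single-threshold idea can be written in a few lines: compute the fraction of footprints whose rain rate exceeds a threshold near 5 mm/h and regress it against independently known area-average rain rates. The sketch below uses synthetic lognormal rain fields purely for illustration; it is not the TRMM processing chain.

import numpy as np

def fractional_area(rain_rates, threshold=5.0):
    # Fraction of footprints exceeding the rain-rate threshold (mm/h)
    return np.mean(np.asarray(rain_rates) > threshold)

# Calibrate a linear relation <R> ~ a * F + b over many synthetic scenes
rng = np.random.default_rng(0)
scenes = [rng.lognormal(mean=rng.uniform(-0.5, 1.0), sigma=1.0, size=2000)
          for _ in range(50)]
F = np.array([fractional_area(s) for s in scenes])
R = np.array([s.mean() for s in scenes])
a, b = np.polyfit(F, R, 1)
print(f"area-average rain rate ~ {a:.2f} * F + {b:.2f}")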
Comparison between ABR with click and narrow band chirp stimuli in children.
Zirn, Stefan; Louza, Julia; Reiman, Viktor; Wittlinger, Natalie; Hempel, John-Martin; Schuster, Maria
2014-08-01
Click and chirp-evoked auditory brainstem responses (ABR) are applied for the estimation of hearing thresholds in children. The present study analyzes ABR thresholds obtained with both methods across a large sample of children's ears. The aim was to demonstrate the correlation between both methods using narrow band chirp and click stimuli. Click and chirp evoked ABRs were measured in 253 children aged from 0 to 18 years to determine their individual auditory threshold. The delay-compensated stimuli were narrow band CE chirps with either 2000 Hz or 4000 Hz center frequencies. Measurements were performed consecutively during natural sleep, and under sedation or general anesthesia. Threshold estimation was performed for each measurement by two experienced audiologists. Pearson correlation analysis revealed highly significant correlations (r=0.94) between click and chirp derived thresholds for both 2 kHz and 4 kHz chirps. No considerable differences were observed across age ranges or between genders. Comparing the thresholds estimated using ABR with click stimuli and chirp stimuli, only 0.8-2% of the 2000 Hz NB-chirp and 0.4-1.2% of the 4000 Hz NB-chirp measurements differed by more than 15 dB for different degrees of hearing loss or normal hearing. The results suggest that either NB-chirp or click ABR is sufficient for threshold estimation. This holds for the chirp frequencies of 2000 Hz and 4000 Hz. The use of either click- or chirp-evoked ABR allows a reduction of recording time in young infants. Nevertheless, to cross-check the results of one of the methods, we recommend measurements with the other method as well. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Saxena, Udit; Allan, Chris; Allen, Prudence
2017-06-01
Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear such as ear canal volume and static compliance of the middle ear could possibly affect the measurements of reflex thresholds and thus impact its interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies were conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to that of typically developing children and adults to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured in 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend for higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The lack of correlation between threshold and static compliance estimates in children as was observed in the adults may suggest a nonmechanical explanation for age and clinically related effects. American Academy of Audiology
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
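A minimal sketch of the triggering rule follows: the DRMS is computed from the position block of the estimation error covariance and a measurement is requested only when it exceeds the threshold. The covariance layout and threshold value are illustrative assumptions.

import numpy as np

def needs_measurement(P, threshold, pos_idx=(0, 1)):
    # DRMS for a 2-D position estimate: square root of the trace of the
    # position sub-block of the estimation error covariance matrix.
    drms = np.sqrt(P[pos_idx[0], pos_idx[0]] + P[pos_idx[1], pos_idx[1]])
    return drms > threshold

P = np.diag([0.04, 0.09, 0.01, 0.01])        # toy covariance (x, y, vx, vy)
print(needs_measurement(P, threshold=0.3))   # DRMS ~ 0.36 -> True, request measurement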
NASA Astrophysics Data System (ADS)
Meneghini, Robert
1998-09-01
A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
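The truncated-fit step can be illustrated as follows: the lognormal parameters are found by a nonlinear least squares fit of the conditional (truncated) CDF to the empirical CDF of the surviving rain rates, after which the untruncated mean is recovered. The cutoffs and parameters are illustrative, and the proxy variable Q used to flag attenuated returns is not modeled here.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

LO, HI = 0.5, 25.0                     # low/high truncation limits (illustrative)

def truncated_cdf(x, sigma, scale):
    # CDF of a lognormal conditioned on LO < X < HI
    F = lambda v: lognorm.cdf(v, s=sigma, scale=scale)
    return (F(x) - F(LO)) / (F(HI) - F(LO))

rng = np.random.default_rng(0)
rain = lognorm.rvs(s=0.9, scale=4.0, size=5000, random_state=rng)
kept = np.sort(rain[(rain > LO) & (rain < HI)])
ecdf = (np.arange(1, kept.size + 1) - 0.5) / kept.size

(sigma_hat, scale_hat), _ = curve_fit(truncated_cdf, kept, ecdf, p0=[1.0, 3.0])
print(lognorm.mean(s=sigma_hat, scale=scale_hat))    # estimated area-average rate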
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
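The two ingredients named above can be sketched directly: soft-thresholding of the off-diagonal entries for sparsity, and projection of the eigenvalues onto [eps, inf) for positive definiteness. Naively alternating them, as below, mimics but is not the paper's ADMM algorithm; the penalty lam and floor eps are illustrative.

import numpy as np

def soft_threshold_offdiag(S, lam):
    # Soft-threshold entries, leaving the diagonal unpenalised
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def project_eigenvalues(S, eps=1e-3):
    # Clip eigenvalues at eps to enforce positive definiteness
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                  # n = 50 samples, p = 20 variables
Sigma_hat = np.cov(X, rowvar=False)
for _ in range(20):                            # naive alternating steps
    Sigma_hat = project_eigenvalues(soft_threshold_offdiag(Sigma_hat, lam=0.1))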
Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A
2017-03-01
Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded through chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
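A single step of maximum-likelihood threshold hunting can be sketched as follows: given the stimulus intensities tested so far and the binary presence or absence of a motor evoked potential, the threshold estimate is the value maximizing the likelihood of a cumulative-Gaussian response curve with an assumed spread, and the next stimulus is delivered at that estimate. The spread, grid, and units are illustrative; this is not the modified algorithm fitted to the marmoset data.

import numpy as np
from scipy.stats import norm

def ml_threshold(intensities, responses, spread=5.0, grid=np.arange(0, 101, 0.5)):
    # responses: 1 if a motor evoked potential was observed, else 0
    intensities = np.asarray(intensities, float)
    responses = np.asarray(responses, float)
    loglik = []
    for t in grid:
        p = np.clip(norm.cdf((intensities - t) / spread), 1e-6, 1 - 1e-6)
        loglik.append(np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)))
    return grid[int(np.argmax(loglik))]

# Next stimulation intensity = current ML estimate of the threshold
print(ml_threshold([30, 40, 50, 60], [0, 0, 1, 1]))   # ~45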
Comparison of algorithms of testing for use in automated evaluation of sensation.
Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M
1990-10-01
Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus changing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second and with null stimuli interspersed (Békésy tracking with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.
ERIC Educational Resources Information Center
Zaitoun, Maha; Cumming, Steven; Purcell, Alison; O'Brien, Katie
2017-01-01
Purpose: This study assesses the impact of patient clinical history on audiologists' performance when interpreting auditory brainstem response (ABR) results. Method: Fourteen audiologists' accuracy in estimating hearing threshold for 16 infants through interpretation of ABR traces was compared on 2 occasions at least 5 months apart. On the 1st…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... estimated cost of the case exceeds the adjusted outlier threshold. We calculate the adjusted outlier... to 80 percent of the difference between the estimated cost of the case and the outlier threshold. In... Federal Prospective Payment Rates VI. Update to Payments for High-Cost Outliers under the IRF PPS A...
USDA-ARS?s Scientific Manuscript database
(Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when the system matrix H has low coherence; however, the H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
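For reference, the two classical rules that the hybrid rule generalizes, together with a generic iterative soft-thresholding loop of the kind referred to above, look like this in Python; the paper's specific hybrid rule and SURE-based hyperparameter selection are not reproduced.

import numpy as np

def hard_threshold(x, t):
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    # Proximal operator underlying the lasso
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, H, t, step, n_iter=200):
    # Generic iterative soft-thresholding for y = H x + noise (step < 1/||H||^2)
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * H.T @ (y - H @ x), t)
    return x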
Shen, Yi
2015-01-01
Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722
Gas composition sensing using carbon nanotube arrays
NASA Technical Reports Server (NTRS)
Li, Jing (Inventor); Meyyappan, Meyya (Inventor)
2008-01-01
A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, ..., N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
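Reading a breakdown threshold off the measured curve and matching it against tabulated values can be sketched as below; the detection level, tolerance, candidate table, and toy current curve are illustrative assumptions, not values from the patent.

import numpy as np

def breakdown_voltage(pulse_voltages, currents, detect_level):
    # First pulse voltage at which the discharge current exceeds the detection level
    currents = np.asarray(currents)
    idx = np.argmax(currents > detect_level)
    return pulse_voltages[idx] if currents[idx] > detect_level else None

def match_gas(v_breakdown, candidates, tol=10.0):
    # Candidate gases whose known breakdown threshold lies within the tolerance
    return [gas for gas, v in candidates.items() if abs(v - v_breakdown) <= tol]

V = np.arange(200, 801, 25)                   # applied pulse voltages (V)
I = np.where(V < 450, 1e-9, 1e-6)             # toy current curve with a jump at 450 V
vb = breakdown_voltage(V, I, detect_level=1e-7)
print(vb, match_gas(vb, {"NH3": 460.0, "NO2": 610.0}))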
Effect of skin-transmitted vibration enhancement on vibrotactile perception.
Tanaka, Yoshihiro; Ueda, Yuichiro; Sano, Akihito
2015-06-01
Vibration on skin elicited by the mechanical interaction of touch between the skin and an object propagates to skin far from the point of contact. This paper investigates the effect of skin-transmitted vibration on vibrotactile perception. To enhance the transmission of high-frequency vibration on the skin, stiff tape was attached to the skin so that the tape covered the bottom surface of the index finger from the periphery of the distal interphalangeal joint to the metacarpophalangeal joint. Two psychophysical experiments with high-frequency vibrotactile stimuli of 250 Hz were conducted. In the psychophysical experiments, discrimination and detection thresholds were estimated and compared between conditions of the presence or the absence of the tape (normal bare finger). A method of limits was applied for the detection threshold estimation, and the discrimination task using a reference stimulus and six test stimuli with different amplitudes was applied for the discrimination threshold estimation. The stimulation was given to the bare fingertips of participants. Results showed that the detection threshold was enhanced by attaching the tape, and discrimination threshold enhancement by attaching the tape was confirmed for participants who had relatively large discrimination thresholds with the normal bare finger. Skin-transmitted vibration was then measured with an accelerometer alongside the psychophysical experiments. Results showed that the skin-transmitted vibration when the tape was attached to the skin was larger than with normal bare skin. There is a correlation between the increase in skin-transmitted vibration and the enhancement of the discrimination threshold.
Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses
Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali
2017-01-01
Introduction: Mild Cognitive Impairment (MCI), a disorder of the elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). Temporal region is one of the initial areas, which gets impaired in the early stage of AD. Therefore, auditory cortical evoked potential could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of Auditory Steady-State Response (ASSR) to 40 Hz and 80 Hz were compared between Alzheimer Disease (AD), MCI, and control groups. A total of 42 patients (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: Significant differences in normal subjects were observed in estimated ASSR thresholds with 2 modulation rates in 3 frequencies in both ears. However, the difference was significant only in 500 Hz in the MCI group, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients with regard to the estimated ASSR thresholds with 2 modulation rates and 3 frequencies in both ears. A significant difference was observed between the normal and MCI groups at 2000 Hz, too. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in auditory cortex compared to that in normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates by ASSR test could be a potentially helpful test for detecting cognitive impairment. PMID:29158880
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
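In the fixed-threshold approach, the ATI itself reduces to a short computation: for each infrared image, count the area colder than the chosen brightness-temperature threshold and accumulate area times time, then regress rainfall volume against the resulting ATI. The sketch below assumes a 235 K threshold and per-pixel areas purely for illustration.

import numpy as np

def area_time_integral(ir_images, pixel_area_km2, dt_hours, tb_threshold=235.0):
    # ATI in km^2 * h: sum of (cold-cloud area) x (time step) over the image sequence
    return sum(np.sum(img < tb_threshold) * pixel_area_km2 * dt_hours
               for img in ir_images)

# A linear relation rain_volume ~ a * ATI + b would then be fit across cloud clusters:
# a, b = np.polyfit([area_time_integral(seq, 16.0, 0.5) for seq in clusters], volumes, 1)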
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Robust w-Estimators for Cryo-EM Class Means.
Huang, Chenxi; Tagare, Hemant D
2016-02-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised because of the outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
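The flavor of a w-estimator for the class mean can be sketched as an iteratively reweighted average, where images with large residuals from the current mean receive small weights; the Gaussian weight function and scale below are illustrative and do not reproduce the paper's estimator or its contrast-transfer-function extension.

import numpy as np

def robust_class_mean(images, scale=1.0, n_iter=10):
    # images: array-like of shape (n_images, H, W), already aligned
    imgs = np.asarray(images, float)
    mean = imgs.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((imgs - mean).reshape(len(imgs), -1), axis=1)
        # Down-weight images far from the current mean (outlier candidates)
        w = np.exp(-0.5 * (resid / (scale * np.median(resid))) ** 2)
        mean = np.tensordot(w, imgs, axes=1) / w.sum()
    return mean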
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
Psychophysical measurements in children: challenges, pitfalls, and considerations.
Witton, Caroline; Talcott, Joel B; Henning, G Bruce
2017-01-01
Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytically standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
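In the normal-distribution setting, the cost-minimizing cutoff can be found by a simple grid search over candidate thresholds, as in the hedged sketch below; the Gaussian parameters, prevalence, and decision costs are illustrative, and the sampling-uncertainty and bootstrap components of the method are not included.

import numpy as np
from scipy.stats import norm

def optimal_threshold(mu0, sd0, mu1, sd1, prev, c_fp=1.0, c_fn=4.0):
    # Classify as diseased when the marker exceeds the threshold t
    grid = np.linspace(min(mu0, mu1) - 4 * max(sd0, sd1),
                       max(mu0, mu1) + 4 * max(sd0, sd1), 2001)
    cost = (c_fp * (1 - prev) * norm.sf(grid, mu0, sd0)     # false positives
            + c_fn * prev * norm.cdf(grid, mu1, sd1))       # false negatives
    return grid[np.argmin(cost)]

print(optimal_threshold(mu0=0.0, sd0=1.0, mu1=2.0, sd1=1.0, prev=0.2))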
Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig
2018-05-11
Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize two EAT volume estimates. The gold standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190 HU, -30 HU), (-190 HU, -15 HU), (-175 HU, -15 HU) and were analyzed by a semi-automated 3D Fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from the NC dataset standard by a magnitude of 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of (-190 HU, -3 HU) provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of (-190 HU, -30 HU). Copyright © 2018. Published by Elsevier B.V.
Estimation of frequency offset in mobile satellite modems
NASA Technical Reports Server (NTRS)
Cowley, W. G.; Rice, M.; Mclean, A. N.
1993-01-01
In mobilesat applications, frequency offset on the received signal must be estimated and removed prior to further modem processing. A straightforward method of estimating the carrier frequency offset is to raise the received MPSK signal to the M-th power, and then estimate the location of the peak spectral component. An analysis of the lower signal to noise threshold of this method is carried out for BPSK signals. Predicted thresholds are compared to simulation results. It is shown how the method can be extended to pi/M MPSK signals. A real-time implementation of frequency offset estimation for the Australian mobile satellite system is described.
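For BPSK (M = 2), the M-th power method amounts to squaring the received samples to strip the modulation and locating the strongest spectral line, whose frequency is twice the carrier offset. The zero-padded FFT peak search below is a hedged illustration with arbitrary sample rate, offset, and SNR, not the flight implementation.

import numpy as np

def mth_power_offset(x, fs, M=2):
    # Raise to the M-th power, locate the spectral peak, divide its frequency by M
    spec = np.fft.fftshift(np.abs(np.fft.fft(x ** M, 8 * len(x))))
    freqs = np.fft.fftshift(np.fft.fftfreq(8 * len(x), d=1.0 / fs))
    return freqs[np.argmax(spec)] / M

rng = np.random.default_rng(0)
fs, f_off, n = 10_000.0, 123.0, 2048
symbols = rng.choice([-1.0, 1.0], size=n)                    # BPSK, one symbol per sample
t = np.arange(n) / fs
x = symbols * np.exp(2j * np.pi * f_off * t) + 0.3 * (rng.normal(size=n)
                                                      + 1j * rng.normal(size=n))
print(mth_power_offset(x, fs))                               # close to 123 Hz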
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
Dennis R. Becker; Debra Larson; Eini C. Lowell; Robert B. Rummer
2008-01-01
The HCR (Harvest Cost-Revenue) Estimator is engineering and financial analysis software used to evaluate stand-level financial thresholds for harvesting small-diameter ponderosa pine (Pinus ponderosa Dougl. ex Laws.) in the Southwest United States. The Windows-based program helps contractors and planners to identify costs associated with tree...
Rodbard, David
2012-10-01
We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and gaussianness can be tested objectively and used to optimize the transformation. The method performs well with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
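The underlying calculation is compact: once the glucose readings (or a transform of them) are approximately Gaussian with a given mean and SD, the risk beyond any threshold follows from the normal CDF. The numbers below are illustrative.

from scipy.stats import norm

def risk_below(threshold, mean, sd):
    # e.g. risk of hypoglycemia: fraction of readings below the threshold
    return norm.cdf(threshold, loc=mean, scale=sd)

def risk_above(threshold, mean, sd):
    # e.g. risk of hyperglycemia: fraction of readings above the threshold
    return norm.sf(threshold, loc=mean, scale=sd)

print(risk_below(80, mean=154, sd=45), risk_above(250, mean=154, sd=45))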
TU, Frank F.; EPSTEIN, Aliza E.; POZOLO, Kristen E.; SEXTON, Debra L.; MELNYK, Alexandra I.; HELLMAN, Kevin M.
2012-01-01
Objective Catheterization to measure bladder sensitivity is aversive and hinders human participation in visceral sensory research. Therefore, we sought to characterize the reliability of sonographically-estimated female bladder sensory thresholds. To demonstrate this technique’s usefulness, we examined the effects of self-reported dysmenorrhea on bladder pain thresholds. Methods Bladder sensory threshold volumes were determined during provoked natural diuresis in 49 healthy women (mean age 24 ± 8) using three-dimensional ultrasound. Cystometric thresholds (Vfs – first sensation, Vfu – first urge, Vmt – maximum tolerance) were quantified and related to bladder urgency and pain. We estimated reliability (one-week retest and interrater). Self-reported menstrual pain was examined in relationship to bladder pain, urgency and volume thresholds. Results Average bladder sensory thresholds (mLs) were Vfs (160±100), Vfu (310±130), and Vmt (500±180). Interrater reliability ranged from 0.97–0.99. One-week retest reliability was Vmt = 0.76 (95% CI 0.64–0.88), Vfs = 0.62 (95% CI 0.44–0.80), and Vfu = 0.63, (95% CI 0.47–0.80). Bladder filling rate correlated with all thresholds (r = 0.53–0.64, p < 0.0001). Women with moderate to severe dysmenorrhea pain had increased bladder pain and urgency at Vfs and increased pain at Vfu (p’s < 0.05). In contrast, dysmenorrhea pain was unrelated to bladder capacity. Discussion Sonographic estimates of bladder sensory thresholds were reproducible and reliable. In these healthy volunteers, dysmenorrhea was associated with increased bladder pain and urgency during filling but unrelated to capacity. Plausibly, dysmenorrhea sufferers may exhibit enhanced visceral mechanosensitivity, increasing their risk to develop chronic bladder pain syndromes. PMID:23370073
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall threshold plays an important role in flash flood warning. A simple and easy method, using the Rational Equation to calculate rainfall threshold, was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained, and the net rainfall was calculated. Three aspects of rainfall losses, i.e., depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall was the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after considering the watershed soil moisture. In order to demonstrate this method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method closely approximated the rainfall thresholds obtained from CFFSE and were in accordance with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provided a quick and convenient way to calculate rainfall thresholds for flash flood warning for grassroots staff and offered technical support for estimating rainfall thresholds.
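The inversion of the Rational Equation described above can be sketched in a few lines: with Q = 0.278·C·i·A (Q in m³/s, rainfall intensity i in mm/h, and drainage area A in km²), the critical flow gives a critical intensity, and adding back the estimated losses gives the rainfall threshold. The runoff coefficient, storm duration, and loss values below are illustrative, and the soil-moisture adjustment is not included.

def rainfall_threshold(q_critical, area_km2, runoff_coeff, duration_h, losses_mm):
    # Critical intensity from the Rational Equation Q = 0.278 * C * i * A
    i_critical = q_critical / (0.278 * runoff_coeff * area_km2)   # mm/h
    net_rainfall = i_critical * duration_h                        # mm over the storm
    # Rainfall threshold = net rainfall + losses (depression storage,
    # vegetation interception, soil infiltration)
    return net_rainfall + losses_mm

# e.g. 35 m^3/s critical flow, 20 km^2 watershed, C = 0.6, 1 h storm, 15 mm losses
print(rainfall_threshold(35.0, 20.0, 0.6, 1.0, 15.0))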
Ozcelik, O; Kelestimur, H
2004-01-01
Anaerobic threshold, which describes the onset of a systematic increase in blood lactate concentration, is a widely used concept in clinical and sports medicine. A deflection point in the heart rate-work rate relationship has been introduced to determine the anaerobic threshold non-invasively. However, some researchers have consistently reported a heart rate deflection at higher work rates, while others have not. The present study was designed to investigate whether the heart rate deflection point accurately predicts the anaerobic threshold under the condition of acute hypoxia. Eight untrained males performed two incremental exercise tests using an electromagnetically braked cycle ergometer: one breathing room air and one breathing 12% O2. The anaerobic threshold was estimated using the V-slope method and determined from the increase in blood lactate and the decrease in standard bicarbonate concentration. This threshold was also estimated from the deflection point in the heart rate-work rate relationship. Not all subjects exhibited a heart rate deflection: only two subjects in the control group and four subjects in the hypoxia group showed a heart rate deflection. Additionally, the heart rate deflection point overestimated the anaerobic threshold. In conclusion, the heart rate deflection point was not an accurate predictor of the anaerobic threshold, and acute hypoxia did not systematically affect the heart rate-work rate relationship.
Bayesian estimation of dose thresholds
NASA Technical Reports Server (NTRS)
Groer, P. G.; Carnes, B. A.
2003-01-01
An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
Subaperture clutter filter with CFAR signal detection
Ormesher, Richard C.; Naething, Richard M.
2016-08-30
The various technologies presented herein relate to the determination of whether a received signal comprising radar clutter further comprises a communication signal. The communication signal can comprise a preamble, a data symbol, communication data, etc. A first portion of the radar clutter is analyzed to determine a radar signature of the first portion of the radar clutter. A second portion of the radar clutter can be extracted based on the radar signature of the first portion. Following extraction, any residual signal can be analyzed to retrieve preamble data, etc. The received signal can be based upon a linear frequency modulation (e.g., a chirp modulation), whereby the chirp frequency can be determined and the frequency of transmission of the communication signal can be based accordingly thereon. The duration and/or bandwidth of the communication signal can be a portion of the duration and/or bandwidth of the radar clutter.
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by $2/\pi$ ($-1.96$ dB) when comparing to an ideal $\infty$-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
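The 2/π (-1.96 dB) low-SNR penalty quoted above is easy to reproduce numerically for the simplest case of a known zero threshold: estimating a small mean from sign-quantized Gaussian samples inflates the estimator variance by roughly π/2 relative to the unquantized sample mean. The Monte Carlo below is a textbook illustration, not the paper's channel estimator with unknown offset.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, n, trials = 0.05, 1.0, 2000, 4000
est_1bit, est_ideal = [], []
for _ in range(trials):
    x = mu + sigma * rng.normal(size=n)
    p_hat = np.clip(np.mean(x > 0), 1e-3, 1 - 1e-3)
    est_1bit.append(sigma * norm.ppf(p_hat))   # invert P(x > 0) = Phi(mu / sigma)
    est_ideal.append(x.mean())                 # unquantized benchmark
print(np.var(est_1bit) / np.var(est_ideal))    # ~ pi/2 = 1.57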
Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C
2012-02-01
It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in power output (PO) equivalent to PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using 2 methods; LT(Δ1), a 1 mmol·L(-1) or greater rise in blood lactate in response to an increase in workload and LT(4.0), blood lactate of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used but is not equivalent for all methods of LT determination including LT(Δ1).
Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers
Situ, Ping; Simpson, Trefford; Begley, Carolyn
2016-01-01
Purpose To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (Baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus were estimated using a 1-100 scale (1 very mild, 100 very strong). Results In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001, post hoc asymptomatic vs. symptomatic group, all p’s < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p=0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p>0.05) at detection threshold. Conclusions Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090
NASA Astrophysics Data System (ADS)
Kazanskiĭ, P. G.
1989-02-01
A threshold of photoinduced conversion of an ordinary wave into an extraordinary one was discovered for lithium niobate optical waveguides. The threshold intensity of the radiation was determined for waveguides prepared under different conditions. The experimental results were compared with theoretical estimates.
Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.
Rhudy, Jamie L; France, Christopher R
2011-07-01
The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó
2017-08-01
Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration, above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to the dissolution behavior from immediate release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model dependent and a model independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granules and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v could be determined for both particle sizes of the API above which an abrupt decrease of the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps, such as roll compaction and tableting. Both methods that were applied for the estimation of percolation threshold provided comparable values.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
Baker, Simon; Priest, Patricia; Jackson, Rod
2000-01-01
Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90 and 169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15%, the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to lower blood pressure in primary care. PMID:10710577
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N
2014-12-01
Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of the thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate reliability of the method of levels performed with a new, low cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the lowest sound level that is audible to the listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmic sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds using the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid convergence to a local minimum in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael
2015-01-01
To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and whether there are harms. Copyright © 2014. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehan, Daniel M.
2006-01-15
We tested the hypothesis that no threshold exists when estradiol acts through the same mechanism as an active endogenous estrogen. A Michaelis-Menten (MM) equation accounting for response saturation, background effects, and endogenous estrogen level fit a turtle sex-reversal data set with no threshold and estimated the endogenous dose. Additionally, 31 diverse literature dose-response data sets were analyzed by adding a term for nonhormonal background; good fits were obtained but endogenous dose estimations were not significant due to low resolving power. No thresholds were observed. Data sets were plotted using a normalized MM equation; all 178 data points were accommodated on a single graph. Response rates from ≈1% to >95% were well fit. The findings contradict the threshold assumption and low-dose safety. Calculating risk and assuming additivity of effects from multiple chemicals acting through the same mechanism rather than assuming a safe dose for nonthresholded curves is appropriate.
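As a hedged illustration of the kind of model described above, the sketch below fits a Michaelis-Menten dose-response with a nonhormonal background term and an endogenous-dose offset; the functional form, data values and parameter names are assumptions for illustration, not the study's actual fit.

```python
# Minimal sketch: fit a Michaelis-Menten dose-response with a background term
# and an endogenous-dose offset. All data and starting values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def mm_response(dose, background, r_max, k, d_endo):
    """Response rate: background plus a saturating term driven by
    (exogenous dose + estimated endogenous dose)."""
    d_total = dose + d_endo
    return background + (r_max - background) * d_total / (k + d_total)

# Hypothetical dose (arbitrary units) and response-rate data
dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([0.04, 0.07, 0.12, 0.28, 0.55, 0.80, 0.93])

p0 = [0.02, 0.9, 5.0, 0.05]                       # initial guesses
bounds = ([0, 0, 1e-6, 0], [1, 1, 1e3, 10])
popt, pcov = curve_fit(mm_response, dose, resp, p0=p0, bounds=bounds)
background, r_max, k, d_endo = popt
print(f"background={background:.3f}, Rmax={r_max:.3f}, K={k:.2f}, "
      f"endogenous dose ≈ {d_endo:.3f}")
```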
Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology
NASA Astrophysics Data System (ADS)
Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.
2015-01-01
Collagen-elastin biopolymer thin films treated with a CPA Ti:sapphire laser (Femtopower Compact Pro) at 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. A process of surface modification and microporous scaffold creation after ultrashort laser irradiation has been observed. The single-shot (N=1) and multi-shot (N>1) ablation threshold values were estimated by studying the linear relationship between the square of the crater diameter D² and the logarithm of the laser fluence F, determining the threshold fluences for N = 1, 2, 5, 10, 15 and 30 laser pulses. The incubation analysis, calculating the incubation coefficient ξ for the multi-shot fluence threshold of the selected materials from the power-law relationship Fth(N) = Fth(1)·N^(ξ-1), was also carried out. In this paper, we also consider an alternative calculation of the multi-shot ablation threshold from the logarithmic dependence of the ablation rate d on the laser fluence. The morphological surface changes of the modified regions were characterized by scanning electron microscopy to assess the variations generated by the laser treatment.
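The threshold-fluence analysis described above lends itself to a short numerical sketch: a linear fit of D² against ln F to recover the single-shot threshold, and a log-log fit of the quoted power law Fth(N) = Fth(1)·N^(ξ-1) to recover the incubation coefficient. All measurement values below are hypothetical.

```python
# Sketch of the standard single-shot threshold analysis (D^2 vs ln F) and the
# incubation power law F_th(N) = F_th(1) * N**(xi - 1). Hypothetical data only.
import numpy as np

# Hypothetical crater diameters (um) at several peak fluences (J/cm^2), N = 1
fluence = np.array([0.5, 0.8, 1.2, 2.0, 3.0])
diam_um = np.array([4.0, 9.5, 13.0, 17.5, 20.5])

slope, intercept = np.polyfit(np.log(fluence), diam_um**2, 1)
w0_sq = slope / 2.0                        # from D^2 = 2*w0^2 * ln(F / F_th)
f_th_single = np.exp(-intercept / slope)   # fluence where D^2 extrapolates to 0
print(f"beam radius^2 ≈ {w0_sq:.1f} um^2, F_th(1) ≈ {f_th_single:.2f} J/cm^2")

# Incubation: fit ln F_th(N) = ln F_th(1) + (xi - 1) * ln N
n_pulses = np.array([1, 2, 5, 10, 15, 30])
f_th_n = np.array([0.42, 0.38, 0.33, 0.30, 0.28, 0.25])   # hypothetical thresholds
b, a = np.polyfit(np.log(n_pulses), np.log(f_th_n), 1)
xi = b + 1.0
print(f"incubation coefficient xi ≈ {xi:.2f}")
```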
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to changes in illumination throughout the day, which leads to greater vehicle detection performance compared to a fixed, user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce false estimates in vehicle tracking caused by uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
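The paper's specific w-estimator is not reproduced here; as a loosely related illustration of a threshold-free, weight-based robust average, the sketch below down-weights outlier images smoothly with a Tukey biweight on per-image residual norms. The data, weight function and tuning constants are illustrative assumptions.

```python
# Hedged sketch: a robust weighted mean of aligned images in which outliers are
# down-weighted smoothly instead of being rejected by a hard correlation
# threshold. Not the paper's w-estimator; a generic Tukey-biweight illustration.
import numpy as np

def robust_class_mean(images, c=4.685, n_iter=10):
    """images: array (n_images, h, w). Returns a robust weighted mean image."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
        scale = np.median(resid) / 0.6745 + 1e-12        # crude robust scale
        u = resid / (c * scale)
        w = np.where(u < 1, (1 - u**2) ** 2, 0.0)        # Tukey biweight weights
        mean = np.tensordot(w, images, axes=1) / w.sum() # weighted average image
    return mean

rng = np.random.default_rng(6)
good = rng.normal(0, 1, (40, 32, 32)) + 5.0              # aligned particle images
junk = rng.normal(0, 1, (5, 32, 32)) * 10                # contaminant/outlier images
estimate = robust_class_mean(np.concatenate([good, junk]))
print("robust class-mean pixel average ≈", round(float(estimate.mean()), 2))
```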
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion , anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not
Estimating daily climatologies for climate indices derived from climate model data and observations
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof
2015-01-01
Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds, and the method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
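The abstract does not spell out the fitting approach; as one hedged reading, the sketch below smooths a noisy day-of-year 90th-percentile threshold with a single annual harmonic fitted by least squares. The data and the choice of one harmonic are assumptions for illustration.

```python
# Hedged sketch of a "fitting approach" to daily percentile thresholds: compute
# a raw day-of-year 90th percentile from limited samples, then smooth it with a
# one-harmonic (annual cycle) regression. Data and model order are illustrative.
import numpy as np

rng = np.random.default_rng(0)
doy = np.arange(1, 366)
# Hypothetical daily temperature samples: 30 "years" per calendar day
clim_mean = 10 + 8 * np.sin(2 * np.pi * (doy - 100) / 365.25)
samples = clim_mean[:, None] + rng.normal(0, 3, size=(365, 30))

raw_p90 = np.percentile(samples, 90, axis=1)       # noisy empirical threshold

# Harmonic regression: p90(d) ~ a0 + a1*cos(w*d) + b1*sin(w*d)
w = 2 * np.pi / 365.25
X = np.column_stack([np.ones_like(doy, dtype=float), np.cos(w * doy), np.sin(w * doy)])
coef, *_ = np.linalg.lstsq(X, raw_p90, rcond=None)
smooth_p90 = X @ coef
print("raw vs fitted threshold on day 200:",
      round(raw_p90[199], 2), round(smooth_p90[199], 2))
```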
Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J
2016-07-01
To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m²): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73 m² (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)
2001-01-01
This research considers the application of robust Z1 estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z1 estimators based on multiplier theory and then develops a fixed-threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z1 estimation with a fixed threshold are demonstrated. FD that combines robust Z1 estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
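A compact way to see the empirical optimization described above is to simulate lognormal rain-rate snapshots and scan candidate thresholds for the one that maximizes the correlation between the area-average rain rate and the fractional coverage above the threshold. The distribution parameters below are illustrative, not the GATE values.

```python
# Sketch of the threshold-method optimization: pick the threshold tau that
# maximizes the correlation between the area-average rain rate and the
# fractional area coverage exceeding tau, across simulated snapshots.
import numpy as np

rng = np.random.default_rng(1)
n_snapshots, n_pixels = 200, 2000
# Snapshot-to-snapshot variation of the lognormal location parameter
mu = rng.normal(0.5, 0.4, n_snapshots)
sigma = 1.0
rain = np.exp(mu[:, None] + sigma * rng.standard_normal((n_snapshots, n_pixels)))

area_mean = rain.mean(axis=1)                      # first moment per snapshot
taus = np.linspace(0.5, 30.0, 60)                  # candidate thresholds (mm/h)
corrs = [np.corrcoef(area_mean, (rain > t).mean(axis=1))[0, 1] for t in taus]
best = taus[int(np.argmax(corrs))]
print(f"threshold maximizing correlation with the first moment: {best:.1f} mm/h")
```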
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine the validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. Validation study comparing actigraphy against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (-4.1 min); whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5 min). The standard error of the estimate was similar across all thresholds: total sleep time ∼25 min, sleep efficiency ∼4.5%, wake after sleep onset ∼21 min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
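The agreement statistics reported above (mean bias with 95% limits, correlation, standard error of the estimate) can be computed as in the sketch below, repeated for each software threshold; the night-level values are simulated stand-ins, not the study data.

```python
# Sketch: agreement between device-derived and PSG-derived total sleep time for
# three hypothetical software thresholds. All night-level values are simulated.
import numpy as np

rng = np.random.default_rng(5)
psg_tst = rng.normal(420, 40, 67)                        # minutes, 67 nights
device = {"Low":    psg_tst + rng.normal(15, 25, 67),
          "Medium": psg_tst + rng.normal(8, 25, 67),
          "High":   psg_tst + rng.normal(-20, 25, 67)}

for name, est in device.items():
    diff = est - psg_tst
    bias, sd = diff.mean(), diff.std(ddof=1)
    r = np.corrcoef(psg_tst, est)[0, 1]
    # standard error of the estimate from a simple linear fit of device on PSG
    fit = np.poly1d(np.polyfit(psg_tst, est, 1))
    see = np.sqrt(np.sum((est - fit(psg_tst)) ** 2) / (len(est) - 2))
    print(f"{name:6s}: bias {bias:6.1f} min (95% limits {bias - 1.96*sd:.1f} to "
          f"{bias + 1.96*sd:.1f}), r = {r:.2f}, SEE = {see:.1f} min")
```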
Calculating the dim light melatonin onset: the impact of threshold and sampling rate.
Molina, Thomas A; Burgess, Helen J
2011-10-01
The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
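Both DLMO definitions reduce to finding the first threshold crossing of the melatonin profile; the sketch below interpolates that crossing for a fixed 3 pg/mL threshold and for a "3k" threshold built from the first three daytime samples. The sample times and melatonin values are hypothetical.

```python
# Sketch of the two DLMO definitions: fixed 3 pg/mL threshold versus the "3k"
# threshold (mean + 2 SD of the first three low daytime points), with the onset
# taken as the linearly interpolated crossing time. Hypothetical profile data.
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold):
    """Return the first time the profile rises through `threshold`
    (linear interpolation between samples), or None if never crossed."""
    for i in range(1, len(times_h)):
        lo, hi = melatonin_pg_ml[i - 1], melatonin_pg_ml[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)
            return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
    return None

times = np.arange(18.0, 24.5, 0.5)        # half-hourly samples, 18:00 to 24:00
mel = np.array([0.8, 0.9, 1.1, 1.0, 1.3, 1.8, 2.6, 3.9, 5.5, 8.0, 11.0, 14.0, 17.0])
fixed = dlmo(times, mel, 3.0)
three_k = dlmo(times, mel, mel[:3].mean() + 2 * mel[:3].std(ddof=1))
print(f"DLMO (3 pg/mL): {fixed:.2f} h; DLMO (3k): {three_k:.2f} h")
```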
NASA Astrophysics Data System (ADS)
Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo
2017-06-01
Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.
Estimating sensitivity and specificity for technology assessment based on observer studies.
Nishikawa, Robert M; Pesce, Lorenzo L
2013-07-01
The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data from a question that directly asked the radiologists their biopsy recommendations, which was used as the "truth," because it is the actual recall decision, thus it is their subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates using thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
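The comparison described above can be mimicked by thresholding confidence ratings and checking the resulting sensitivity/specificity against the values implied by explicit recommendations; the sketch below uses simulated ratings, recommendations and truth labels purely for illustration.

```python
# Sketch: sensitivity/specificity from thresholded ROC-style ratings versus the
# values implied by explicit biopsy recommendations (used as the reference).
# Ratings, recommendations and truth labels are simulated.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, 200)                           # 1 = malignant
ratings = np.clip(truth * 2 + rng.normal(2.5, 1.2, 200), 1, 5)   # 1-5 rating scale
recommend = (ratings + rng.normal(0, 0.4, 200)) > 3.2     # explicit biopsy call

def sens_spec(call, truth):
    sens = (call & (truth == 1)).sum() / (truth == 1).sum()
    spec = (~call & (truth == 0)).sum() / (truth == 0).sum()
    return round(float(sens), 2), round(float(spec), 2)

print("from explicit recommendation:", sens_spec(recommend, truth))
for cut in (2.5, 3.0, 3.5, 4.0):                          # thresholds on the rating
    print(f"rating >= {cut}:", sens_spec(ratings >= cut, truth))
```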
Is "No-Threshold" a "Non-Concept"?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.
Ellner, Stephen P; Holmes, Elizabeth E
2008-08-01
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
A geographic analysis of population density thresholds in the influenza pandemic of 1918-19.
Chandra, Siddharth; Kassens-Noor, Eva; Kuljanin, Goran; Vertalka, Joshua
2013-02-20
Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918-19 in India, where over 15 million people died in the short span of less than one year. Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918-19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold.
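As a hedged illustration of the sample-splitting idea (the authors embed the split in a population growth model, which is not reproduced here), the sketch below grid-searches a density threshold that minimizes the pooled residual sum of squares around group-specific mean loss rates, using simulated district data.

```python
# Hedged sketch: grid-search a population-density threshold that best separates
# districts into low- and high-loss groups by minimizing the pooled residual
# sum of squares around the two group means. Simplified model; simulated data.
import numpy as np

rng = np.random.default_rng(3)
density = rng.uniform(20, 600, 199)                    # people per square mile
true_split = 175.0
loss_pct = np.where(density < true_split, 3.7, 4.7) + rng.normal(0, 0.6, 199)

def pooled_sse(threshold):
    low, high = loss_pct[density < threshold], loss_pct[density >= threshold]
    if len(low) < 10 or len(high) < 10:                # guard against tiny groups
        return np.inf
    return ((low - low.mean()) ** 2).sum() + ((high - high.mean()) ** 2).sum()

grid = np.linspace(50, 500, 451)
est = grid[int(np.argmin([pooled_sse(t) for t in grid]))]
print(f"estimated density threshold ≈ {est:.0f} people per square mile")
```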
An adaptive threshold detector and channel parameter estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Arabshahi, P.; Mukai, R.; Yan, T. -Y.
2001-01-01
This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
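The proposed operating point is where the ROC curve meets the descending diagonal, i.e. where sensitivity equals specificity; the sketch below finds that threshold by brute force over simulated scores and labels.

```python
# Sketch of the minimax operating point: choose the score threshold whose ROC
# point lies closest to the descending diagonal TPR = 1 - FPR (sensitivity
# equals specificity). Scores and labels are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 1, 500)])
labels = np.concatenate([np.zeros(500), np.ones(500)])

best_t, best_gap = None, np.inf
for t in np.unique(scores):
    pred = scores >= t
    tpr = pred[labels == 1].mean()
    fpr = pred[labels == 0].mean()
    gap = abs(tpr - (1 - fpr))           # distance from the descending diagonal
    if gap < best_gap:
        best_t, best_gap = t, gap
print(f"minimax threshold ≈ {best_t:.2f}  (sensitivity ≈ specificity)")
```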
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study intended to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared with the use of CR. Key Points: It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD, but data on the precision of those AOD measurements are rarely provided. We evaluated the effects of including those exercise intensities on AOD precision. The results indicated that including exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners; however, the use of sub-threshold regressions may induce an underestimation of AOD compared with the use of complete regressions. PMID:24501560
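The two regressions compared above can be reproduced schematically: fit VO2 against speed using either all submaximal stages or only the sub-threshold stages, extrapolate to the supramaximal speed to obtain TED, and take AOD as TED minus the accumulated measured VO2. The stage data, the assumed lactate-threshold speed and the supramaximal values below are all hypothetical.

```python
# Sketch: complete versus sub-threshold VO2/speed regression, extrapolated to a
# supramaximal speed to estimate TED and AOD. All numbers are hypothetical.
import numpy as np

speed = np.array([10, 11, 12, 13, 14, 15, 16, 17])        # km/h, submaximal stages
vo2 = np.array([33, 36, 40, 43, 47, 51, 55, 59])          # ml/kg/min at each stage
lt_speed = 14.0                                            # assumed lactate threshold

def ted_and_aod(x, y, supra_speed, duration_min, measured_vo2_ml_kg):
    slope, intercept = np.polyfit(x, y, 1)
    demand_rate = slope * supra_speed + intercept          # ml/kg/min at supra speed
    ted = demand_rate * duration_min                       # ml/kg over the run
    return ted, ted - measured_vo2_ml_kg                   # (TED, AOD)

supra_speed, duration, measured = 20.0, 2.5, 130.0         # hypothetical supramaximal run
for name, mask in [("complete", np.ones_like(speed, bool)),
                   ("sub-threshold", speed < lt_speed)]:
    ted, aod = ted_and_aod(speed[mask], vo2[mask], supra_speed, duration, measured)
    print(f"{name:13s} regression: TED = {ted:5.1f} ml/kg, AOD = {aod:4.1f} ml/kg")
```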
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Smith, Stephen W.
2007-01-01
Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study their effect on estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant-R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant-Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.
Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.
2014-01-01
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366
A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors
Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael
2015-01-01
The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
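To illustrate the ad hoc threshold calculation described above, the following sketch fits a simple linear regression of relative identification error against candidate distance thresholds and solves for the threshold expected to give a 5% error. It is not the authors' code, and the calibration arrays are hypothetical placeholders.

```python
import numpy as np

# Hypothetical calibration data: candidate distance thresholds and the
# relative identification error observed at each threshold in a library.
thresholds = np.array([0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
rel_error = np.array([0.012, 0.025, 0.041, 0.058, 0.071, 0.089])

# Simple linear regression: error ~ slope * threshold + intercept.
slope, intercept = np.polyfit(thresholds, rel_error, deg=1)

# Ad hoc threshold expected to yield a 5% relative identification error.
target_error = 0.05
ad_hoc_threshold = (target_error - intercept) / slope
print(f"ad hoc distance threshold for 5% error: {ad_hoc_threshold:.4f}")

# Queries whose distance to their best match exceeds this threshold would be
# discarded and passed to alternative/complementary identification methods.
```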
Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.
Gilmour, Heather
2016-11-16
Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear if GAD is uniquely associated with the risk of suicide, or if psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older. As well, the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. For people with threshold GAD, past 12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2%, respectively). In multivariate models that controlled for sociodemographic factors, the odds of past 12-month suicide ideation among people with either past 12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled for, associations between threshold and subthreshold GAD and suicidal ideation were attenuated, but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
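The two-step correction described above can be sketched as follows. This is a schematic reconstruction rather than the published code; the gain threshold and cluster-mass threshold values are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

def cluster_mass_threshold(strf, gain_thresh, mass_thresh):
    """Two-step STRF correction: keep only clusters of contiguous
    time-frequency bins whose summed absolute gain exceeds a cluster-mass
    threshold, after an initial pixel-wise gain threshold."""
    # Step 1: pixel-wise gain threshold (applied to absolute gain here).
    surviving = np.abs(strf) >= gain_thresh
    # Step 2: label contiguous clusters and test each cluster's summed gain.
    labels, n_clusters = ndimage.label(surviving)
    cleaned = np.zeros_like(strf)
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if np.abs(strf[cluster]).sum() >= mass_thresh:
            cleaned[cluster] = strf[cluster]
    return cleaned

# Example on a random stand-in STRF (frequency bins x time bins).
rng = np.random.default_rng(0)
sta = rng.normal(size=(32, 50))
strf_cleaned = cluster_mass_threshold(sta, gain_thresh=2.0, mass_thresh=10.0)
```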
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
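As a minimal illustration of the final step, extracting the threshold estimate as the root in [0, 1] of an integer-coefficient polynomial, the sketch below uses the exactly solved triangular-lattice bond polynomial rather than the (4, 6, 12) or (3^4, 6) polynomials derived in the paper.

```python
import numpy as np

# Triangular-lattice bond critical polynomial: p^3 - 3p + 1 = 0
# (an exactly solved case, used here only to illustrate the root-finding step).
coeffs = [1, 0, -3, 1]            # highest degree first, integer coefficients
roots = np.roots(coeffs)

# The bond-threshold estimate is the real root lying in [0, 1].
real_roots = roots[np.isreal(roots)].real
p_c = real_roots[(real_roots >= 0.0) & (real_roots <= 1.0)][0]
print(f"p_c (triangular lattice) = {p_c:.6f}")   # ~0.347296
```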
Dependence of cavitation, chemical effect, and mechanical effect thresholds on ultrasonic frequency.
Thanh Nguyen, Tam; Asakura, Yoshiyuki; Koda, Shinobu; Yasuda, Keiji
2017-11-01
Cavitation, chemical effect, and mechanical effect thresholds were investigated over a wide frequency range from 22 to 4880 kHz. Each threshold was measured in terms of sound pressure at the fundamental frequency. Broadband noise emitted from acoustic cavitation bubbles was detected by a hydrophone to determine the cavitation threshold. Potassium iodide oxidation caused by acoustic cavitation was used to quantify the chemical effect threshold. Ultrasonic erosion of aluminum foil was used to estimate the mechanical effect threshold. The cavitation, chemical effect, and mechanical effect thresholds increased with increasing frequency. The chemical effect threshold was close to the cavitation threshold at all frequencies. At low frequencies below 98 kHz, the mechanical effect threshold was nearly equal to the cavitation threshold. However, the mechanical effect threshold was much higher than the cavitation threshold at high frequency. In addition, the thresholds of the second harmonic and the first ultraharmonic signals were measured to detect bubble occurrence. The threshold of the second harmonic approximated the cavitation threshold below 1000 kHz. On the other hand, the threshold of the first ultraharmonic was higher than the cavitation threshold below 98 kHz and close to the cavitation threshold at high frequency.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane
2016-08-01
This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap, a frequentist method, and Markov Chain Monte Carlo Posterior Estimation and a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
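The simplest of these techniques, the Delta Method, can be sketched as follows for a logistic psychometric function. The fitted coefficients and their covariance matrix below are hypothetical stand-ins for values a statistics package would return, and the 50% threshold definition is only one possible choice.

```python
import numpy as np

# Hypothetical logistic fit of detection probability vs. signal level (dB):
# logit(p) = b0 + b1 * level. Numbers stand in for a fitting package's output.
b0, b1 = -6.0, 0.15
cov = np.array([[0.40, -0.009],        # estimated covariance of (b0, b1)
                [-0.009, 0.00025]])

# Threshold defined here as the level giving 50% detection: logit(0.5) = 0.
p_target = 0.5
logit_p = np.log(p_target / (1.0 - p_target))
threshold = (logit_p - b0) / b1

# Delta Method: Var(threshold) ~= g' C g, with g the gradient wrt (b0, b1).
g = np.array([-1.0 / b1, -(logit_p - b0) / b1**2])
se_threshold = np.sqrt(g @ cov @ g)
print(f"threshold = {threshold:.1f} dB, SE = {se_threshold:.2f} dB")
```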
England, John F.; Salas, José D.; Jarrett, Robert D.
2003-01-01
The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E
2015-03-01
Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
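The probabilistic classification step described above can be illustrated with a deliberately simplified stand-in for the MCMC fit: a conjugate Gamma-Poisson model of the per-set bycatch mortality rate, followed by a posterior predictive check against a management threshold. All numbers below are hypothetical, not the fishery's records.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observer data: deaths seen in the observed fraction of effort,
# and the total annual fleet effort to which the rate is extrapolated.
observed_deaths = 3
observed_sets = 2000           # observed fishing sets across several years
annual_sets = 900              # total fleet sets in the year of interest
threshold_deaths = 2           # e.g. an anticipated level of two deaths/year

# Conjugate stand-in for the Bayesian fit: Gamma posterior on the per-set
# mortality rate under a Poisson likelihood with a vague Gamma prior.
rate_draws = rng.gamma(shape=0.5 + observed_deaths,
                       scale=1.0 / observed_sets, size=200_000)

# Posterior predictive annual mortality and probability of exceedance.
annual_deaths = rng.poisson(rate_draws * annual_sets)
p_exceed = np.mean(annual_deaths > threshold_deaths)
print(f"P(annual deaths > {threshold_deaths}) = {p_exceed:.3f}")
```

The probability of exceedance computed this way could then be mapped onto low/medium/high bycatch-impact classes, in the spirit of the decision process the abstract describes.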
Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex
2016-01-01
Objective: Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design: The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods: MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results: All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in the scenarios considered. Conclusions: The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561
Reliability and validity of a short form household food security scale in a Caribbean community.
Gulliford, Martin C; Mahabir, Deepak; Rocke, Brian
2004-06-16
We evaluated the reliability and validity of the short form household food security scale in a different setting from the one in which it was developed. The scale was interview-administered to 531 subjects from 286 households in north central Trinidad in Trinidad and Tobago, West Indies. We evaluated the six items by fitting item response theory models to estimate item thresholds, estimating agreement among respondents in the same households and estimating the slope index of income-related inequality (SII) after adjusting for age, sex and ethnicity. Item-score correlations ranged from 0.52 to 0.79 and Cronbach's alpha was 0.87. Item responses gave within-household correlation coefficients ranging from 0.70 to 0.78. Estimated item thresholds (standard errors) from the Rasch model ranged from -2.027 (0.063) for the 'balanced meal' item to 2.251 (0.116) for the 'hungry' item. The 'balanced meal' item had the lowest threshold in each ethnic group even though there was evidence of differential functioning for this item by ethnicity. Relative thresholds of other items were generally consistent with US data. Estimation of the SII, comparing those at the bottom with those at the top of the income scale, gave relative odds for an affirmative response of 3.77 (95% confidence interval 1.40 to 10.2) for the lowest severity item, and 20.8 (2.67 to 162.5) for the highest severity item. Food insecurity was associated with reduced consumption of green vegetables after additionally adjusting for income and education (0.52, 0.28 to 0.96). The household food security scale gives reliable and valid responses in this setting. Differing relative item thresholds compared with US data do not require alteration to the cut-points for classification of 'food insecurity without hunger' or 'food insecurity with hunger'. The data provide further evidence that re-evaluation of the 'balanced meal' item is required.
Complex Variation in Measures of General Intelligence and Cognitive Change
Rowe, Suzanne J.; Rowlatt, Amy; Davies, Gail; Harris, Sarah E.; Porteous, David J.; Liewald, David C.; McNeill, Geraldine; Starr, John M.
2013-01-01
Combining information from multiple SNPs may capture a greater amount of genetic variation than from the sum of individual SNP effects and help identifying missing heritability. Regions may capture variation from multiple common variants of small effect, multiple rare variants or a combination of both. We describe regional heritability mapping of human cognition. Measures of crystallised (gc) and fluid intelligence (gf) in late adulthood (64–79 years) were available for 1806 individuals genotyped for 549,692 autosomal single nucleotide polymorphisms (SNPs). The same individuals were tested at age 11, affording us the rare opportunity to measure cognitive change across most of their lifespan. 547,750 SNPs ranked by position are divided into 10,908 overlapping regions of 101 SNPs to estimate the genetic variance each region explains, an approach that resembles classical linkage methods. We also estimate the genetic variation explained by individual autosomes and by SNPs within genes. Empirical significance thresholds are estimated separately for each trait from whole genome scans of 500 permuted data sets. The 5% significance threshold for the likelihood ratio test of a single region ranged from 17–17.5 for the three traits. This is equivalent to nominal significance under the expectation of a chi-squared distribution (between 1 df and 0) of P<1.44×10−5. These thresholds indicate that the distribution of the likelihood ratio test from this type of variance component analysis should be estimated empirically. Furthermore, we show that estimates of variation explained by these regions can be grossly overestimated. After applying permutation thresholds, a region for gf on chromosome 5 spanning the PRRC1 gene is significant at a genome-wide 10% empirical threshold. Analysis of gene methylation on the temporal cortex provides support for the association of PRRC1 and fluid intelligence (P = 0.004), and provides a prime candidate gene for high throughput sequencing of these uniquely informative cohorts. PMID:24349040
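The empirical threshold estimation described above can be sketched generically as taking, for each permuted data set, the maximum test statistic across all regions and then reading off a percentile of those maxima. The null statistics below are crude placeholders (a chi-squared draw, not the mixture distribution that applies to variance components), so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# For each of 500 permuted data sets, record the maximum likelihood-ratio
# test statistic across all regions (placeholder null statistics here).
n_permutations, n_regions = 500, 10_908
max_lrt_per_permutation = np.array([
    rng.chisquare(df=1, size=n_regions).max()
    for _ in range(n_permutations)
])

# The genome-wide 5% empirical threshold is the 95th percentile of the
# per-permutation maxima (the abstract reports values around 17-17.5).
threshold_5pct = np.percentile(max_lrt_per_permutation, 95)
print(f"empirical 5% genome-wide LRT threshold: {threshold_5pct:.2f}")
```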
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
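A schematic of the percent-density calculation described above is given below. It is not the authors' pipeline: the breast mask, per-slice thresholds, and synthetic volume are all hypothetical, and real preprocessing (background and pectoral-muscle exclusion, manual threshold selection) is assumed to have happened upstream.

```python
import numpy as np

def percent_density(volume, breast_mask, dense_threshold):
    """Estimate percent density (PD) as the percentage of within-breast voxels
    whose intensity exceeds a dense-tissue threshold. `volume` is a
    reconstructed DBT stack (slices x rows x cols), `breast_mask` excludes
    background and pectoral muscle, and `dense_threshold` may be a scalar or
    a per-slice array interpolated from manually thresholded slices."""
    thr = np.broadcast_to(
        np.asarray(dense_threshold, dtype=float).reshape(-1, 1, 1),
        volume.shape)
    breast_voxels = breast_mask.sum()
    dense_voxels = np.logical_and(breast_mask, volume >= thr).sum()
    return 100.0 * dense_voxels / breast_voxels

# Illustrative synthetic volume: 40 slices of 64x64 voxels.
rng = np.random.default_rng(3)
vol = rng.normal(loc=100.0, scale=20.0, size=(40, 64, 64))
mask = np.ones_like(vol, dtype=bool)
pd = percent_density(vol, mask, dense_threshold=np.full(40, 120.0))
print(f"PD = {pd:.1f}%")
```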
NASA Technical Reports Server (NTRS)
Temkin, A.
1984-01-01
Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kyung-Min; Min Kim, Chul; Moon Jeong, Tae, E-mail: jeongtm@gist.ac.kr
A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10^10 W/cm^2 to 2 × 10^15 W/cm^2 based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with the previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials, in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and an accounting for the inadequacy of a model or models.
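Using the definition of risk given above, a minimal sketch of the lifetime-risk relationship is shown below, under the simplifying (and here assumed) condition that accounting intervals exceed the attenuation threshold independently with a fixed probability.

```python
# Risk of one or more link failures over the link lifetime, assuming each
# accounting interval independently exceeds the attenuation threshold with
# probability p_exceed (an assumption; real intervals may be correlated).
def lifetime_risk(p_exceed: float, n_intervals: int) -> float:
    return 1.0 - (1.0 - p_exceed) ** n_intervals

# Example: a 0.1% chance per year of exceeding the threshold, 15-year lifetime.
print(f"risk = {lifetime_risk(0.001, 15):.4f}")   # ~0.0149
```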
Energy thresholds of discrete breathers in thermal equilibrium and relaxation processes.
Ming, Yi; Ling, Dong-Bo; Li, Hui-Min; Ding, Ze-Jun
2017-06-01
So far, only the energy thresholds of single discrete breathers in nonlinear Hamiltonian systems have been analytically obtained. In this work, the energy thresholds of discrete breathers in thermal equilibrium and the energy thresholds of long-lived discrete breathers which can remain after a long time relaxation are analytically estimated for nonlinear chains. These energy thresholds are size dependent. The energy thresholds of discrete breathers in thermal equilibrium are the same as the previous analytical results for single discrete breathers. The energy thresholds of long-lived discrete breathers in relaxation processes are different from the previous results for single discrete breathers but agree well with the published numerical results known to us. Because real systems are either in thermal equilibrium or in relaxation processes, the obtained results could be important for experimental detection of discrete breathers.
Mechanisms of breathing instability in patients with obstructive sleep apnea.
Younes, Magdy; Ostrowski, Michele; Atkar, Raj; Laprairie, John; Siemens, Andrea; Hanly, Patrick
2007-12-01
The response to chemical stimuli (chemical responsiveness) and the increases in respiratory drive required for arousal (arousal threshold) and for opening the airway without arousal (effective recruitment threshold) are important determinants of ventilatory instability and, hence, severity of obstructive apnea. We measured these variables in 21 obstructive apnea patients (apnea-hypopnea index 91 +/- 24 h^-1) while on continuous positive airway pressure. During sleep, pressure was intermittently reduced (dial-down) to induce severe hypopneas. Dial-downs were done on room air and following approximately 30 s of breathing hypercapneic and/or hypoxic mixtures, which induced a range of ventilatory stimulation before dial-down. Ventilation just before dial-down and flow during dial-down were measured. Chemical responsiveness, estimated as the percent increase in ventilation during the 5th breath following administration of 6% CO2 combined with approximately 4% desaturation, was large (187 +/- 117%). Arousal threshold, estimated as the percent increase in ventilation associated with a 50% probability of arousal, ranged from 40% to >268% and was <120% in 12/21 patients, indicating that in many patients arousal occurs with modest changes in chemical drive. Effective recruitment threshold, estimated as the percent increase in pre-dial-down ventilation associated with a significant increase in dial-down flow, ranged from zero to >174% and was <110% in 12/21 patients, indicating that in many patients reflex dilatation occurs with modest increases in drive. The two thresholds were not correlated. In most OSA patients, airway patency may be maintained with only modest increases in chemical drive, but instability results because of a low arousal threshold and a brisk increase in drive following brief reduction in alveolar ventilation.
Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross
2012-01-01
Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI approach uses a threshold value to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes when using a pixel size of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
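The basic NDSI thresholding step discussed above can be sketched as follows. The band choice (a green and a shortwave-infrared reflectance band, as in the usual NDSI definition) and the reflectance values are assumptions for illustration, and the locally calibrated threshold passed in the example is arbitrary.

```python
import numpy as np

def ndsi_snow_map(green, swir, threshold=0.4):
    """Classify snow-covered pixels from reflectance bands using the
    normalized-difference snow index. The 0.4 default is the standard
    threshold discussed above; a locally calibrated (possibly seasonally
    varying) value can be passed instead."""
    ndsi = (green - swir) / (green + swir + 1e-12)   # avoid divide-by-zero
    return ndsi >= threshold

# Illustrative reflectance arrays (hypothetical values).
rng = np.random.default_rng(7)
green = rng.uniform(0.05, 0.9, size=(100, 100))
swir = rng.uniform(0.05, 0.9, size=(100, 100))
snow = ndsi_snow_map(green, swir, threshold=0.55)    # locally calibrated value
print(f"snow-covered area fraction: {snow.mean():.2%}")
```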
Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.
2013-01-01
Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479
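One generic way to compute a resilience indicator from high-frequency sensor data, as discussed above, is a rolling-window statistic such as lag-1 autocorrelation; rising values can precede a threshold. This sketch is a generic illustration on synthetic data, not the indicator set or data used in the study.

```python
import numpy as np

def rolling_lag1_autocorr(x, window):
    """Rolling-window lag-1 autocorrelation of a sensor time series; a common
    early-warning indicator of declining resilience near a threshold."""
    out = np.full(len(x), np.nan)
    for i in range(window, len(x) + 1):
        seg = x[i - window:i]
        out[i - 1] = np.corrcoef(seg[:-1], seg[1:])[0, 1]
    return out

# Illustrative daily dissolved-oxygen series (synthetic, not the study data).
rng = np.random.default_rng(11)
do_series = np.cumsum(rng.normal(0.0, 0.1, size=400)) + 8.0
indicator = rolling_lag1_autocorr(do_series, window=28)
```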
Reasoning in psychosis: risky but not necessarily hasty.
Moritz, Steffen; Scheu, Florian; Andreou, Christina; Pfueller, Ute; Weisbrod, Matthias; Roesch-Ely, Daniela
2016-01-01
A liberal acceptance (LA) threshold for hypotheses has been put forward to explain the well-replicated "jumping to conclusions" (JTC) bias in psychosis, particularly in patients with paranoid symptoms. According to this account, schizophrenia patients rest their decisions on lower subjective probability estimates. The initial formulation of the LA account also predicts an absence of the JTC bias under high task ambiguity (i.e., if more than one response option surpasses the subjective acceptance threshold). Schizophrenia patients (n = 62) with current or former delusions and healthy controls (n = 30) were compared on six scenarios of a variant of the beads task paradigm. Decision-making was assessed under low and high task ambiguity. Along with decision judgments (optional), participants were required to provide probability estimates for each option in order to determine decision thresholds (i.e., the probability the individual deems sufficient for a decision). In line with the LA account, schizophrenia patients showed a lowered decision threshold compared to controls (82% vs. 93%) which predicted both more errors and less draws to decisions. Group differences on thresholds were comparable across conditions. At the same time, patients did not show hasty decision-making, reflecting overall lowered probability estimates in patients. Results confirm core predictions derived from the LA account. Our results may (partly) explain why hasty decision-making is sometimes aggravated and sometimes abolished in psychosis. The proneness to make risky decisions may contribute to the pathogenesis of psychosis. A revised LA account is put forward.
Holzgreve, Adrien; Brendel, Matthias; Gu, Song; Carlsen, Janette; Mille, Erik; Böning, Guido; Mastrella, Giorgia; Unterrainer, Marcus; Gildehaus, Franz J; Rominger, Axel; Bartenstein, Peter; Kälin, Roland E; Glass, Rainer; Albert, Nathalie L
2016-01-01
Noninvasive tumor growth monitoring is of particular interest for the evaluation of experimental glioma therapies. This study investigates the potential of positron emission tomography (PET) using O-(2-(18)F-fluoroethyl)-L-tyrosine ([(18)F]-FET) to determine tumor growth in a murine glioblastoma (GBM) model, including estimation of the biological tumor volume (BTV), which has hitherto not been investigated in the pre-clinical context. Fifteen GBM-bearing mice (GL261) and six control mice (shams) were investigated during 5 weeks by PET followed by autoradiographic and histological assessments. [(18)F]-FET PET was quantitated by calculation of maximum and mean standardized uptake values within a universal volume-of-interest (VOI) corrected for healthy background (SUVmax/BG, SUVmean/BG). A partial volume effect correction (PVEC) was applied in comparison to ex vivo autoradiography. BTVs obtained by predefined thresholds for VOI definition (SUV/BG: ≥1.4; ≥1.6; ≥1.8; ≥2.0) were compared to the histologically assessed tumor volume (n = 8). Finally, individual "optimal" thresholds for BTV definition best reflecting the histology were determined. In GBM mice SUVmax/BG and SUVmean/BG clearly increased with time, albeit with high inter-animal variability. No relevant [(18)F]-FET uptake was observed in shams. PVEC recovered signal loss of SUVmean/BG assessment in relation to autoradiography. BTV as estimated by predefined thresholds strongly differed from the histology volume. Strikingly, the individual "optimal" thresholds for BTV assessment correlated highly with SUVmax/BG (ρ = 0.97, p < 0.001), allowing SUVmax/BG-based calculation of individual thresholds. The method was verified by a subsequent validation study (n = 15, ρ = 0.88, p < 0.01), leading to substantially higher agreement of BTV estimations when compared to histology in contrast to predefined thresholds. [(18)F]-FET PET with standard SUV measurements is feasible for glioma imaging in the GBM mouse model. PVEC is beneficial to improve accuracy of [(18)F]-FET PET SUV quantification. Although SUVmax/BG and SUVmean/BG increase during the disease course, these parameters do not correlate with the respective tumor size. For the first time, we propose a histology-verified method allowing appropriate individual BTV estimation for volumetric in vivo monitoring of tumor growth with [(18)F]-FET PET and show that standardized thresholds from routine clinical practice seem to be inappropriate for BTV estimation in the GBM mouse model.
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive excess kurtosis, and negative excess kurtosis, the standard skew normal, Pearson, and raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still faces an unsolved and pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
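For readers unfamiliar with recurrence plots, the sketch below shows how a binary recurrence matrix depends on the distance threshold being selected. It is a bare-bones illustration on a scalar series (state-space embedding is omitted), and it does not reproduce the paper's Markov-model utility criterion; it only shows the quantity whose threshold that criterion optimizes.

```python
import numpy as np

def recurrence_matrix(series, threshold):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| <= threshold
    (absolute difference on a scalar series; no embedding here)."""
    d = np.abs(series[:, None] - series[None, :])
    return (d <= threshold).astype(int)

def recurrence_rate(series, threshold):
    return recurrence_matrix(series, threshold).mean()

# Example: scan candidate thresholds; the paper's criterion would pick the one
# maximizing a utility over the induced recurrence domains (not shown here).
t = np.linspace(0.0, 20.0, 500)
x = np.sin(t) + 0.1 * np.random.default_rng(5).normal(size=t.size)
for eps in (0.05, 0.1, 0.2):
    print(f"threshold {eps}: recurrence rate {recurrence_rate(x, eps):.3f}")
```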
Novel application of windowed beamforming function imaging for FLGPR
NASA Astrophysics Data System (ADS)
Xique, Ismael J.; Burns, Joseph W.; Thelen, Brian J.; LaRose, Ryan M.
2018-04-01
Backprojection of cross-correlated array data, using algorithms such as coherent interferometric imaging (Borcea, et al., 2006), has been advanced as a method to improve the statistical stability of images of targets in an inhomogeneous medium. Recently, the Windowed Beamforming Energy (WBE) function algorithm has been introduced as a functionally equivalent approach, which is significantly less computationally burdensome (Borcea, et al., 2011). WBE produces similar results through the use of a quadratic function summing signals after beamforming in transmission and reception, and windowing in the time domain. We investigate the application of WBE to improve the detection of buried targets with forward looking ground penetrating MIMO radar (FLGPR) data. The formulation of WBE as well the software implementation of WBE for the FLGPR data collection will be discussed. WBE imaging results are compared to standard backprojection and Coherence Factor imaging. Additionally, the effectiveness of WBE on field-collected data is demonstrated qualitatively through images and quantitatively through the use of a CFAR statistic on buried targets of a variety of contrast levels.
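Since detection performance above is quantified with a CFAR statistic, a generic one-dimensional cell-averaging CFAR sketch is given below for orientation. It is a textbook construction, not the FLGPR processing chain, and the window sizes, false-alarm rate, and synthetic data are arbitrary.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=4, pfa=1e-3):
    """Generic 1-D cell-averaging CFAR: compare each cell with the mean power
    of n_train surrounding training cells (n_guard guard cells per side),
    scaled to hold an approximately constant false-alarm rate."""
    n = len(power)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # CA-CFAR scale factor
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2 + n_guard
    for i in range(half, n - half):
        train = np.r_[power[i - half:i - n_guard],
                      power[i + n_guard + 1:i + half + 1]]
        detections[i] = power[i] > alpha * train.mean()
    return detections

# Example: exponentially distributed noise power with two point targets.
rng = np.random.default_rng(9)
p = rng.exponential(scale=1.0, size=512)
p[[120, 300]] += 25.0
hits = np.flatnonzero(ca_cfar(p))
print("detections at cells:", hits)
```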
Psychophysical estimation of speed discrimination. II. Aging effects
NASA Astrophysics Data System (ADS)
Raghuram, Aparna; Lakshminarayanan, Vasudevan; Khanna, Ritu
2005-10-01
We studied the effects of aging on a speed discrimination task using a pair of first-order drifting luminance gratings. Two reference speeds of 2 and 8 deg/s were presented at stimulus durations of 500 ms and 1000 ms. The choice of stimulus parameters, etc., was determined in preliminary experiments and described in Part I. Thresholds were estimated using a two-alternative-forced-choice staircase methodology. Data were collected from 16 younger subjects (mean age 24 years) and 17 older subjects (mean age 71 years). Results showed that thresholds for speed discrimination were higher for the older age group. This was especially true at stimulus duration of 500 ms for both slower and faster speeds. This could be attributed to differences in temporal integration of speed with age. Visual acuity and contrast sensitivity were not statistically observed to mediate age differences in the speed discrimination thresholds. Gender differences were observed in the older age group, with older women having higher thresholds.
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
Threshold corrections to the bottom quark mass revisited
Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart
2015-05-19
Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made to the estimation of the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.
Uncertainties in extreme surge level estimates from observational records.
van den Brink, H W; Können, G P; Opsteegh, J D
2005-06-15
Ensemble simulations with a total length of 7540 years are generated with a climate model and coupled to a simple surge model to transform the wind field over the North Sea into the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10^4 years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak-over-threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold, and its tendency towards too low estimates, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the chosen exceedance rate, lambda, should not be larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10^4-year wind speed from such records must always be interpreted as a lower limit.
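The recommended block-maxima approach can be sketched as follows: fit a GEV distribution to storm-season maxima and read off the 10^4-year return level. The synthetic data and GEV parameters below are hypothetical stand-ins, not the Delfzijl record.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(13)

# Synthetic stand-in for 116 storm-season maximum skew surges (metres).
season_maxima = genextreme.rvs(c=-0.1, loc=2.0, scale=0.5,
                               size=116, random_state=rng)

# Fit the GEV distribution to the seasonal maxima (the approach favoured above).
c_hat, loc_hat, scale_hat = genextreme.fit(season_maxima)

# 10^4-year return level: the level exceeded with probability 1e-4 per season.
return_level = genextreme.isf(1e-4, c_hat, loc=loc_hat, scale=scale_hat)
print(f"estimated 10^4-year surge level: {return_level:.2f} m")
```

As the abstract cautions, such an extrapolation from a 116-year record is a lower bound whenever a second population of very intense storms cannot be excluded.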
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Estimating soil moisture exceedance probability from antecedent rainfall
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.
2016-12-01
The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
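A minimal sketch of the two ingredients described above, under invented data: nearest-neighbour conditional resampling to fill gaps in an hourly gauge record using a nearby donor gauge, followed by a crude empirical estimate of the probability that VWC exceeds a threshold. The gauge series, the VWC response, and all thresholds are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def impute(target, donor, k=10):
    """Fill NaNs in target by resampling observed target values from the k
    hours with the most similar donor-gauge rainfall."""
    filled = target.copy()
    obs = np.flatnonzero(~np.isnan(target))
    for i in np.flatnonzero(np.isnan(target)):
        nearest = obs[np.argsort(np.abs(donor[obs] - donor[i]))[:k]]
        filled[i] = rng.choice(filled[nearest])     # conditional resample
    return filled

donor = rng.gamma(0.3, 2.0, size=2000)                 # hourly rain at donor gauge (mm)
target = donor * rng.uniform(0.6, 1.4, size=2000)      # correlated target gauge
target[rng.choice(2000, 200, replace=False)] = np.nan  # 10% missing
filled = impute(target, donor)

# Toy exceedance estimate: fraction of hours where an invented co-located VWC
# series exceeds its threshold, given antecedent (cumulative) rainfall > 100 mm.
antecedent = np.cumsum(filled)
vwc = np.clip(0.1 + antecedent / antecedent[-1] * 0.3, 0, 0.4)
print("P(VWC > 0.3 | antecedent > 100 mm) =",
      np.mean(vwc[antecedent > 100] > 0.3))
```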
Denoising forced-choice detection data.
García-Pérez, Miguel A
2010-02-01
Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between gross income and pest control cost functions.
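The profit-maximizing idea can be made concrete with a small numerical sketch: write gross income and pest-control cost as functions of the economic threshold and take the threshold that maximizes their difference. The functional forms and coefficients below are invented purely for illustration.

```python
import numpy as np

thresholds = np.linspace(0.5, 10.0, 200)   # pests per plant tolerated before control
price = 300.0                              # $ per tonne of product

yield_t = 8.0 - 0.35 * thresholds          # looser threshold -> lower yield (t/ha)
control_cost = 120.0 / thresholds          # looser threshold -> fewer sprays ($/ha)

profit = price * yield_t - control_cost
best = thresholds[np.argmax(profit)]
print(f"Profit-maximising economic threshold: {best:.2f} pests/plant")
```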
Temporal resolution in children.
Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D
1989-06-01
The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
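A hedged sketch of the mixture-model step: fit a two-component Gaussian mixture to fork lengths and take the length at which posterior cohort membership flips as the age-discriminating threshold. The simulated lengths and component parameters are assumptions, not the study's coho data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
age0 = rng.normal(55, 6, 300)     # mm, hypothetical age-0 cohort
age1 = rng.normal(95, 9, 200)     # mm, hypothetical age-1 cohort
lengths = np.concatenate([age0, age1]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths)

# Threshold: the fork length where the posterior membership probabilities cross.
grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
post = gmm.predict_proba(grid)
lo, hi = np.argsort(gmm.means_.ravel())
flip = grid[np.argmin(np.abs(post[:, lo] - post[:, hi]))][0]
print(f"Estimated age-discriminating fork length: {flip:.1f} mm")
```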
Giblin, Shawn M.; Houser, Jeffrey N.; Sullivan, John F.; Langrehr, H.A.; Rogala, James T.; Campbell, Benjamin D.
2014-01-01
Duckweed and other free-floating plants (FFP) can form dense surface mats that affect ecosystem condition and processes, and can impair public use of aquatic resources. FFP obtain their nutrients from the water column, and the formation of dense FFP mats can be a consequence and indicator of river eutrophication. We conducted two complementary surveys of diverse aquatic areas of the Upper Mississippi River as an in situ approach for estimating thresholds in the response of FFP abundance to nutrient concentration and physical conditions in a large, floodplain river. Local regression analysis was used to estimate thresholds in the relations between FFP abundance and phosphorus (P) concentration (0.167 mg l−1), nitrogen (N) concentration (0.808 mg l−1), water velocity (0.095 m s−1), and aquatic macrophyte abundance (65% cover). FFP tissue concentrations suggested P limitation was more likely in spring, N limitation was more likely in late summer, and N limitation was most likely in backwaters with minimal hydraulic connection to the channel. The thresholds estimated here, along with observed patterns in nutrient limitation, provide river scientists and managers with criteria to consider when attempting to modify FFP abundance in off-channel areas of large river systems.
Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology.
Insel, Philip S; Mattsson, Niklas; Mackin, R Scott; Schöll, Michael; Nosheny, Rachel L; Tosun, Duygu; Donohue, Michael C; Aisen, Paul S; Jagust, William J; Weiner, Michael W
2016-05-17
To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. © 2016 American Academy of Neurology.
Modeling of digital mammograms using bicubic spline functions and additive noise
NASA Astrophysics Data System (ADS)
Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne
1998-09-01
The purpose of our work is the detection of microcalcifications on digital mammograms. In order to do so, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and an additive noise or texture. We also introduce a robust estimation method in order to overcome the bias introduced by the microcalcifications. After the estimation, we consider the subtraction image values as noise. If the noise is not correlated, we fit its probability distribution using Pearson's system of densities. This allows us to threshold the subtraction images accurately and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least squares method. We then consider non-overlapping windows on the residue image. In each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is considerably more time consuming than the automatic thresholding that assumes uncorrelated noise, and it does not lead to significantly better results. In conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on Pearson's system performs quite well on most of our images.
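The trend-plus-noise idea can be sketched as follows: fit a smooth bicubic spline surface to the image, subtract it, and flag pixels whose residual exceeds a data-driven threshold. The synthetic image, the smoothing factor, and the simple quantile threshold (standing in for the Pearson-system fit) are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)
ny, nx = 128, 128
y, x = np.arange(ny), np.arange(nx)
background = 100 + 0.2 * np.add.outer(y, x)    # slowly varying surface trend
image = background + rng.normal(0, 2, (ny, nx))
image[40:43, 60:63] += 25                      # a bright "microcalcification"

# Smoothed bicubic spline as the surface trend (s is the smoothing budget).
spline = RectBivariateSpline(y, x, image, kx=3, ky=3, s=ny * nx * 4.0)
trend = spline(y, x)

residual = image - trend
thresh = np.quantile(residual, 0.999)          # stand-in for the Pearson-system threshold
detections = np.argwhere(residual > thresh)
print(f"{len(detections)} pixels flagged above the residual threshold")
```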
Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent
2005-01-01
Background The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, by using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time by which the survival data of patients dying from the specific cancer are almost entirely covered, leaving less than 2.25% uncovered. This is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed the lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of cancer patients who died of their disease from 42 cancer sites out of 49 sites were verified to follow different lognormal distributions. The threshold years validated for statistical cure varied for different cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites were found to match the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion The present study suggests that a certain threshold year of follow-up is required before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, which highlights the need for long-term follow-up of these patients. PMID:15904508
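The threshold-year definition translates into a one-line calculation: if the survival times of patients who die of their cancer are lognormal, the follow-up time that leaves less than 2.25% of that distribution uncovered is its 97.75th percentile. The parameters below are invented, not SEER estimates.

```python
import numpy as np
from scipy import stats

mu, sigma = np.log(3.0), 0.8   # hypothetical lognormal of survival times (years)
threshold_year = stats.lognorm.ppf(0.9775, s=sigma, scale=np.exp(mu))
print(f"Threshold year: {threshold_year:.1f} years of follow-up")
```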
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
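A hedged sketch of the locally weighted idea, under assumed data: fit a logistic regression in which each observation is weighted by a kernel centred (in fitted probability) at the classification threshold, alternating between weighting and refitting. The data, bandwidth, threshold, and the use of scikit-learn are illustrative; the paper itself solves weighted score equations directly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, p_true)

threshold, bandwidth = 0.3, 0.15
weights = np.ones(len(y))
for _ in range(5):                       # alternate kernel weighting and refitting
    model = LogisticRegression().fit(X, y, sample_weight=weights)
    p_hat = model.predict_proba(X)[:, 1]
    weights = np.exp(-0.5 * ((p_hat - threshold) / bandwidth) ** 2)

print("Coefficients emphasising the region near the threshold:", model.coef_)
```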
Dotan, Raffy
2012-06-01
The multisession maximal lactate steady-state (MLSS) test is the gold standard for anaerobic threshold (AnT) estimation. However, it is highly impractical, requires high fitness level, and suffers additional shortcomings. Existing single-session AnT-estimating tests are of compromised validity, reliability, and resolution. The presented reverse lactate threshold test (RLT) is a single-session, AnT-estimating test, aimed at avoiding the pitfalls of existing tests. It is based on the novel concept of identifying blood lactate's maximal appearance-disappearance equilibrium by approaching the AnT from higher, rather than from lower exercise intensities. Rowing, cycling, and running case data (4 recreational and competitive athletes, male and female, aged 17-39 y) are presented. Subjects performed the RLT test and, on a separate session, a single 30-min MLSS-type verification test at the RLT-determined intensity. The RLT and its MLSS verification exhibited exceptional agreement at 0.5% discrepancy or better. The RLT's training sensitivity was demonstrated by a case of 2.5-mo training regimen following which the RLT's 15-W improvement was fully MLSS-verified. The RLT's test-retest reliability was examined in 10 trained and untrained subjects. Test 2 differed from test 1 by only 0.3% with an intraclass correlation of 0.997. The data suggest RLT to accurately and reliably estimate AnT (as represented by MLSS verification) with high resolution and in distinctly different sports and to be sensitive to training adaptations. Compared with MLSS, the single-session RLT is highly practical and its lower fitness requirements make it applicable to athletes and untrained individuals alike. Further research is needed to establish RLT's validity and accuracy in larger samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, Yolanda D., E-mail: ydtseng@partners.org; Krishnan, Monica S.; Sullivan, Adam J.
2013-11-01
Purpose: We surveyed how radiation oncologists think about and incorporate a palliative cancer patient’s life expectancy (LE) into their treatment recommendations. Methods and Materials: A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model. Results: Among 92 respondents (81%), the majority were male (62%), from an academic practice (75%), and an attending physician (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%-97%) had LE thresholds that would alter treatment recommendations. Overall, physicians’ LE estimates were 22% accurate with 67% over the range predicted by the prognostic model. Conclusions: Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximal likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that model threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
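The shrinkage idea can be illustrated with a simple empirical-Bayes analogue: shrink each per-variant maximum-likelihood estimate toward the regional mean by a factor determined by the estimated between-variant variance. The data and variance components below are invented, and the full hierarchical Bayesian machinery of MEA is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
true_effects = np.zeros(20)                        # a region of null effects
se = 0.2                                           # standard error of each MLE
mle = true_effects + rng.normal(0, se, size=20)    # noisy per-variant estimates

regional_mean = mle.mean()
tau2 = max(mle.var(ddof=1) - se**2, 1e-6)          # between-variant variance estimate
shrinkage = tau2 / (tau2 + se**2)                  # weight kept on the individual MLE

mea = regional_mean + shrinkage * (mle - regional_mean)
print("Mean squared error, MLE vs shrunken estimates:",
      np.mean((mle - true_effects) ** 2), np.mean((mea - true_effects) ** 2))
```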
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
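For intuition, the sketch below implements a single-channel version of the basic statistic: slide a template over continuous data and declare a detection wherever the normalised cross-correlation exceeds a threshold; a multichannel detector would stack this statistic over array channels. The waveforms and the 0.6 threshold are synthetic choices, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
data = rng.normal(0, 0.5, 5000)
data[2000:2100] += 2.0 * template        # a buried copy of the template

def normalised_xcorr(data, template):
    """Pearson correlation between the template and every data window."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12))
    return out

cc = normalised_xcorr(data, template)
detections = np.flatnonzero(cc > 0.6)    # illustrative detection threshold
print("Detections near sample:", detections[:5])
```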
A simple method to estimate threshold friction velocity of wind erosion in the field
USDA-ARS?s Scientific Manuscript database
Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...
Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko
2018-05-31
The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as default in absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates, for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones already when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
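The core Bayesian calculation is short: with a Gamma prior on the death rate and Poisson-distributed deaths, the posterior is again Gamma, and the probability that the CDR exceeds the emergency threshold is a posterior tail probability. The prior, counts, and exposure below are invented, and the cluster-level hierarchy of the full model is omitted.

```python
from scipy import stats

a0, b0 = 0.5, 1000.0          # weakly informative Gamma prior (shape, rate)
deaths = 23                    # deaths observed over the survey recall period
person_days = 48000.0          # person-time at risk across all clusters

a_post, b_post = a0 + deaths, b0 + person_days
threshold = 1.0 / 10000.0      # emergency threshold: 1 death / 10,000 / day

p_exceed = stats.gamma.sf(threshold, a_post, scale=1.0 / b_post)
print(f"P(CDR > 1/10,000/day | data) = {p_exceed:.3f}")
```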
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most updated global thresholds presented by F. Guzzetti and co-workers in 2008 were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α × D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and behaviours of pore-pressure response, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine's function threshold only when that reference value is exceeded. The approach of other authors has been to calculate threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1 × An^α2) × D^β, where I and D are defined as previously. The expression in parentheses is equivalent to Caine's α parameter. α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are actually assigned either an nL or yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a certain value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for the whole set of ni values. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopt,i (the global minimum or maximum of the parameter). (8) Search for the optimum value of ni in the space ni vs. EMPopt,i. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
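A hedged sketch of the grid-search procedure, using synthetic storms: for a given antecedent window, scan a grid of (α1, α2, β), classify each storm against I = (α1 × An^α2) × D^β, and keep the grid point with the best evaluation metric (plain accuracy here). All storm data, grids, and the synthetic "truth" rule are assumptions for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
n_storms = 200
D = rng.uniform(1, 48, n_storms)                   # storm duration (h)
I = rng.lognormal(1.0, 0.6, n_storms)              # mean intensity (mm/h)
An = rng.gamma(2.0, 30.0, n_storms)                # antecedent n-day rainfall (mm)
triggered = (I > 40 * An**-0.2 * D**-0.5).astype(int)   # synthetic "truth"

best = (None, -np.inf)
for a1, a2, b in product(np.linspace(10, 80, 15),
                         np.linspace(-0.5, 0.0, 11),
                         np.linspace(-1.0, -0.1, 10)):
    predicted = (I > a1 * An**a2 * D**b).astype(int)
    accuracy = np.mean(predicted == triggered)      # the EMP used in this sketch
    if accuracy > best[1]:
        best = ((a1, a2, b), accuracy)

print("Best (alpha1, alpha2, beta):", best[0], "accuracy:", round(best[1], 3))
```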
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.
2015-06-01
Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond (Prunus dulcis), pistachio (Pistacia vera), and walnut (Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32% higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.
Measurement of visual contrast sensitivity
NASA Astrophysics Data System (ADS)
Vongierke, H. E.; Marko, A. R.
1985-04-01
This invention involves measurement of the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial frequency pattern on a cathode ray tube whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The threshold of detection of the pattern modulation is found by the subject adjusting the contrast to values that vary about the subject's threshold, thereby determining the threshold and also providing, from the magnitude of the contrast fluctuations between reversals, an estimate of the variability of the subject's absolute threshold. The invention also involves slow automatic sweeping of the spatial frequency of the pattern after preset time intervals or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings, i.e., contrast reversals.
Threshold altitude resulting in decompression sickness
NASA Technical Reports Server (NTRS)
Kumar, K. V.; Waligora, James M.; Calkins, Dick S.
1990-01-01
A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis on the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a no-prebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m, while under a no-prebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.
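The logistic-regression step can be sketched directly: fit incidence against altitude and invert the fitted curve to find the altitude at which the predicted DCS probability reaches a chosen level. The incidence table and the 5% risk level below are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

altitudes_km = np.array([3.0, 4.5, 6.0, 7.0, 7.6, 8.5, 9.1])
exposed      = np.array([120, 110,  95,  80,  60,  55,  40])
cases        = np.array([  0,   2,   6,  12,  15,  22,  21])

# Expand to one row per exposed subject for a simple unweighted fit.
X = np.repeat(altitudes_km, exposed).reshape(-1, 1)
y = np.concatenate([np.r_[np.ones(c), np.zeros(n - c)] for c, n in zip(cases, exposed)])

model = LogisticRegression().fit(X, y)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

p_threshold = 0.05  # a 5% incidence taken here to define the "threshold" altitude
altitude_km = (np.log(p_threshold / (1 - p_threshold)) - b0) / b1
print(f"Estimated threshold altitude for {p_threshold:.0%} DCS risk: {altitude_km * 1000:.0f} m")
```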
MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples
DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
Gas Composition Sensing Using Carbon Nanotube Arrays
NASA Technical Reports Server (NTRS)
Li, Jing; Meyyappan, Meyya
2012-01-01
This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.
NASA Astrophysics Data System (ADS)
Bitenc, M.; Kieffer, D. S.; Khoshelham, K.
2015-08-01
The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provides a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates considering (i) wavelet transform (SWT or DWT), (ii) thresholding method (fixed-form or penalised low) and (iii) thresholding mode (soft or hard). The performance of denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which are for the second analysis corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
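A minimal sketch of one of the compared variants, DWT with a fixed-form (universal) threshold and soft thresholding, assuming the PyWavelets package and a synthetic 1-D signal rather than a TLS point cloud.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
x = np.linspace(0, 4 * np.pi, 1024)
clean = np.sin(x) + 0.3 * np.sin(7 * x)            # stand-in for a roughness profile
noisy = clean + rng.normal(0, 0.2, x.size)         # additive range noise

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from the finest scale
uthresh = sigma * np.sqrt(2 * np.log(noisy.size))  # fixed-form (universal) threshold

denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, uthresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

print("RMS error before/after denoising:",
      np.sqrt(np.mean((noisy - clean) ** 2)),
      np.sqrt(np.mean((denoised[:clean.size] - clean) ** 2)))
```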
At what costs will screening with CT colonography be competitive? A cost-effectiveness approach.
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Zauber, Ann G; Boer, Rob; Wilschut, Janneke; Habbema, J Dik F
2009-03-01
The costs of computed tomographic colonography (CTC) are not yet established for screening use. In our study, we estimated the threshold costs for which CTC screening would be a cost-effective alternative to colonoscopy for colorectal cancer (CRC) screening in the general population. We used the MISCAN-colon microsimulation model to estimate the costs and life-years gained of screening persons aged 50-80 years for 4 screening strategies: (i) optical colonoscopy; and CTC with referral to optical colonoscopy of (ii) any suspected polyp; (iii) a suspected polyp ≥6 mm and (iv) a suspected polyp ≥10 mm. For each of the 4 strategies, screen intervals of 5, 10, 15 and 20 years were considered. Subsequently, for each CTC strategy and interval, the threshold costs of CTC were calculated. We performed a sensitivity analysis to assess the effect of uncertain model parameters on the threshold costs. With equal costs ($662), optical colonoscopy dominated CTC screening. For CTC to gain similar life-years as colonoscopy screening every 10 years, it should be offered every 5 years with referral of polyps ≥6 mm. For this strategy to be as cost-effective as colonoscopy screening, the costs must not exceed $285 or 43% of colonoscopy costs (range in sensitivity analysis: 39-47%). With 25% higher adherence than colonoscopy, CTC threshold costs could be 71% of colonoscopy costs. Our estimate of 43% is considerably lower than previous estimates in literature, because previous studies only compared CTC screening to 10-yearly colonoscopy, where we compared to different intervals of colonoscopy screening.
Schousboe, J T; Gourlay, M; Fink, H A; Taylor, B C; Orwoll, E S; Barrett-Connor, E; Melton, L J; Cummings, S R; Ensrud, K E
2013-01-01
We used a microsimulation model to estimate the threshold body weights at which screening bone densitometry is cost-effective. Among women aged 55-65 years and men aged 55-75 years without a prior fracture, body weight can be used to identify those for whom bone densitometry is cost-effective. Bone densitometry may be more cost-effective for those with lower body weight since the prevalence of osteoporosis is higher for those with low body weight. Our purpose was to estimate weight thresholds below which bone densitometry is cost-effective for women and men without a prior clinical fracture at ages 55, 60, 65, 75, and 80 years. We used a microsimulation model to estimate the costs and health benefits of bone densitometry and 5 years of fracture prevention therapy for those without prior fracture but with femoral neck osteoporosis (T-score ≤ -2.5) and a 10-year hip fracture risk of ≥3%. Threshold pre-test probabilities of low BMD warranting drug therapy at which bone densitometry is cost-effective were calculated. Corresponding body weight thresholds were estimated using data from the Study of Osteoporotic Fractures (SOF), the Osteoporotic Fractures in Men (MrOS) study, and the National Health and Nutrition Examination Survey (NHANES) for 2005-2006. Assuming a willingness to pay of $75,000 per quality adjusted life year (QALY) and drug cost of $500/year, body weight thresholds below which bone densitometry is cost-effective for those without a prior fracture were 74, 90, and 100 kg, respectively, for women aged 55, 65, and 80 years; and were 67, 101, and 108 kg, respectively, for men aged 55, 75, and 80 years. For women aged 55-65 years and men aged 55-75 years without a prior fracture, body weight can be used to select those for whom bone densitometry is cost-effective.
Massof, Robert W
2014-10-01
A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
Clayson, Peter E; Miller, Gregory A
2017-01-01
Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
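The threshold-selection step can be sketched in code. The paper uses an Adapted Hill estimator and a semi-parametric bootstrap; the sketch below is a simplified stand-in that uses the ordinary Hill estimator, a naive nonparametric bootstrap, and hypothetical heavy-tailed rainfall data, so it illustrates the idea of picking the threshold with the smallest bootstrap MSE rather than reproducing the authors' procedure.

```python
import numpy as np

def hill_estimator(x, u):
    """Ordinary Hill estimate of the tail index for exceedances above threshold u."""
    exc = np.sort(x[x > u])
    if exc.size < 10:
        return np.nan
    return np.mean(np.log(exc) - np.log(u))

def bootstrap_mse(x, u, n_boot=200, seed=None):
    """Naive bootstrap MSE of the Hill estimate at threshold u,
    measured against the full-sample (pilot) estimate."""
    rng = np.random.default_rng(seed)
    pilot = hill_estimator(x, u)
    reps = [hill_estimator(rng.choice(x, size=x.size, replace=True), u)
            for _ in range(n_boot)]
    reps = np.array([r for r in reps if np.isfinite(r)])
    return np.mean((reps - pilot) ** 2)

# Hypothetical daily rainfall series for one station (mm); heavy-tailed toy data.
rng = np.random.default_rng(0)
rain = rng.pareto(2.5, size=5000) * 20.0

candidates = np.quantile(rain, np.linspace(0.85, 0.99, 15))
mses = [bootstrap_mse(rain, u, seed=1) for u in candidates]
u_opt = candidates[int(np.nanargmin(mses))]
print(f"selected threshold ~ {u_opt:.1f} mm")
```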
Dispersive estimates for massive Dirac operators in dimension two
NASA Astrophysics Data System (ADS)
Erdoğan, M. Burak; Green, William R.; Toprak, Ebru
2018-05-01
We study the massive two dimensional Dirac operator with an electric potential. In particular, we show that the $t^{-1}$ decay rate holds in the $L^1 \to L^\infty$ setting if the threshold energies are regular. We also show these bounds hold in the presence of s-wave resonances at the threshold. We further show that, if the threshold energies are regular, then a faster decay rate of $t^{-1}(\log t)^{-2}$ is attained for large $t$, at the cost of logarithmic spatial weights. The free Dirac equation does not satisfy this bound due to the s-wave resonances at the threshold energies.
Implementation guide for turbidity threshold sampling: principles, procedures, and analysis
Jack Lewis; Rand Eads
2009-01-01
Turbidity Threshold Sampling uses real-time turbidity and river stage information to automatically collect water quality samples for estimating suspended sediment loads. The system uses a programmable data logger in conjunction with a stage measurement device, a turbidity sensor, and a pumping sampler. Specialized software enables the user to control the sampling...
Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi
2017-02-21
The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on the landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the casualties estimated from the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research.
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
The hockey-stick method to estimate evening dim light melatonin onset (DLMO) in humans.
Danilenko, Konstantin V; Verevkin, Evgeniy G; Antyufeev, Viktor S; Wirz-Justice, Anna; Cajochen, Christian
2014-04-01
The onset of melatonin secretion in the evening is the most reliable and most widely used index of circadian timing in humans. Saliva (or plasma) is usually sampled every 0.5-1 hours under dim-light conditions in the evening, 5-6 hours before usual bedtime, to assess the dim-light melatonin onset (DLMO). For many years, attempts have been made to find a reliable objective determination of melatonin onset time, either by fixed or dynamic threshold approaches. The hockey-stick algorithm developed here, used as an interactive computer-based approach, fits the evening melatonin profile by a piecewise linear-parabolic function represented as a straight line switching to the branch of a parabola. The switch point is considered to reliably estimate melatonin rise time. We applied the hockey-stick method to 109 half-hourly melatonin profiles to assess the DLMOs and compared these estimates to visual ratings from three experts in the field. The DLMOs of 103 profiles were considered to be clearly quantifiable. The hockey-stick DLMO estimates were on average 4 minutes earlier than the experts' estimates, with a range of -27 to +13 minutes; in 47% of the cases the difference fell within ±5 minutes, and in 98% within -20 to +13 minutes. The raters' and hockey-stick estimates showed poor accordance with DLMOs defined by threshold methods. Thus, the hockey-stick algorithm is a reliable objective method to estimate melatonin rise time, which does not depend on a threshold value and is free from errors arising from differences in subjective circadian phase estimates. The method is available as a computerized program that can be easily used in research settings and clinical practice for either salivary or plasma melatonin values.
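A minimal sketch of the hockey-stick idea: fit a straight line that switches to a parabolic branch and take the switch point as the DLMO estimate. The data, the exact parameterization, and the starting values below are hypothetical and may differ from the published interactive program.

```python
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(t, a, b, c, t0):
    """Straight line a + b*t that switches to a parabolic branch at time t0."""
    return a + b * t + np.where(t > t0, c * (t - t0) ** 2, 0.0)

# Hypothetical half-hourly evening saliva samples: clock time (h) and melatonin (pg/ml).
t = np.arange(18.0, 24.0, 0.5)
y = np.array([1.2, 1.0, 1.3, 1.1, 1.4, 1.2, 1.5, 2.5, 4.8, 8.9, 14.0, 20.5])

p0 = [1.0, 0.0, 2.0, 21.0]                 # initial guess; t0 near the visible rise
popt, _ = curve_fit(hockey_stick, t, y, p0=p0)
print(f"estimated DLMO (switch point) ~ {popt[3]:.2f} h")
```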
Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.
Cox, Louis Anthony Tony
2018-07-01
Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
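The smearing effect the paper describes can be illustrated with a small simulation: a true step-function concentration-response with a hypothetical threshold, plus classical measurement error in the exposure estimates, yields an apparent risk that rises smoothly through concentrations well below the threshold. All numbers below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# True concentration-response: a step function with a response threshold.
threshold = 20.0                    # µg/m^3 (hypothetical)
base_risk, excess_risk = 0.01, 0.02

n = 200_000
true_conc = rng.uniform(5.0, 35.0, n)
risk = np.where(true_conc >= threshold, base_risk + excess_risk, base_risk)
outcome = rng.random(n) < risk

# Estimated exposure = true exposure + classical measurement error.
est_conc = true_conc + rng.normal(0.0, 5.0, n)

# Empirical C-R curve against the *estimated* concentrations: it rises smoothly
# through the threshold instead of jumping at it.
bins = np.arange(5, 36, 2.5)
idx = np.digitize(est_conc, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.any():
        print(f"{bins[k-1]:5.1f}-{bins[k]:5.1f} µg/m^3: risk = {outcome[sel].mean():.4f}")
```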
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua D.; Hartse, Hans
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes m_b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
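The screening step of a single-channel waveform correlation detector can be sketched as below: slide the template over the target data, compute the normalized correlation coefficient at each lag, and flag lags that exceed a threshold. The multichannel, array-stacked statistic, the reciprocity-based magnitude mapping, and the adaptive thresholds of the paper are not reproduced; the data and threshold are hypothetical.

```python
import numpy as np

def correlation_detector(data, template, threshold=0.7):
    """Slide a template over the data, compute the normalized correlation
    coefficient at each lag, and return lags exceeding the threshold."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - m + 1)
    for k in range(len(cc)):
        w = data[k:k + m]
        s = w.std()
        cc[k] = 0.0 if s == 0 else np.dot(w - w.mean(), t) / (m * s)
    return np.flatnonzero(cc > threshold), cc

rng = np.random.default_rng(5)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
data = 0.2 * rng.standard_normal(2000)
data[600:700] += template          # bury one copy of the template in noise
hits, cc = correlation_detector(data, template)
print("detections near sample:", hits[:5], "... peak cc =", round(cc.max(), 2))
```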
Miles, Jeffrey Hilton
2011-05-01
Combustion noise from turbofan engines has become important, as the noise from sources like the fan and jet are reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics based coherence threshold. Examples are presented showing that the underlying tonal and coherent broad band structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise.
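For reference, a commonly used statistics-based threshold for magnitude-squared coherence estimated from n_seg independent (non-overlapping) segments is 1 - alpha^(1/(n_seg - 1)); the sketch below evaluates it for a few segment counts and includes a Fisher z-transform helper. Whether to use the independent-record count or the larger overlapped-record count is exactly the choice the paper examines; the numbers here are illustrative only and do not reproduce the aligned/un-aligned procedure.

```python
import numpy as np

def msc_threshold(n_seg, alpha=0.05):
    """Statistics-based threshold for magnitude-squared coherence (MSC):
    the value exceeded with probability alpha when the true coherence is zero,
    for an estimate averaged over n_seg independent (non-overlapping) segments."""
    return 1.0 - alpha ** (1.0 / (n_seg - 1))

def fisher_z(msc):
    """Fisher z-transform of the coherence magnitude, often used to
    stabilize the variance of coherence estimates."""
    return np.arctanh(np.sqrt(msc))

# Independent-record count vs. the (larger) overlapped-record count:
for n in (32, 64, 128):
    thr = msc_threshold(n)
    print(f"n_seg = {n:4d}  ->  MSC threshold = {thr:.4f}, Fisher z = {fisher_z(thr):.3f}")
```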
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90% at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89% (95% CI 82–96%) and 41% (95% CI 23–59%), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88% (95% CI 79–97%) and 55% (95% CI 42–68%), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90% but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90% for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2018-01-01
Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for the routine genetic evaluations for ranking in endurance horses.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
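A minimal sketch of the underlying idea, assuming hypothetical wind-adjusted winter temperatures and a 90-day heating season: fit a Gaussian kernel density estimate and solve for the temperature whose lower-tail probability corresponds to one day in N winters. The thesis's actual algorithm and bandwidth choices may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(7)

# Hypothetical daily average wind-adjusted winter temperatures (deg C), 30 winters x 90 days.
temps = rng.normal(loc=-2.0, scale=6.0, size=30 * 90)
kde = gaussian_kde(temps)

def one_in_n_threshold(kde, data, n_years, days_per_winter=90):
    """Temperature t such that P(T <= t) = 1 / (n_years * days_per_winter)
    under the kernel density estimate."""
    p_target = 1.0 / (n_years * days_per_winter)
    def tail_gap(t):
        return kde.integrate_box_1d(-np.inf, t) - p_target
    return brentq(tail_gap, data.min() - 20.0, data.max())

for n in (10, 20, 50):
    print(f"one-in-{n} threshold ~ {one_in_n_threshold(kde, temps, n):.1f} deg C")
```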
Rare event computation in deterministic chaotic systems using genealogical particle analysis
NASA Astrophysics Data System (ADS)
Wouters, J.; Bouchet, F.
2016-09-01
In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David
2003-06-01
To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
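Under the simplifying assumption that each stimulus presentation at a location is seen independently with probability p, the chance that the 'three seen or three missed' criterion labels the location defective is the chance that the third miss occurs before the third seen stimulus. The short enumeration below compares this with a single-stimulus criterion; it is an illustration of the probability argument, not the simulation used in the study.

```python
from math import comb

def p_classified_defective(p_seen):
    """Probability that a test location is classified defective under a
    'three seen or three missed' multisampling criterion, assuming each
    stimulus is seen independently with probability p_seen."""
    q = 1.0 - p_seen
    # The third miss must occur before the third seen stimulus:
    return sum(comb(2 + j, j) * p_seen**j * q**3 for j in range(3))

for p in (0.95, 0.5, 0.05):
    single = 1.0 - p                       # conventional single-stimulus criterion
    multi = p_classified_defective(p)
    print(f"p(seen)={p:.2f}: single-sample {single:.3f}, multisampling {multi:.3f}")
```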
NASA Astrophysics Data System (ADS)
Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola
2018-03-01
We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
Thermal sensitivity and cardiovascular reactivity to stress in healthy males.
Conde-Guzón, Pablo Antonio; Bartolomé-Albistegui, María Teresa; Quirós, Pilar; Cabestrero, Raúl
2011-11-01
This paper examines the association of cardiovascular reactivity with thermal thresholds (detection and unpleasantness). Heart period (HP), systolic (SBP) and diastolic (DBP) blood pressure of 42 healthy young males were recorded during a cardiovascular reactivity task (a videogame based upon Sidman's avoidance paradigm). Thermal sensitivity, assessing detection and unpleasantness thresholds with radiant heat on the forearm, was also estimated for participants. Participants whose differential scores in the cardiovascular variables from baseline to task were ≥ P65 were considered reactors, and those whose differential scores were ≤ P35 were considered non-reactors. Significant differences were observed between groups in the unpleasantness thresholds for blood pressure (BP) but not for HP. Reactors exhibited significantly higher unpleasantness thresholds than non-reactors. No significant differences were obtained in detection thresholds between groups.
Focks, D A; Brenner, R J; Hayes, J; Daniels, E
2000-01-01
The expense and ineffectiveness of drift-based insecticide aerosols to control dengue epidemics has led to suppression strategies based on eliminating larval breeding sites. With the notable but short-lived exceptions of Cuba and Singapore, these source reduction efforts have met with little documented success; failure has chiefly been attributed to inadequate participation of the communities involved. The present work attempts to estimate transmission thresholds for dengue based on an easily-derived statistic, the standing crop of Aedes aegypti pupae per person in the environment. We have developed these thresholds for use in the assessment of risk of transmission and to provide targets for the actual degree of suppression required to prevent or eliminate transmission in source reduction programs. The notion of thresholds is based on 2 concepts: the mass action principle-the course of an epidemic is dependent on the rate of contact between susceptible hosts and infectious vectors, and threshold theory-the introduction of a few infectious individuals into a community of susceptible individuals will not give rise to an outbreak unless the density of vectors exceeds a certain critical level. We use validated transmission models to estimate thresholds as a function of levels of pre-existing antibody levels in human populations, ambient air temperatures, and size and frequency of viral introduction. Threshold levels were estimated to range between about 0.5 and 1.5 Ae. aegypti pupae per person for ambient air temperatures of 28 degrees C and initial seroprevalences ranging between 0% and 67%. Surprisingly, the size of the viral introduction used in these studies, ranging between 1 and 12 infectious individuals per year, was not seen to significantly influence the magnitude of the threshold. From a control perspective, these results are not particularly encouraging. The ratio of Ae. aegypti pupae to human density has been observed in limited field studies to range between 0.3 and >60 in 25 sites in dengue-endemic or dengue-susceptible areas in the Caribbean, Central America, and Southeast Asia. If, for purposes of illustration, we assume an initial seroprevalence of 33%, the degree of suppression required to essentially eliminate the possibility of summertime transmission in Puerto Rico, Honduras, and Bangkok, Thailand was estimated to range between 10% and 83%; however, in Mexico and Trinidad, reductions of >90% would be required. A clearer picture of the actual magnitude of the reductions required to eliminate the threat of transmission is provided by the ratio of the observed standing crop of Ae. aegypti pupae per person and the threshold. For example, in a site in Mayaguez, Puerto Rico, the ratio of observed to threshold was 1.7, meaning roughly that about 7 of every 17 breeding containers would have to be eliminated. For Reynosa, Mexico, with a ratio of approximately 10, 9 of every 10 containers would have to be eliminated. For sites in Trinidad, with ratios averaging approximately 25, the elimination of 24 of every 25 would be required. With the exceptions of Cuba and Singapore, no published reports of sustained source reduction efforts have achieved anything near these levels of reductions in breeding containers. Practical advice on the use of thresholds is provided for operational control projects.
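The container-reduction arithmetic in the abstract follows from the ratio of the observed standing crop to the threshold: assuming containers contribute equally to pupal production, the fraction to eliminate is roughly 1 - 1/ratio. The short check below reproduces the quoted examples.

```python
def required_reduction(observed_to_threshold_ratio):
    """Fraction of pupal production (or equally productive containers) that must
    be eliminated to bring the standing crop down to the transmission threshold,
    assuming containers contribute equally."""
    return 1.0 - 1.0 / observed_to_threshold_ratio

for site, ratio in [("Mayaguez, Puerto Rico", 1.7),
                    ("Reynosa, Mexico", 10.0),
                    ("Trinidad (average)", 25.0)]:
    r = required_reduction(ratio)
    print(f"{site}: ratio {ratio:4.1f} -> eliminate ~{r:.0%} of production")
```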
MacNeilage, Paul R.; Turner, Amanda H.
2010-01-01
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s²). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s²). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843
Classification criteria and probability risk maps: limitations and perspectives.
Saisana, Michaela; Dubois, Gregoire; Chaloulakou, Archontoula; Spyrellis, Nikolas
2004-03-01
Delineation of polluted zones with respect to regulatory standards, accounting at the same time for the uncertainty of the estimated concentrations, relies on classification criteria that can lead to significantly different pollution risk maps, which, in turn, can depend on the regulatory standard itself. This paper reviews four popular classification criteria related to the violation of a probability threshold or a physical threshold, using annual (1996-2000) nitrogen dioxide concentrations from 40 air monitoring stations in Milan. The relative advantages and practical limitations of each criterion are discussed, and it is shown that some of the criteria are more appropriate for the problem at hand and that the choice of the criterion can be supported by the statistical distribution of the data and/or the regulatory standard. Finally, the polluted area is estimated over the different years and concentration thresholds using the appropriate risk maps as an additional source of uncertainty.
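One of the reviewed criteria, flagging a location when the probability of exceeding the regulatory standard exceeds a probability threshold, can be sketched from an estimate and its kriging standard deviation under a Gaussian assumption; the standard, the probability threshold, and the estimates below are hypothetical, and the paper discusses when such an assumption is or is not appropriate.

```python
from scipy.stats import norm

def prob_exceeds(estimate, kriging_sd, standard):
    """Probability that the true concentration exceeds the regulatory standard,
    assuming the estimation error is Gaussian with the given kriging std. dev."""
    return norm.sf(standard, loc=estimate, scale=kriging_sd)

# Hypothetical NO2 annual means (µg/m^3) at three unsampled locations.
standard = 40.0       # regulatory standard (hypothetical value)
p_threshold = 0.2     # probability threshold for flagging a location as polluted
for est, sd in [(35.0, 8.0), (42.0, 3.0), (30.0, 4.0)]:
    p = prob_exceeds(est, sd, standard)
    flag = "polluted" if p > p_threshold else "not flagged"
    print(f"estimate {est:.0f} ± {sd:.0f}: P(exceed) = {p:.2f} -> {flag}")
```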
Psychophysical estimation of the effects of aging on direction-of-heading judgments
NASA Astrophysics Data System (ADS)
Raghuram, Aparna; Lakshminarayanan, Vasudevan
2011-11-01
We conducted psychophysical experiments on direction-of-heading judgments using old and young subjects. Subjects estimated heading directions during translation perpendicular to the vertical (frontoparallel) plane; we found that heading judgments were affected by age. Increasing the random dot density in the stimulus from 24 to 400 dots did not improve threshold significantly, and older subjects performed worse at the highest density of 400 dots. The speed of the radial motion was important: heading was more difficult to judge with slower radial motion than with faster radial motion, because the larger displacement of the dots at faster speeds made the focus of expansion easier to locate. Gender differences indicated that older women had a higher threshold than older men; this was only significant for the faster simulated radial speed. A general trend of women having a higher threshold than men was noticed.
Adami, Silvano; Bertoldo, Francesco; Gatti, Davide; Minisola, Giovanni; Rossini, Maurizio; Sinigaglia, Luigi; Varenna, Massimo
2013-09-01
The definition of osteoporosis was based for several years on bone mineral density values, which were used by most guidelines for defining treatment thresholds. The availability of tools for the estimation of fracture risk, such as FRAX™ or its adapted Italian version, DeFRA, is providing a way to grade osteoporosis severity. By applying these new tools, the criteria identified in Italy for treatment reimbursability (e.g., "Nota 79") are confirmed as extremely conservative. The new fracture risk-assessment tools provide continuous risk values that can be used by health authorities (or "payers") for identifying treatment thresholds. FRAX estimates the risk for "major osteoporotic fractures," which are not counted in registered fracture trials. Here, we elaborate an algorithm to convert vertebral and nonvertebral fractures to the "major fractures" of FRAX, and this allows a cost-effectiveness assessment for each drug.
Zhao, Tuo; Liu, Han
2016-01-01
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
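For orientation, the iterative shrinkage thresholding template that PISTA and APISTA build on can be sketched in its simplest convex (lasso) form; the soft-thresholding proximal step below is the "thresholding" the name refers to. The path-following scheme, the nonconvex penalties, and the coordinate-descent subroutine of APISTA are not reproduced here.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft-thresholding operator: the proximal map of lam*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(X, y, lam, n_iter=500):
    """Basic ISTA for the lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(ista(X, y, lam=5.0), 2))   # recovers a sparse estimate of beta_true
```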
ProUCL Version 4.0 Technical Guide
Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...
Detection Thresholds of Falling Snow from Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Jackson, Gail
2012-01-01
Precipitation, including rain and snow, is a critical part of the Earth's energy and hydrology cycles. In order to collect information on the complete global precipitation cycle and to understand the energy budget in terms of precipitation, uniform global estimates of both liquid and frozen precipitation must be collected. Active observations of falling snow are somewhat easier to estimate since the radar will detect the precipitation particles and one only needs to know surface temperature to determine if it is liquid rain or snow. The challenges of estimating falling snow from passive spaceborne observations still exist, though progress is being made. While these challenges are still being addressed, knowledge of their impact on expected retrieval results is an important key for understanding falling snow retrieval estimations. Important information to assess falling snow retrievals includes knowing thresholds of detection for active and passive sensors, various sensor channel configurations, snow event system characteristics, snowflake particle assumptions, and surface types. For example, can a lake effect snow system with low (2.5 km) cloud tops having an ice water content (IWC) at the surface of 0.25 g m⁻³ and dendrite snowflakes be detected? If this information is known, we can focus retrieval efforts on detectable storms and concentrate advances on achievable results. Here, the focus is to determine thresholds of detection for falling snow for various snow conditions over land and lake surfaces. The analysis relies on Weather Research and Forecasting (WRF) simulations of falling snow cases, since simulations provide all the information needed to determine both the measurements from space and the ground truth. Results are presented for active radar at Ku, Ka, and W-band and for passive radiometer channels from 10 to 183 GHz (Skofronick-Jackson, et al. submitted to IEEE TGRS, April 2012). The notable results show: (1) the W-band radar has detection thresholds more than an order of magnitude lower than the future GPM sensors, (2) the cloud structure macrophysics influences the thresholds of detection for passive channels, (3) the snowflake microphysics plays a large role in the detection threshold for active and passive instruments, (4) with reasonable assumptions, the passive 166 GHz channel has detection threshold values comparable to the GPM DPR Ku and Ka band radars, with 0.05 g m⁻³ detected at the surface, or a 0.5-1 mm hr⁻¹ melted snow rate (equivalent to a 0.5-2 cm hr⁻¹ solid fluffy snowflake rate).
Decroo, Tom; Henríquez-Trujillo, Aquiles R; De Weggheleire, Anja; Lynen, Lutgarde
2017-10-11
A recently published Ugandan study on tuberculosis (TB) diagnosis in HIV-positive patients with presumptive smear-negative TB showed that out of 90 patients who started TB treatment, 20% (18/90) had a positive Xpert MTB/RIF (Xpert) test, 24% (22/90) had a negative Xpert test, and 56% (50/90) were started without Xpert testing. Although Xpert testing was available, clinicians did not use it systematically. Here we aim to show more objectively the process of clinical decision-making. First, we estimated that the pre-test probability of TB, or the prevalence of TB in smear-negative HIV-infected patients with signs of presumptive TB in Uganda, was 17%. Second, we argue that the treatment threshold, the probability of disease at which the utility of treating and not treating is the same, and above which treatment should be started, should be determined. In Uganda, the treatment threshold has not yet been formally established. In Rwanda, the calculated treatment threshold was 12%. Hence, one could argue that the threshold was reached without even considering additional tests. Still, Xpert testing can be useful when the probability of disease is above the treatment threshold, but only when a negative Xpert result can lower the probability of disease enough to cross the treatment threshold. This occurs when the pre-test probability is lower than the test-treat threshold, the probability of disease at which the utility of testing and the utility of treating without testing is the same. We estimated that the test-treat threshold was 28%. Finally, to show the effect of the presence or absence of arguments on the probability of TB, we use confirming and excluding power, and a log10 odds scale to combine arguments. If the pre-test probability is above the test-treat threshold, empirical treatment is justified, because even a negative Xpert will not lower the post-test probability below the treatment threshold. However, Xpert testing for the diagnosis of TB should be performed in patients for whom the probability of TB is lower than the test-treat threshold. Especially in resource-constrained settings, clinicians should be encouraged to take clinical decisions and use scarce resources rationally.
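The threshold reasoning can be made concrete with the odds form of Bayes' theorem. The pre-test probability (17%), treatment threshold (12%), and test-treat threshold (28%) below come from the abstract; the negative likelihood ratio assumed for Xpert in this setting is a hypothetical illustration value.

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pre_test = 0.17            # prevalence of TB in this patient group (from the abstract)
treat_threshold = 0.12     # treatment threshold (from the abstract)
test_treat_threshold = 0.28
lr_negative = 0.3          # hypothetical negative likelihood ratio for Xpert here

p_after_negative = post_test_probability(pre_test, lr_negative)
print(f"post-test probability after a negative Xpert: {p_after_negative:.2f}")
print("pre-test above treatment threshold:", pre_test > treat_threshold)
print("negative Xpert crosses below treatment threshold:",
      p_after_negative < treat_threshold)
print("testing informative only if pre-test < test-treat threshold:",
      pre_test < test_treat_threshold)
```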
Vocabulary Acquisition in L2: Does CALL Really Help?
ERIC Educational Resources Information Center
Averianova, Irina
2015-01-01
Language competence in various communicative activities in L2 largely depends on the learners' size of vocabulary. The target vocabulary of adult L2 learners should be between 2,000 high frequency words (a critical threshold) and 10,000 word families (for comprehension of university texts). For a TOEIC test, the threshold is estimated to be…
Experimental evidence of a pathogen invasion threshold
Krkošek, Martin
2018-01-01
Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts l−1) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
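A minimal sketch of the threshold method itself, using simulated lognormal rain rates rather than the Darwin gauge data: the threshold coefficient S(tau) is fitted by regression through the origin of area-average rain rate on the fractional area above the 10 mm/h threshold, then applied to a new scene. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
tau = 10.0  # rain-rate threshold (mm/h)

def simulate_scene(n_gauges=500, rain_fraction=0.3, mu=0.5, sigma=1.2):
    """One 'scene' of instantaneous gauge rain rates: mostly zero, lognormal where raining."""
    r = np.zeros(n_gauges)
    raining = rng.random(n_gauges) < rain_fraction
    r[raining] = rng.lognormal(mu, sigma, raining.sum())
    return r

# Calibrate the threshold coefficient S(tau) on many scenes: <R> ~= S(tau) * F(tau).
scenes = [simulate_scene() for _ in range(400)]
F = np.array([np.mean(r > tau) for r in scenes])     # fractional area above threshold
Rbar = np.array([r.mean() for r in scenes])          # true area-average rain rate
S = np.sum(F * Rbar) / np.sum(F * F)                 # least squares through the origin

new_scene = simulate_scene(rain_fraction=0.4)
estimate = S * np.mean(new_scene > tau)
print(f"S(tau) = {S:.1f} mm/h; estimated <R> = {estimate:.2f}, true <R> = {new_scene.mean():.2f}")
```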
Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph
2013-01-01
There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g m⁻³ detected at the surface, or approximately a 0.5-1.0 mm h⁻¹ melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.
Lactate threshold by muscle electrical impedance in professional rowers
NASA Astrophysics Data System (ADS)
Jotta, B.; Coutinho, A. B. B.; Pino, A. V.; Souza, M. N.
2017-04-01
Lactate threshold (LT) is one of the physiological parameters usually used in rowing sport training prescription because it indicates the transition from aerobic to anaerobic metabolism. Assessment of LT is classically based on a series of blood lactate concentrations obtained during progressive exercise tests and thus has an invasive aspect. The feasibility of noninvasive LT estimation through bioelectrical impedance spectroscopy (BIS) data collected in thigh muscles during rowing ergometer exercise tests was investigated. Nineteen professional rowers, age 19 (mean) ± 4.8 (standard deviation) yr, height 187.3 ± 6.6 cm, body mass 83 ± 7.7 kg, and training experience of 7 ± 4 yr, were evaluated in a rowing ergometer progressive test with paired measures of blood lactate concentration and BIS in thigh muscles. Bioelectrical impedance data were obtained by using a bipolar method of spectroscopy based on the current response to a voltage step. An electrical model was used to interpret the BIS data and to derive parameters that were investigated to estimate LT noninvasively. From the serial blood lactate measurements, LT was also determined through the Dmax method (LTDmax). The zero crossing of the second derivative of the kinetics of the electrode capacitance (Ce), one of the BIS parameters, was used to estimate LT. The agreement between the LT estimates through BIS (LTBIS) and through the Dmax method (LTDmax) was evaluated using Bland-Altman plots, leading to a mean difference between the estimates of just 0.07 W and a Pearson correlation coefficient r = 0.85. This result supports the use of the proposed method based on BIS parameters for noninvasively estimating the lactate threshold in rowing.
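The Dmax reference method mentioned above (LTDmax) can be sketched directly: fit a curve to the lactate-workload data and take the point farthest from the straight line joining the first and last samples. The incremental-test data below are hypothetical, and the original Dmax implementation may use a different curve fit.

```python
import numpy as np

# Hypothetical incremental rowing test: workload (W) and blood lactate (mmol/l).
power = np.array([100, 140, 180, 220, 260, 300, 340])
lactate = np.array([1.1, 1.3, 1.6, 2.2, 3.4, 5.6, 9.0])

# Fit a 3rd-order polynomial to the lactate curve and evaluate it densely.
coeffs = np.polyfit(power, lactate, 3)
x = np.linspace(power[0], power[-1], 1000)
y = np.polyval(coeffs, x)

# Dmax: the point of maximum perpendicular distance from the line joining
# the first and last data points.
p0 = np.array([power[0], lactate[0]])
p1 = np.array([power[-1], lactate[-1]])
d = np.abs((p1[0] - p0[0]) * (p0[1] - y) - (p0[0] - x) * (p1[1] - p0[1]))
d /= np.hypot(*(p1 - p0))
print(f"Dmax lactate threshold ~ {x[np.argmax(d)]:.0f} W")
```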
Skin cancer incidence among atomic bomb survivors from 1958 to 1996.
Sugiyama, Hiromi; Misumi, Munechika; Kishikawa, Masao; Iseki, Masachika; Yonehara, Shuji; Hayashi, Tomayoshi; Soda, Midori; Tokuoka, Shoji; Shimizu, Yukiko; Sakata, Ritsu; Grant, Eric J; Kasagi, Fumiyoshi; Mabuchi, Kiyohiko; Suyama, Akihiko; Ozasa, Kotaro
2014-05-01
The radiation risk of skin cancer by histological types has been evaluated in the atomic bomb survivors. We examined 80,158 of the 120,321 cohort members who had their radiation dose estimated by the latest dosimetry system (DS02). Potential skin tumors diagnosed from 1958 to 1996 were reviewed by a panel of pathologists, and radiation risk of the first primary skin cancer was analyzed by histological types using a Poisson regression model. A significant excess relative risk (ERR) of basal cell carcinoma (BCC) (n = 123) was estimated at 1 Gy (0.74, 95% confidence interval (CI): 0.26, 1.6) for those age 30 at exposure and age 70 at observation based on a linear-threshold model with a threshold dose of 0.63 Gy (95% CI: 0.32, 0.89) and a slope of 2.0 (95% CI: 0.69, 4.3). The estimated risks were 15, 5.7, 1.3 and 0.9 for age at exposure of 0-9, 10-19, 20-39, over 40 years, respectively, and the risk increased 11% with each one-year decrease in age at exposure. The ERR for squamous cell carcinoma (SCC) in situ (n = 64) using a linear model was estimated as 0.71 (95% CI: 0.063, 1.9). However, there were no significant dose responses for malignant melanoma (n = 10), SCC (n = 114), Paget disease (n = 10) or other skin cancers (n = 15). The significant linear radiation risk for BCC with a threshold at 0.63 Gy suggested that the basal cells of the epidermis had a threshold sensitivity to ionizing radiation, especially for young persons at the time of exposure.
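The reported BCC dose response can be checked from the quoted linear-threshold parameters, ERR(d) = slope × max(0, d − threshold): with a slope of 2.0 and a threshold of 0.63 Gy this gives 2.0 × (1 − 0.63) ≈ 0.74 at 1 Gy, matching the quoted ERR. The short sketch below evaluates the same function; the age-at-exposure and attained-age modifiers reported in the abstract are not included.

```python
def err_linear_threshold(dose_gy, slope=2.0, threshold_gy=0.63):
    """Excess relative risk under a linear-threshold dose-response model."""
    return slope * max(0.0, dose_gy - threshold_gy)

for d in (0.3, 0.63, 1.0, 2.0):
    print(f"ERR at {d:.2f} Gy = {err_linear_threshold(d):.2f}")
```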
Geostatistical risk estimation at waste disposal sites in the presence of hot spots.
Komnitsas, Kostas; Modis, Kostas
2009-05-30
The present paper aims to estimate risk by using geostatistics at the wider coal mining/waste disposal site of Belkovskaya, Tula region, in Russia. In this area the presence of hot spots causes a spatial trend in the mean value of the random field and a non-Gaussian data distribution. Prior to application of geostatistics, subtraction of trend and appropriate smoothing and transformation of the data into a Gaussian form were carried out; risk maps were then generated for the wider study area in order to assess the probability of exceeding risk thresholds. Finally, the present paper discusses the need for homogenization of soil risk thresholds regarding hazardous elements that will enhance reliability of risk estimation and enable application of appropriate rehabilitation actions in contaminated areas.
Morignat, Eric; Gay, Emilie; Vinard, Jean-Luc; Calavas, Didier; Hénaux, Viviane
2015-07-01
In the context of climate change, the frequency and severity of extreme weather events are expected to increase in temperate regions, and potentially have a severe impact on farmed cattle through production losses or deaths. In this study, we used distributed lag non-linear models to describe and quantify the relationship between a temperature-humidity index (THI) and cattle mortality in 12 areas in France. THI incorporates the effects of both temperature and relative humidity and has already been used to quantify the degree of heat stress on dairy cattle, because it reflects the physical stress arising from extreme conditions better than air temperature alone. Relationships between daily THI and mortality were modeled separately for dairy and beef cattle during the 2003-2006 period. Our general approach was to first determine the shape of the THI-mortality relationship in each area by modeling THI with natural cubic splines. We then modeled each relationship assuming a three-piecewise linear function, to estimate the critical cold and heat THI thresholds, for each area, delimiting the thermoneutral zone (i.e. where the risk of death is at its minimum), and the cold and heat effects below and above these thresholds, respectively. Area-specific estimates of the cold or heat effects were then combined in a hierarchical Bayesian model to compute the pooled effects of THI increase or decrease on dairy and beef cattle mortality. A U-shaped relationship, indicating a mortality increase below the cold threshold and above the heat threshold, was found in most of the study areas for dairy and beef cattle. The pooled estimate of the mortality risk associated with a 1°C decrease in THI below the cold threshold was 5.0% for dairy cattle [95% posterior interval: 4.4, 5.5] and 4.4% for beef cattle [2.0, 6.5]. The pooled mortality risk associated with a 1°C increase above the hot threshold was estimated to be 5.6% [5.0, 6.2] for dairy and 4.6% [0.9, 8.7] for beef cattle. Knowing the thermoneutral zone and temperature effects outside this zone is of primary interest for farmers because it can help determine when to implement appropriate preventive and mitigation measures. Copyright © 2015 Elsevier Inc. All rights reserved.
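For reference, a minimal sketch of a temperature-humidity index calculation. This is one widely used NRC-type formulation for cattle and may not be the exact THI variant used in the study above; temperature is in °C and relative humidity in percent.

```python
def thi(temp_c: float, rh_percent: float) -> float:
    """One common temperature-humidity index formulation (assumed, not the paper's)."""
    return (1.8 * temp_c + 32.0) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26.0)

# Example: a hot, humid day versus a mild day.
print(thi(32.0, 70.0))   # high heat stress
print(thi(18.0, 50.0))   # within a typical thermoneutral range
```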
Estimating Allee dynamics before they can be observed: polar bears as a case study.
Molnár, Péter K; Lewis, Mark A; Derocher, Andrew E
2014-01-01
Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species.
Estimating Allee Dynamics before They Can Be Observed: Polar Bears as a Case Study
Molnár, Péter K.; Lewis, Mark A.; Derocher, Andrew E.
2014-01-01
Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species. PMID:24427306
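A minimal, generic illustration of the Allee-threshold idea described above, not the authors' polar-bear model: with a mate-finding term N/(N+theta) reducing per-capita births and a constant death rate, the Allee threshold is the density at which per-capita growth crosses zero. All parameter values are hypothetical.

```python
from scipy.optimize import brentq

b, d, theta = 0.35, 0.20, 5.0   # per-capita birth rate, death rate, mate-finding half-saturation

def per_capita_growth(n):
    # Births are limited by the probability of finding a mate, deaths are constant.
    return b * n / (n + theta) - d

# Allee threshold: density where growth changes sign (negative below, positive above).
allee_threshold = brentq(per_capita_growth, 1e-6, 1e3)
print(f"Allee threshold ~ {allee_threshold:.2f} individuals per unit area")
```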
What to Do about Zero Frequency Cells when Estimating Polychoric Correlations
ERIC Educational Resources Information Center
Savalei, Victoria
2011-01-01
Categorical structural equation modeling (SEM) methods that fit the model to estimated polychoric correlations have become popular in the social sciences. When population thresholds are high in absolute value, contingency tables in small samples are likely to contain zero frequency cells. Such cells make the estimation of the polychoric…
Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.
2017-01-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocities values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W
2017-04-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
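A minimal sketch, under assumptions about the data layout, of the three group-velocity estimators discussed above (TTP, STP, STTH). Here `motion` is a hypothetical array of shear-wave motion amplitude with shape (n_positions, n_times), `x` holds lateral positions in meters and `t` the time samples in seconds; the STP/STTH logic is one plausible reading of the abstract, not the authors' exact algorithm.

```python
import numpy as np

def ttp_speed(motion, x, t):
    peak_times = t[np.argmax(motion, axis=1)]      # time of peak motion at each position
    return np.polyfit(peak_times, x, 1)[0]         # fit x = v*t + c; slope is group speed (m/s)

def stp_speed(motion, x, t):
    peak_positions = x[np.argmax(motion, axis=0)]  # position of peak motion at each time
    return np.polyfit(t, peak_positions, 1)[0]

def stth_speed(motion, x, t, threshold):
    pos_idx, time_idx = np.nonzero(motion >= threshold)   # keep only high-amplitude samples
    return np.polyfit(t[time_idx], x[pos_idx], 1)[0]
```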
Population density estimated from locations of individuals on a passive detector array
Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.
2009-01-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Archambault, L.; Starkschall, G.
2007-11-15
Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S
2007-11-01
Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 +/- 11% and 14 +/- 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 +/- 7% and 8 +/- 15% with and without audiovisual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
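A minimal sketch, under assumptions, of the duty-cycle matching step described above: given an external respiratory displacement trace, the delivery gate threshold is chosen so that the fraction of samples inside the gate matches the duty cycle obtained at simulation. The trace, its normalization (0 taken as full exhale), and the target duty cycle are hypothetical.

```python
import numpy as np

def duty_cycle(trace, threshold):
    """Fraction of time the beam would be on for a given displacement threshold."""
    return np.mean(trace <= threshold)

def delivery_gate_threshold(trace, target_duty_cycle):
    # The quantile of the trace solves the duty-cycle match directly
    # (an iterative search, as in the paper, would converge to the same value).
    return np.quantile(trace, target_duty_cycle)

trace = np.abs(np.sin(np.linspace(0, 20 * np.pi, 5000)))   # toy periodic breathing trace
target = 0.35                                              # e.g. duty cycle from simulation
thr = delivery_gate_threshold(trace, target)
print(f"threshold = {thr:.3f}, achieved duty cycle = {duty_cycle(trace, thr):.3f}")
```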
Climate Change, Population Immunity, and Hyperendemicity in the Transmission Threshold of Dengue
Oki, Mika; Yamamoto, Taro
2012-01-01
Background It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist–a state known as hyperendemicity–and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. Methods and Findings We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. Conclusions The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas. PMID:23144746
Climate change, population immunity, and hyperendemicity in the transmission threshold of dengue.
Oki, Mika; Yamamoto, Taro
2012-01-01
It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R(0)), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R(0) value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist-a state known as hyperendemicity-and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R(0). We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas.
Dhir, Ashish; Rogawski, Michael A
2018-05-01
Diazepam, administered by the intravenous, oral, or rectal routes, is widely used for the management of acute seizures. Dosage forms for delivery of diazepam by other routes of administration, including intranasal, intramuscular, and transbuccal, are under investigation. In predicting what dosages are necessary to terminate seizures, the minimal exposure required to confer seizure protection must be known. Here we administered diazepam by continuous intravenous infusion to obtain near-steady-state levels, which allowed an assessment of the minimal levels that elevate seizure threshold. The thresholds for various behavioral seizure signs (myoclonic jerk, clonus, and tonus) were determined with the timed intravenous pentylenetetrazol seizure threshold test in rats. Diazepam was administered to freely moving animals by continuous intravenous infusion via an indwelling jugular vein cannula. Blood samples for assay of plasma levels of diazepam and metabolites were recovered via an indwelling cannula in the contralateral jugular vein. The pharmacokinetic parameters of diazepam following a single 80-μg/kg intravenous bolus injection were determined using a noncompartmental pharmacokinetic approach. The derived parameters Vd, CL, t1/2α (distribution half-life), and t1/2β (terminal half-life) for diazepam were 608 mL, 22.1 mL/min, 13.7 minutes, and 76.8 minutes, respectively. Various doses of diazepam were continuously infused without or with an initial loading dose. At the end of the infusions, the thresholds for various behavioral seizure signs were determined. The minimal plasma diazepam concentration associated with threshold elevations was estimated at approximately 70 ng/mL. The active metabolites nordiazepam, oxazepam, and temazepam achieved levels that are expected to make only minor contributions to the threshold elevations. Diazepam elevates seizure threshold at steady-state plasma concentrations lower than previously recognized. The minimally effective plasma concentration provides a reference that may be considered when estimating the diazepam exposure required for acute seizure treatment. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
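A minimal sketch, with hypothetical concentration data, of the kind of noncompartmental calculation mentioned above: the terminal slope is taken from a log-linear fit of the last samples, AUC is computed by the trapezoidal rule, and CL, Vd and the terminal half-life follow from standard noncompartmental relations. A real analysis would handle the distribution phase and AUC extrapolation more carefully.

```python
import numpy as np

dose_ng = 80.0 * 0.25 * 1000.0                             # 80 ug/kg bolus to a hypothetical 0.25 kg rat, in ng
t = np.array([2, 5, 10, 20, 40, 60, 90, 120], float)       # minutes after bolus
c = np.array([110, 85, 60, 38, 22, 15, 9, 5.5], float)     # plasma concentration, ng/mL (hypothetical)

auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)     # AUC(0-tlast), linear trapezoid, ng*min/mL
lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]          # terminal rate constant from last 4 points, 1/min
auc_inf = auc_last + c[-1] / lam_z                         # extrapolate AUC to infinity
cl = dose_ng / auc_inf                                     # clearance, mL/min
vd = cl / lam_z                                            # terminal volume of distribution, mL
t_half = np.log(2.0) / lam_z                               # terminal half-life, min
print(f"CL = {cl:.1f} mL/min, Vd = {vd:.0f} mL, t1/2 = {t_half:.1f} min")
```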
2009-01-01
Background Airports represent a complex source type of increasing importance contributing to air toxics risks. Comprehensive atmospheric dispersion models are beyond the scope of many applications, so it would be valuable to rapidly but accurately characterize the risk-relevant exposure implications of emissions at an airport. Methods In this study, we apply a high resolution atmospheric dispersion model (AERMOD) to 32 airports across the United States, focusing on benzene, 1,3-butadiene, and benzo[a]pyrene. We estimate the emission rates required at these airports to exceed a 10⁻⁶ lifetime cancer risk for the maximally exposed individual (emission thresholds) and estimate the total population risk at these emission rates. Results The emission thresholds vary by two orders of magnitude across airports, with variability predicted by proximity of populations to the airport and mixing height (R2 = 0.74–0.75 across pollutants). At these emission thresholds, the population risk within 50 km of the airport varies by two orders of magnitude across airports, driven by substantial heterogeneity in total population exposure per unit emissions that is related to population density and uncorrelated with emission thresholds. Conclusion Our findings indicate that site characteristics can be used to accurately predict maximum individual risk and total population risk at a given level of emissions, but that optimizing on one endpoint will be non-optimal for the other. PMID:19426510
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model, controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperatures. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures are increasing along with climate change and adaptation is ongoing, more research is required to understand if and when thresholds should be modified.
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
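A minimal sketch of an order-statistic CFAR threshold in the spirit of the strategy above: the noise level in each test cell is estimated from the k-th order statistic of the surrounding reference cells, which is robust to a few strong interferers in the window. The guard/reference sizes, rank k, and scale factor are illustrative, not values from the report.

```python
import numpy as np

def os_cfar(power, guard=2, ref=16, k=12, scale=4.0):
    """Return a boolean detection mask for a 1-D power spectrum or range profile."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        left = power[max(0, i - guard - ref): max(0, i - guard)]
        right = power[i + guard + 1: i + guard + 1 + ref]
        window = np.concatenate([left, right])      # reference cells, excluding guard cells
        if len(window) < k:
            continue                                # not enough reference cells near the edges
        noise_est = np.sort(window)[k - 1]          # k-th order statistic as the noise estimate
        detections[i] = power[i] > scale * noise_est
    return detections
```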
System and method for quench and over-current protection of superconductor
Huang, Xianrui; Laskaris, Evangelos Trifon; Sivasubramaniam, Kiruba Haran; Bray, James William; Ryan, David Thomas; Fogarty, James Michael; Steinbach, Albert Eugene
2005-05-31
A system and method for protecting a superconductor. The system may comprise a current sensor operable to detect a current flowing through the superconductor. The system may comprise a coolant temperature sensor operable to detect the temperature of a cryogenic coolant used to cool the superconductor to a superconductive state. A control circuit is operable to estimate the superconductor temperature based on the current flow and the coolant temperature. The system may also be operable to compare the estimated superconductor temperature to at least one threshold temperature and to initiate a corrective action when the superconductor temperature exceeds the at least one threshold temperature.
Grading smart sensors: Performance assessment and ranking using familiar scores like A+ to D-
NASA Astrophysics Data System (ADS)
Kessel, Ronald T.
2005-03-01
Starting with the supposition that the product of smart sensors - whether autonomous, networked, or fused - is in all cases information, it is shown here using information theory how a metric Q, ranging between 0 and 100%, can be derived to assess the quality of the information provided. The analogy with student grades is immediately evident and elaborated. As with student grades, numerical percentages suggest more precision than can be justified, so a conversion to letter grades A+ to D- is desirable. Owing to the close analogy with familiar academic grades, moreover, the new grade is a measure of effectiveness (MOE) that commanders and decision makers should immediately appreciate and find quite natural, even if they do not care to follow the methodology behind the performance test, as they focus on higher-level strategic matters of sensor deployment or procurement. The metric is illustrated by translating three specialist performance tests - the Receiver Operating Characteristic (ROC) curve, the Constant False Alarm Rate (CFAR) approach, and confusion matrices - into letter grades for use then by strategists. Actual military and security systems are included among the examples.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
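A minimal sketch of a linear ordered statistic (OWA) aggregation and a naive least-squares fit of its weights from training pairs. The regularized, constrained learning described above is more involved; this shows only the core idea, and the fit below omits the usual nonnegativity and sum-to-one constraints.

```python
import numpy as np

def los(x, w):
    """Weighted average of the sample sorted in descending order (an OWA/LOS operator)."""
    return np.sort(x)[::-1] @ w

def fit_los_weights(samples, targets):
    """Least-squares estimate of LOS weights from (sample, desired output) pairs."""
    sorted_samples = np.sort(samples, axis=1)[:, ::-1]       # sort each training sample descending
    w, *_ = np.linalg.lstsq(sorted_samples, targets, rcond=None)
    return w                                                 # note: unconstrained, unlike a true OWA

x = np.array([3.0, 9.0, 1.0, 5.0])
print(los(x, np.array([1.0, 0.0, 0.0, 0.0])))   # all weight on the largest value -> maximum (9.0)
print(los(x, np.full(4, 0.25)))                 # equal weights -> mean (4.5)
```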
Flood return level analysis of Peaks over Threshold series under changing climate
NASA Astrophysics Data System (ADS)
Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.
2016-12-01
Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis under the stationarity assumption has been challenged by changing environments. A method that takes the nonstationary context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. With application to POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson distribution assumption. Flood return levels were extrapolated in a nonstationary context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed. The difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
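For context, a minimal sketch of the classical stationary POT return-level calculation: exceedances of a threshold u follow a generalized Pareto distribution (shape xi, scale sigma) and occur at an average rate lam per year under the Poisson assumption discussed above. A negative-binomial arrival model changes the exceedance-count distribution, but the formula below keeps the standard Poisson-GPD form; all numbers are illustrative.

```python
import math

def pot_return_level(u, sigma, xi, lam, T):
    """T-year return level for a stationary Poisson-GPD peaks-over-threshold model."""
    if abs(xi) < 1e-8:                         # Gumbel-type limit as the shape parameter -> 0
        return u + sigma * math.log(lam * T)
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

# e.g. threshold 500 m3/s, GPD scale 120, shape 0.1, 3 exceedances/year, 100-year level
print(pot_return_level(u=500.0, sigma=120.0, xi=0.1, lam=3.0, T=100.0))
```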
The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.
1994-01-01
In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
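A minimal simulation sketch of the effect described above: subpixel area fractions are drawn from a beta distribution and a pixel is counted as covered only if its fraction exceeds a threshold, so the image-level estimate depends on the beta parameters (which stand in for pixel size and field structure) and on the threshold. The parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_area_fraction(a, b, threshold, n_pixels=100_000):
    subpixel_fraction = rng.beta(a, b, n_pixels)         # per-pixel true area fraction
    return np.mean(subpixel_fraction > threshold)        # thresholded (binary) area estimate

true_fraction = 0.3
a = 0.5                                                  # shape parameter (stands in for pixel size / structure)
b = a * (1 - true_fraction) / true_fraction              # chosen so the beta mean equals the true fraction
for thr in (0.25, 0.5, 0.75):
    print(f"threshold {thr}: estimated fraction {estimated_area_fraction(a, b, thr):.3f} "
          f"(true {true_fraction})")
```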
ERIC Educational Resources Information Center
Besser, Jana; Zekveld, Adriana A.; Kramer, Sophia E.; Ronnberg, Jerker; Festen, Joost M.
2012-01-01
Purpose: In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. Method: The…
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
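A minimal sketch of a sharp regression-discontinuity estimate using local linear regression on each side of the threshold. Variable names, the continuous outcome, and the bandwidth are illustrative; the study above used survival models rather than this simple linear form.

```python
import numpy as np

def rd_estimate(running, outcome, cutoff, bandwidth):
    """Jump in the fitted outcome at the cutoff, approaching from each side."""
    running, outcome = np.asarray(running, float), np.asarray(outcome, float)
    below = (running < cutoff) & (running >= cutoff - bandwidth)
    above = (running >= cutoff) & (running <= cutoff + bandwidth)
    fit_below = np.polyfit(running[below], outcome[below], 1)   # local linear fit, left side
    fit_above = np.polyfit(running[above], outcome[above], 1)   # local linear fit, right side
    return np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)

# Hypothetical usage: running variable = first CD4 count, outcome = some continuous proxy,
# cutoff = 200 cells/uL, bandwidth = 50 cells/uL.
```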
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their γ and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear γ-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in gamma and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest χ² and largest tail probability, is with a "linear gamma:linear neutron" model which postulates a γ threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
An adaptive design for updating the threshold value of a continuous biomarker.
Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2016-11-30
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F
2017-04-01
A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding values for hoof lesion resistance corresponded to sires with a higher prevalence of healthy daughters. The rank correlations between estimated breeding values ranged from 0.96 to 0.99 when predicted using either one of the 2 cohorts and from 0.94 to 0.99 when predicted using either a threshold or a linear model. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car’s length are presented in this paper, together with the results achieved by a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on the moving average and standard deviation, a two-extreme-peak detection method and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were obtained by analyzing changes in both the magnitude and the absolute z-component of the magnetic field. The tests, which were performed in four different cardinal directions, show differences in the values of the estimated lengths. When cars drove from south to north, the magnitude-based results were up to 1.2 m higher than the other results obtained using the threshold methods. Smaller differences in length were observed when the distances were measured between the two extreme peaks in the car magnetic signatures. The results are summarized in tables and the errors of the estimated lengths are presented. The maximum errors, relative to the real lengths, were up to 22%. PMID:28771171
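A minimal sketch of the simple threshold approach described above: speed comes from the time lag between the two AMR sensors, and length from how long the magnetic signature stays above a detection threshold at that speed. Signal names, the sampling rate, and the assumption that sensor B is downstream of sensor A are all hypothetical.

```python
import numpy as np

def vehicle_speed(sig_a, sig_b, sensor_spacing_m, fs_hz):
    """Speed from the cross-correlation lag between the two sensor signals (sensor B downstream)."""
    lag = np.argmax(np.correlate(sig_b, sig_a, mode="full")) - (len(sig_a) - 1)
    return sensor_spacing_m * fs_hz / lag           # m/s; assumes a positive, nonzero lag

def vehicle_length(sig, threshold, speed_mps, fs_hz):
    """Length from the time the signature magnitude exceeds a fixed threshold."""
    above = np.nonzero(sig > threshold)[0]
    occupancy_s = (above[-1] - above[0]) / fs_hz    # duration the signature exceeds the threshold
    return speed_mps * occupancy_s                  # length = speed * occupancy time
```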
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
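A minimal sketch of NEO-based spike detection with the conventional scaled-mean threshold; the automatic, gradient-based threshold optimization proposed above is not reproduced here, and the scale factor is only a typical choice.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    psi = neo(x)
    threshold = c * psi.mean()                  # conventional (approximate) threshold: scaled mean of NEO output
    return np.nonzero(psi > threshold)[0]       # sample indices flagged as spike candidates
```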
Wavelet methodology to improve single unit isolation in primary motor cortex cells
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.
2016-01-01
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
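A minimal sketch of wavelet denoising with the universal ("fixed form") threshold and soft thresholding, using PyWavelets. The paper compares several mother wavelets and threshold rules; this shows only one combination (Daubechies 4, soft rule, universal threshold) and is not the authors' pipeline.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest detail coefficients
    thr = sigma * np.sqrt(2 * np.log(len(x)))               # universal (fixed form) threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```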
NASA Technical Reports Server (NTRS)
Iversen, J. D.; White, B. R.; Pollack, J. B.; Greeley, R.
1976-01-01
Results are reported for wind-tunnel experiments performed to determine the threshold friction speed of particles with different densities. Experimentally determined threshold speeds are plotted as a function of particle diameter and in terms of threshold parameter vs particle friction Reynolds number. The curves are compared with those of previous experiments, and an A-B curve is plotted to show differences in threshold speed due to differences in size distributions and particle shapes. Effects of particle diameter are investigated, an expression for threshold speed is derived by considering the equilibrium forces acting on a single particle, and other approximately valid expressions are evaluated. It is shown that the assumption of universality of the A-B curve is in error at very low pressures for small particles and that only predictions which take account of both Reynolds number and effects of interparticle forces yield reasonable agreement with experimental data. Effects of nonerodible surface roughness are examined, and threshold speeds computed with allowance for this factor are compared with experimental values. Threshold friction speeds on Mars are then estimated for a surface pressure of 5 mbar, taking into account all the factors considered.
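A minimal sketch of the classical force-balance expression for threshold friction speed, u*_t = A * sqrt(((rho_p - rho) / rho) * g * d). The Reynolds-number dependence of the coefficient A and the interparticle-force corrections discussed above are not included; A = 0.1 is only a typical large-particle value, and the atmospheric densities are rough illustrative numbers.

```python
import math

def threshold_friction_speed(d_m, rho_p, rho_air, g=9.81, A=0.1):
    """Threshold friction speed (m/s) from the simple force-balance expression."""
    return A * math.sqrt((rho_p - rho_air) / rho_air * g * d_m)

# 200-micron quartz sand in Earth air versus a thin Mars-like atmosphere (approximate values).
print(threshold_friction_speed(200e-6, 2650.0, 1.2))       # Earth-like air density
print(threshold_friction_speed(200e-6, 2650.0, 0.015, g=3.71))   # Mars-like (~5 mbar), Mars gravity
```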
Probabilistic clustering of rainfall condition for landslide triggering
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Luciani, Silvia; Cesare Mondini, Alessandro; Kirschbaum, Dalia; Valigi, Daniela; Guzzetti, Fausto
2013-04-01
Landslides are widespread natural and man-made phenomena. They are triggered by earthquakes, rapid snow melting, and human activities, but mostly by typhoons and intense or prolonged rainfall. In Italy, they are mostly triggered by intense precipitation. The prediction of landslides triggered by rainfall over large areas is commonly based on the exploitation of empirical models. Empirical landslide rainfall thresholds are used to identify rainfall conditions for possible landslide initiation. It is common practice to define rainfall thresholds by assuming a power law lower boundary in the rainfall intensity-duration or cumulative rainfall-duration space above which landslides can occur. The boundary is defined by considering the rainfall conditions associated with landslide phenomena using heuristic approaches, and does not consider rainfall events that did not cause landslides. Here we present a new fully automatic method to identify the probability of landslide occurrence associated with rainfall conditions characterized by measures of intensity or cumulative rainfall and rainfall duration. The method splits the past rainfall events into two groups, a group of events causing landslides and its complement, and estimates their probabilistic distributions. Next, the probabilistic membership of a new event to one of the two clusters is estimated. The method does not assume any threshold model a priori, but simply exploits the real empirical distribution of rainfall events. The approach was applied in the Umbria region, Central Italy, where a catalogue of landslide timings was obtained by searching chronicles, blogs and other sources of information for the period 2002-2012. The approach was tested using rain gauge measurements and satellite rainfall estimates (NASA TRMM-v6), allowing in both cases the identification of the rainfall conditions triggering landslides in the region. Compared to other existing threshold definition methods, the proposed one (i) largely reduces the subjectivity in the choice of the threshold model and in how it is calculated, and (ii) can be more easily set up in other study areas. The proposed approach can be conveniently integrated into existing early-warning systems to improve the accuracy of the estimation of the real landslide occurrence probability associated with rainfall events and its uncertainty.
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared Spectrum (SMSP) jamming is an effective jamming technique against linear frequency modulation (LFM) radar. Based on the time-frequency distribution difference between the jamming and the echo, a jamming suppression method based on the generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Then, the time-frequency image and the related gray-scale image are obtained based on the GST. Finally, the Tsallis cross entropy is utilized to compute the optimized segmentation threshold, and the jamming suppression filter is constructed based on this threshold. The simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. Based on the evaluation of luminescence-inverted images and fractal curves (quadric Koch curve and Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
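A minimal sketch of Otsu's bilevel threshold selection on an 8-bit grayscale image histogram, maximizing the between-class variance. This mirrors the generic algorithm the study found most stable, not the authors' full image-processing pipeline.

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level (0-255) that maximizes between-class variance."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                           # probability of the "dark" class up to each level
    mu = np.cumsum(p * np.arange(256))             # cumulative mean up to each level
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))             # level with the largest between-class variance
```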
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
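A minimal sketch of the two-class Fisher discriminant with a threshold applied to the projected data; the optimal-threshold derivation, the multiclass generalization, and the leave-one-out error expressions in the report are not reproduced, and the midpoint rule below is only one simple threshold choice.

```python
import numpy as np

def fisher_direction(x1, x2):
    """Fisher projection direction for two classes given as (n_samples, n_features) arrays."""
    sw = np.cov(x1, rowvar=False) + np.cov(x2, rowvar=False)      # within-class scatter (up to scaling)
    return np.linalg.solve(sw, x1.mean(axis=0) - x2.mean(axis=0))

def midpoint_threshold(x1, x2, w):
    """A simple threshold: midpoint of the projected class means."""
    return 0.5 * (x1.mean(axis=0) @ w + x2.mean(axis=0) @ w)

def classify(x, w, threshold):
    """Assign class 1 or 2 by thresholding the projection onto Fisher's direction."""
    return np.where(x @ w > threshold, 1, 2)
```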
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Injury tolerance and moment response of the knee joint to combined valgus bending and shear loading.
Bose, Dipan; Bhalla, Kavi S; Untaroiu, Costin D; Ivarsson, B Johan; Crandall, Jeff R; Hurwitz, Shepard
2008-06-01
Valgus bending and shearing of the knee have been identified as primary mechanisms of injuries in a lateral loading environment applicable to pedestrian-car collisions. Previous studies have reported on the structural response of the knee joint to pure valgus bending and lateral shearing, as well as the estimated injury thresholds for the knee bending angle and shear displacement based on experimental tests. However, epidemiological studies indicate that most knee injuries are due to the combined effects of bending and shear loading. Therefore, characterization of knee stiffness for combined loading and the associated injury tolerances is necessary for developing vehicle countermeasures to mitigate pedestrian injuries. Isolated knee joint specimens (n=40) from postmortem human subjects were tested in valgus bending at a loading rate representative of a pedestrian-car impact. The effect of lateral shear force combined with the bending moment on the stiffness response and the injury tolerances of the knee was concurrently evaluated. In addition to the knee moment-angle response, the bending angle and shear displacement corresponding to the first instance of primary ligament failure were determined in each test. The failure displacements were subsequently used to estimate an injury threshold function based on a simplified analytical model of the knee. The validity of the determined injury threshold function was subsequently verified using a finite element model. Post-test necropsy of the knees indicated medial collateral ligament injury consistent with the clinical injuries observed in pedestrian victims. The moment-angle response in valgus bending was determined at quasistatic and dynamic loading rates and compared to previously published test data. The peak bending moment values scaled to an average adult male showed no significant change with variation in the superimposed shear load. An injury threshold function for the knee in terms of bending angle and shear displacement was determined by performing regression analysis on the experimental data. The threshold values of the bending angle (16.2 deg) and shear displacement (25.2 mm) estimated from the injury threshold function were in agreement with previously published knee injury threshold data. The continuous knee injury function expressed in terms of bending angle and shear displacement enabled injury prediction for combined loading conditions such as those observed in pedestrian-car collisions.
Spatial distribution of threshold wind speeds for dust outbreaks in northeast Asia
NASA Astrophysics Data System (ADS)
Kimura, Reiji; Shinoda, Masato
2010-01-01
Asian windblown dust events cause human and animal health effects and agricultural damage in dust source areas such as China and Mongolia and cause "yellow sand" events in Japan and Korea. It is desirable to develop an early warning system to help prevent such damage. We used our observations at a Mongolian station together with data from previous studies to model the spatial distribution of threshold wind speeds for dust events in northeast Asia (35°-45°N and 100°-115°E). Using a map of Normalized Difference Vegetation Index (NDVI), we estimated spatial distributions of vegetation cover, roughness length, threshold friction velocity, and threshold wind speed. We also recognized a relationship between NDVI in the dust season and maximum NDVI in the previous year. Thus, it may be possible to predict the threshold wind speed in the next dust season using the maximum NDVI in the previous year.
Vukovic, N; Radovanovic, J; Milanovic, V; Boiko, D L
2016-11-14
We have obtained a closed-form expression for the threshold of Risken-Nummedal-Graham-Haken (RNGH) multimode instability in a Fabry-Pérot (FP) cavity quantum cascade laser (QCL). This simple analytical expression is a versatile tool that can easily be applied in practical situations which require analysis of QCL dynamic behavior and estimation of its RNGH multimode instability threshold. Our model for a FP cavity laser accounts for the carrier coherence grating and carrier population grating as well as their relaxation due to carrier diffusion. In the model, the RNGH instability threshold is analyzed using a second-order bi-orthogonal perturbation theory and we confirm our analytical solution by a comparison with the numerical simulations. In particular, the model predicts a low RNGH instability threshold in QCLs. This agrees very well with experimental data available in the literature.
Use of LiDAR to define habitat thresholds for forest bird conservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.
2017-09-01
Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.
Higgs boson gluon–fusion production at threshold in N3LO QCD
Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...
2014-09-02
We present the cross-section for the threshold production of the Higgs boson at hadron-colliders at next-to-next-to-next-to-leading order (N3LO) in perturbative QCD. Furthermore, we present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N3LO which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Glenn, Nancy F.; Neuenschwander, Amy; Vierling, Lee A.; Spaete, Lucas; Li, Aihua; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan
2016-01-01
To estimate the potential synergies of OLI and ICESat-2, we used simulated ICESat-2 photon data to predict vegetation structure. In a shrubland environment with a vegetation mean height of 1 m and mean vegetation cover of 33%, vegetation photons are able to explain nearly 50% of the variance in vegetation height. These results, and those from a comparison site, suggest that a lower detection threshold of ICESat-2 may be in the range of 30% canopy cover and roughly 1 m height in comparable dryland environments, and these detection thresholds could be used to combine future ICESat-2 photon data with OLI spectral data for improved estimates of vegetation structure. Overall, the synergistic use of Landsat 8 and ICESat-2 may improve estimates of above-ground biomass and carbon storage in drylands that meet these minimum thresholds, increasing our ability to monitor drylands for fuel loading and the potential to sequester carbon.
Johnson, A P; Macgowan, R J; Eldridge, G D; Morrow, K M; Sosman, J; Zack, B; Margolis, A
2013-10-01
The objectives of this study were to: (a) estimate the costs of providing a single-session HIV prevention intervention and a multi-session intervention, and (b) estimate the number of HIV transmissions that would need to be prevented for the intervention to be cost-saving or cost-effective (threshold analysis). Project START was evaluated with 522 young men aged 18-29 years released from eight prisons located in California, Mississippi, Rhode Island, and Wisconsin. Cost data were collected prospectively. Costs per participant were $689 for the single-session comparison intervention, and ranged from $1,823 to $1,836 for the Project START multi-session intervention. From the incremental threshold analysis, the multi-session intervention would be cost-effective if it prevented one HIV transmission for every 753 participants compared to the single-session intervention. Costs are comparable with other HIV prevention programs. Program managers can use these data to gauge costs of initiating these HIV prevention programs in correctional facilities.
Fast simulation of packet loss rates in a shared buffer communications switch
NASA Technical Reports Server (NTRS)
Chang, Cheng-Shang; Heidelberger, Philip; Shahabuddin, Perwez
1993-01-01
This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as either being of high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing of videos relies on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional choice of the number of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method estimates the sparsity of video frames effectively and provides a practical basis for choosing the number of compressive observations. The results also show that, because the number of observations is chosen from the sparsity estimated under the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
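A rough sketch of the sparsity-estimation step as described (count the fraction of DWT coefficients needed to reach a cumulative energy threshold). The pywt package, the Haar wavelet, the 2-level decomposition, and the 99% threshold are illustrative assumptions; the paper's exact transform depth and threshold are not given here.

```python
import numpy as np
import pywt

def estimate_sparsity(frame, energy_threshold=0.99, wavelet="haar", level=2):
    """Fraction of 2D DWT coefficients needed to capture the given energy share."""
    coeffs = pywt.wavedec2(frame, wavelet=wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)            # all coefficients in one array
    energy = np.sort(arr.ravel() ** 2)[::-1]         # coefficient energies, descending
    energy /= energy.sum()                           # energy normalization
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    k = min(k, energy.size)
    return k / energy.size                           # proportion of dominant coefficients
```

The number of BCS measurements per block could then be set proportional to this estimated sparsity rather than fixed empirically, which is the adaptivity the abstract describes.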
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
Uribe-Leitz, Tarsicio; Esquivel, Micaela M; Molina, George; Lipsitz, Stuart R; Verguet, Stéphane; Rose, John; Bickler, Stephen W; Gawande, Atul A; Haynes, Alex B; Weiser, Thomas G
2015-09-01
We previously identified a range of 4344-5028 annual operations per 100,000 people to be related to desirable health outcomes. From this and other evidence, the Lancet Commission on Global Surgery recommends a minimum rate of 5000 operations per 100,000 people. We evaluate rates of growth and estimate the time it will take to reach this minimum surgical rate threshold. We aggregated country-level surgical rate estimates from 2004 to 2012 into the twenty-one Global Burden of Disease (GBD) regions. We calculated mean rates of surgery proportional to population size for each year and assessed the rate of growth over time. We then extrapolated the time it will take each region to reach a surgical rate of 5000 operations per 100,000 population based on linear rates of change. All but two regions experienced growth in their surgical rates during the past 8 years. Fourteen regions did not meet the recommended threshold in 2012. If surgical capacity continues to grow at current rates, seven regions will not meet the threshold by 2035. Eastern Sub-Saharan Africa will not reach the recommended threshold until 2124. The rates of growth in surgical service delivery are exceedingly variable. At current rates of surgical and population growth, 6.2 billion people (73% of the world's population) will be living in countries below the minimum recommended rate of surgical care in 2035. A strategy for strengthening surgical capacity is essential if these targets are to be met in a timely fashion as part of the integrated health system development.
Relationships between rainfall and Combined Sewer Overflow (CSO) occurrences
NASA Astrophysics Data System (ADS)
Mailhot, A.; Talbot, G.; Lavallée, B.
2015-04-01
Combined Sewer Overflow (CSO) has been recognized as a major environmental issue in many countries. In Canada, the proposed reinforcement of the CSO frequency regulations will result in new constraints on municipal development. Municipalities will have to demonstrate that new developments do not increase CSO frequency above a reference level based on historical CSO records. Governmental agencies will also have to define a framework to assess the impact of new developments on CSO frequency and the efficiency of the various proposed measures to maintain CSO frequency at its historic level. In such a context, it is important to correctly assess the average number of days with CSO and to define relationships between CSO frequency and rainfall characteristics. This paper investigates such relationships using available CSO and rainfall datasets for Quebec. CSO records for 4285 overflow structures (OS) were analyzed. A simple model based on rainfall thresholds was developed to forecast the occurrence of CSO on a given day based on daily rainfall values. The estimated probability of days with CSO has been used to estimate the rainfall threshold value at each OS by imposing that the probability of exceeding this rainfall value for a given day be equal to the estimated probability of days with CSO. The forecast skill of this model was assessed for 3437 OS using contingency tables. The statistical significance of the forecast skill could be assessed for 64.2% of these OS. The threshold model demonstrated significant forecast skill for 91.3% of these OS, confirming that for most OS a simple threshold model can be used to assess the occurrence of CSO.
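Read literally, that calibration amounts to taking the daily-rainfall quantile exceeded with the observed CSO-day frequency at each overflow structure. A minimal sketch under that reading (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rainfall_threshold(daily_rain_mm, p_cso_day):
    """Rainfall threshold such that P(daily rain > threshold) is approximately
    the observed probability of a CSO day at this overflow structure."""
    return np.quantile(daily_rain_mm, 1.0 - p_cso_day)

def forecast_cso(daily_rain_mm, threshold):
    """Forecast a CSO occurrence on days whose rainfall exceeds the threshold."""
    return daily_rain_mm > threshold
```

The forecast skill of such a rule can then be summarized, as in the study, with a contingency table of hits, misses, false alarms, and correct rejections.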
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that, for the first time, a spatially variable threshold is embedded in the modified outlier detector through a locally estimated noise variance within an iterative framework. It turns out that a spatially variable threshold is preferable to a single, spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. The method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection count and over-detection count. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in supplementary materials.
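A deliberately simplified sketch of the iterative, spatially variable threshold idea for one velocity component on a regular grid. Plain Gaussian smoothing stands in for Garcia's weighted adaptive smoother, a windowed median absolute deviation stands in for the modified detector's locally estimated noise scale, and the iteration count, window size, and factor k are illustrative assumptions rather than the paper's parameter-free procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def detect_outliers(u, n_iter=3, k=3.0, win=5):
    """Iterative outlier labeling with a locally estimated noise scale."""
    weights = np.ones_like(u, dtype=float)
    outliers = np.zeros(u.shape, dtype=bool)
    for _ in range(n_iter):
        # reference field: (weighted) smoothing of the currently trusted vectors
        ref = gaussian_filter(u * weights, 1.0) / np.maximum(
            gaussian_filter(weights, 1.0), 1e-9)
        resid = np.abs(u - ref)
        # local noise scale from the median absolute residual in a win x win window
        local_mad = generic_filter(resid, np.median, size=win)
        sigma = 1.4826 * np.maximum(local_mad, 1e-9)
        outliers = resid > k * sigma          # spatially variable threshold
        weights = np.where(outliers, 0.0, 1.0)
    return outliers
```

The key point the sketch preserves is that the threshold varies across the field with the local noise estimate, instead of applying one global residual cutoff.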
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2010-01-01
Combustion noise from turbofan engines has become important as the noise from sources like the fan and jet is reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help to separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics-based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation, this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics-based coherence threshold. Examples are presented showing that the underlying tonal and coherent broadband structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise.
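For reference, one widely used statistics-based coherence threshold takes the following form; whether this is the exact expression used in the paper is not stated here, and the choice of n_d (independent versus overlapped records) is precisely the issue the abstract raises.

```latex
% Significance threshold for the estimated magnitude-squared coherence of two
% incoherent signals, with n_d averaged (independent, non-overlapping) segments
% and confidence level P (e.g., P = 0.95). Illustrative standard result, not
% necessarily the paper's exact expression.
\gamma^2_{\mathrm{thr}} = 1 - (1 - P)^{1/(n_d - 1)}
```

Coherence estimates of genuinely incoherent sources fall below this value with probability P, so measured coherence above it is taken as evidence of a common (e.g., combustion noise) source.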
Rosen, Sophia; Davidov, Ori
2012-07-20
Multivariate outcomes are often measured longitudinally. For example, in hearing loss studies, hearing thresholds for each subject are measured repeatedly over time at several frequencies. Thus, each patient is associated with a multivariate longitudinal outcome. The multivariate mixed-effects model is a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, it is known that hearing thresholds, at every frequency, increase with age. Moreover, this age-related threshold elevation is monotone in frequency, that is, the higher the frequency, the higher, on average, is the rate of threshold elevation. This means that there is a natural ordering among the different frequencies in the rate of hearing loss. In practice, this amounts to imposing a set of constraints on the different frequencies' regression coefficients modeling the mean effect of time and age at entry to the study on hearing thresholds. The aforementioned constraints should be accounted for in the analysis. The result is a multivariate longitudinal model with restricted parameters. We propose estimation and testing procedures for such models. We show that ignoring the constraints may lead to misleading inferences regarding the direction and the magnitude of various effects. Moreover, simulations show that incorporating the constraints substantially improves the mean squared error of the estimates and the power of the tests. We used this methodology to analyze a real hearing loss study. Copyright © 2012 John Wiley & Sons, Ltd.
Goldwasser, Deborah L
2017-03-15
The National Lung Screening Trial (NLST) demonstrated that non-small cell lung cancer (NSCLC) mortality can be reduced by a program of annual CT screening in high-risk individuals. However, CT screening regimens and adherence vary, potentially impacting the lung cancer mortality benefit. We defined the NSCLC cure threshold as the maximum tumor size at which a given NSCLC would be curable due to early detection. We obtained data from 518,234 NSCLCs documented in the U.S. SEER cancer registry between 1988 and 2012 and 1769 NSCLCs detected in the NLST. We demonstrated mathematically that the distribution function governing the cure threshold for the most aggressive NSCLCs, G(x|Φ = 1), was embedded in the probability function governing detection of SEER-documented NSCLCs. We determined the resulting probability functions governing detection over a range of G(x|Φ = 1) scenarios and compared them with their expected functional forms. We constructed a simulation framework to determine the cure threshold models most consistent with tumor sizes and outcomes documented in SEER and the NLST. Whereas the median tumor size for lethal NSCLCs documented in SEER is 43 mm (males) and 40 mm (females), a simulation model in which the median cure threshold for the most aggressive NSCLCs is 10 mm (males) and 15 mm (females) best fit the SEER and NLST data. The majority of NSCLCs in the NLST were treated at sizes greater than our median cure threshold estimates. New technology is needed to better distinguish and treat the most aggressive NSCLCs when they are small (i.e., 5-15 mm). © 2016 UICC.
NASA Astrophysics Data System (ADS)
Borthakur, Tribeni; Sarma, Ranjit
2017-05-01
Top-contact pentacene-based organic thin-film transistors (OTFTs) with a thin layer of vanadium pentoxide (V2O5) between the pentacene and Au layers were fabricated. We found that devices with a V2O5/Au bi-layer source-drain electrode exhibit higher field-effect mobility, a higher on-off ratio, a lower threshold voltage, and a lower sub-threshold slope than devices with Au only. The field-effect mobility, current on-off ratio, threshold voltage, and sub-threshold slope of the bi-layer OTFT with a 15 nm thick V2O5 layer were estimated to be 0.77 cm2 V-1 s-1, 7.5×10^5, -2.9 V, and 0.36 V/decade, respectively.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Moreover, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
Development of a precipitation-area curve for warning criteria of short-duration flash flood
NASA Astrophysics Data System (ADS)
Bae, Deg-Hyo; Lee, Moon-Hwan; Moon, Sung-Keun
2018-01-01
This paper presents quantitative criteria for flash flood warning that can be used to rapidly assess flash flood occurrence based only on rainfall estimates. This study was conducted for 200 small mountainous sub-catchments of the Han River basin in South Korea because South Korea has recently suffered many flash flood events. The quantitative criteria are calculated based on flash flood guidance (FFG), which is defined as the depth of rainfall of a given duration required to cause frequent flooding (1-2-year return period) at the outlet of a small stream basin and is estimated using threshold runoff (TR) and antecedent soil moisture conditions in all sub-basins. The soil moisture conditions were estimated during the flooding season, i.e., July, August and September, over 7 years (2002-2009) using the Sejong University Rainfall Runoff (SURR) model. A ROC (receiver operating characteristic) analysis was used to obtain optimum rainfall values, and a generalized precipitation-area (P-A) curve was developed for flash flood warning thresholds. The threshold function was derived as a P-A curve because the precipitation threshold for a short duration is more closely related to basin area than to any other variable. In summary, the P-A curve suggests generalized flash flood warning thresholds of 42, 32 and 20 mm h-1 for sub-basins with areas of 22-40, 40-100 and > 100 km2, respectively. The proposed P-A curve was validated based on observed flash flood events in different sub-basins. Flash flood occurrences were captured for 9 out of 12 events. This result can be used instead of FFG to identify brief flash floods (less than 1 h), and it can provide warning information to decision-makers or citizens that is relatively simple, clear and immediate.
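The generalized warning thresholds quoted above reduce to a simple area-based lookup. A sketch follows; the function name and the handling of basins smaller than 22 km2 (for which the abstract gives no value) are assumptions.

```python
def flash_flood_threshold_mm_per_h(basin_area_km2):
    """Generalized precipitation-area (P-A) warning thresholds quoted in the abstract."""
    if basin_area_km2 > 100:
        return 20.0
    if basin_area_km2 >= 40:
        return 32.0
    if basin_area_km2 >= 22:
        return 42.0
    return None  # no threshold reported for basins smaller than 22 km2
```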
Lovvorn, James R.; De La Cruz, Susan; Takekawa, John Y.; Shaskey, Laura E.; Richman, Samantha E.
2013-01-01
Planning for marine conservation often requires estimates of the amount of habitat needed to support assemblages of interacting species. During winter in subtidal San Pablo Bay, California, the 3 main diving duck species are lesser scaup Aythya affinis (LESC), greater scaup A. marila (GRSC), and surf scoter Melanitta perspicillata (SUSC), which all feed almost entirely on the bivalve Corbula amurensis. Decreased body mass and fat, increased foraging effort, and major departures of these birds appeared to result from food limitation. Broad overlap in prey size, water depth, and location suggested that the 3 species responded similarly to availability of the same prey. However, an energetics model that accounts for differing body size, locomotor mode, and dive behavior indicated that each species will become limited at different stages of prey depletion in the order SUSC, then GRSC, then LESC. Depending on year, 35 to 66% of the energy in Corbula standing stocks was below estimated threshold densities for profitable foraging. Ectothermic predators, especially flounders and sturgeons, could reduce excess carrying capacity for different duck species by 4 to 10%. A substantial quantity of prey above profitability thresholds was not exploited before most ducks left San Pablo Bay. Such pre-depletion departure has been attributed in other taxa to foraging aggression. However, in these diving ducks that showed no overt aggression, this pattern may result from high costs of locating all adequate prey patches, resulting reliance on existing flocks to find food, and propensity to stay near dense flocks to avoid avian predation. For interacting species assemblages, modeling profitability thresholds can indicate the species most vulnerable to food declines. However, estimates of total habitat needed require better understanding of factors affecting the amount of prey above thresholds that is not depleted before the predators move elsewhere.
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set, spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of these observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series for each time interval over which the threshold is exceeded. The distribution of burst sizes is two-component with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has a functional form which is exponential. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables are changing between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently, how the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277. [2] Chapman, S. C., D. A. Stainforth, and N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. R. Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
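A sketch of the burst-size construction described above: a burst is a maximal run of samples above the threshold, and its size is the summed excess over the threshold during that run. The function name and the assumption of a uniformly sampled index series are illustrative.

```python
import numpy as np

def burst_sizes(ae_index, threshold):
    """Sizes of bursts in a thresholded time series: for each maximal run of
    samples exceeding the threshold, sum the excess over the threshold."""
    sizes, current = [], 0.0
    for x in ae_index:
        if x > threshold:
            current += x - threshold
        elif current > 0.0:
            sizes.append(current)
            current = 0.0
    if current > 0.0:          # close a burst that runs to the end of the record
        sizes.append(current)
    return np.array(sizes)
```

The empirical distribution of these sizes, computed separately for solar maximum and minimum intervals, is what the study compares across cycles.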
Smith, Jennifer L; Sturrock, Hugh J W; Olives, Casey; Solomon, Anthony W; Brooker, Simon J
2013-01-01
Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates the use of a simulated approach in addressing operational research questions for trachoma but also other NTDs.
A multi-threshold sampling method for TOF-PET signal processing
NASA Astrophysics Data System (ADS)
Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.
2009-04-01
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm3 LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ~18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ~9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ~300 ps coincidence timing resolution, ~14% energy resolution at 511 keV, and ~5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
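A software analogue of the sampling scheme, assuming an already digitized waveform: for each user-defined threshold it records the leading-edge and falling-edge crossing times, which is the information the discriminator/TDC chain provides in hardware. The linear interpolation between samples and all names are assumptions for illustration.

```python
import numpy as np

def multi_threshold_samples(t, v, thresholds):
    """For each threshold, return (leading, falling) crossing times of waveform v(t),
    using linear interpolation between adjacent samples."""
    samples = {}
    for thr in thresholds:
        lead, fall = [], []
        for i in range(len(v) - 1):
            if v[i] < thr <= v[i + 1]:      # leading edge crosses upward through thr
                frac = (thr - v[i]) / (v[i + 1] - v[i])
                lead.append(t[i] + frac * (t[i + 1] - t[i]))
            elif v[i] >= thr > v[i + 1]:    # falling edge crosses downward through thr
                frac = (v[i] - thr) / (v[i] - v[i + 1])
                fall.append(t[i] + frac * (t[i + 1] - t[i]))
        samples[thr] = (lead, fall)
    return samples
```

Event timing can be taken from the earliest low-threshold leading edge, while energy and the decay-time constant can be estimated by fitting a pulse model to the full set of crossing times, which is the spirit of the analysis reported above.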
Lin, Kuang-Wei; Kim, Yohan; Maxwell, Adam D.; Wang, Tzu-Yin; Hall, Timothy L.; Xu, Zhen; Fowlkes, J. Brian; Cain, Charles A.
2014-01-01
Histotripsy produces tissue fractionation through dense energetic bubble clouds generated by short, high-pressure, ultrasound pulses. Conventional histotripsy treatments have used longer pulses from 3 to 10 cycles wherein the lesion-producing bubble cloud generation depends on the pressure-release scattering of very high peak positive shock fronts from previously initiated, sparsely distributed bubbles (the “shock-scattering” mechanism). In our recent work, the peak negative pressure (P−) for generation of dense bubble clouds directly by a single negative half cycle, the “intrinsic threshold,” was measured. In this paper, the dense bubble clouds and resulting lesions (in RBC phantoms and canine tissues) generated by these supra-intrinsic threshold pulses were studied. A 32-element, PZT-8, 500 kHz therapy transducer was used to generate very short (< 2 cycles) histotripsy pulses at a pulse repetition frequency (PRF) of 1 Hz and P− from 24.5 to 80.7 MPa. The results showed that the spatial extent of the histotripsy-induced lesions increased as the applied P− increased, and the sizes of these lesions corresponded well to the estimates of the focal regions above the intrinsic cavitation threshold, at least in the lower pressure regime (P− = 26–35 MPa). The average sizes for the smallest reproducible lesions were approximately 0.9 × 1.7 mm (lateral × axial), significantly smaller than the −6dB beamwidth of the transducer (1.8 × 4.0 mm). These results suggest that, using the intrinsic threshold mechanism, well-confined and microscopic lesions can be precisely generated and their spatial extent can be estimated based on the fraction of the focal region exceeding the intrinsic cavitation threshold. Since the supra-threshold portion of the negative half cycle can be precisely controlled, lesions considerably less than a wavelength are easily produced, hence the term “microtripsy.” PMID:24474132
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
Patient cost-sharing, socioeconomic status, and children's health care utilization.
Nilsson, Anton; Paul, Alexander
2018-05-01
This paper estimates the effect of cost-sharing on the demand for children's and adolescents' use of medical care. We use a large population-wide registry dataset including detailed information on contacts with the health care system as well as family income. Two different estimation strategies are used: regression discontinuity design exploiting age thresholds above which fees are charged, and difference-in-differences models exploiting policy changes. We also estimate combined regression discontinuity difference-in-differences models that take into account discontinuities around age thresholds caused by factors other than cost-sharing. We find that when care is free of charge, individuals increase their number of doctor visits by 5-10%. Effects are similar in middle childhood and adolescence, and are driven by those from low-income families. The differences across income groups cannot be explained by other factors that correlate with income, such as maternal education. Copyright © 2018 Elsevier B.V. All rights reserved.
Evidence of absence (v2.0) software user guide
Dalthorp, Daniel; Huso, Manuela; Dail, David
2017-07-06
Evidence of Absence software (EoA) is a user-friendly software application for estimating bird and bat fatalities at wind farms and for designing search protocols. The software is particularly useful in addressing whether the number of fatalities is below a given threshold and what search parameters are needed to give assurance that thresholds were not exceeded. The software also includes tools (1) for estimating carcass persistence distributions and searcher efficiency parameters from field trials, (2) for projecting future mortality based on past monitoring data, and (3) for exploring the potential consequences of various choices in the design of long-term incidental take permits for protected species. The software was designed specifically for cases where tolerance for mortality is low and carcass counts are small or even 0, but the tools also may be used for mortality estimates when carcass counts are large.
Implications of Transaction Costs for Acquisition Program Cost Breaches
2013-06-01
scope of the work, communicating the basis on which the estimate is built, identifying the quality of the data, determining the level of risk, and...projects such as bases, schools, missile storage facilities, maintenance facilities, medical/dental clinics, libraries, and military family housing...was established as a threshold for measuring cost growth. This prevents a program from rebaselining to avoid a Nunn-McCurdy cost threshold breach.
On plant detection of intact tomato fruits using image analysis and machine learning methods.
Yamamoto, Kyosuke; Guo, Wei; Yoshioka, Yosuke; Ninomiya, Seishi
2014-07-09
Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
Numerosity but not texture-density discrimination correlates with math ability in children.
Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C
2016-08-01
Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-year-old) children in 3 tasks: numerosity of patterns of relatively sparse, segregatable items (24 dots); numerosity of very dense textured patterns (250 dots); and discrimination of direction of motion. Thresholds in all tasks improved with age, but at different rates, implying the action of different mechanisms: In particular, in young children, thresholds were lower for sparse than textured patterns (the opposite of adults), suggesting earlier maturation of numerosity mechanisms. Importantly, numerosity thresholds for sparse stimuli correlated strongly with math skills, even after controlling for the influence of age, gender and nonverbal IQ. However, neither motion-direction discrimination nor numerosity discrimination of texture patterns showed a significant correlation with math abilities. These results provide further evidence that numerosity and texture-density are perceived by independent neural mechanisms, which develop at different rates; and importantly, only numerosity mechanisms are related to math. As developmental dyscalculia is characterized by a profound deficit in discriminating numerosity, it is fundamental to understand the mechanism behind the discrimination. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A
2015-10-01
A key component for spray decision-making in IPM programmes is the establishment of economic injury levels (EILs) and economic thresholds (ETs). We aimed to establish an EIL for the redlegged earth mite (Halotydeus destructor Tucker) on canola. Complex interactions between mite numbers, feeding damage and plant recovery were found, highlighting the challenges in linking H. destructor numbers to yield. A guide of 10 mites plant-1 was established at the first-true-leaf stage; however, simple relationships were not evident at other crop development stages, making it difficult to establish reliable EILs based on mite number. Yield was, however, strongly associated with plant damage and plant densities, reflecting the impact of mite feeding damage and indicating a plant-based alternative for establishing thresholds for H. destructor. Drawing on data from multiple field trials, we show that plant densities below 30-40 plants m-2 could be used as a proxy for mite damage when reliable estimates of mite densities are not possible. This plant-based threshold provides a practical tool that avoids the difficulties of accurately estimating mite densities. The approach may be applicable to other situations where production conditions are unpredictable and interactions between pests and plant hosts are complex. © 2015 Society of Chemical Industry.
Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick
2017-04-28
Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
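A sketch of the recommended post-matching step, assuming pandas/statsmodels and illustrative column names ("treatment", "outcome"): compute the standardized mean difference for each covariate in the matched sample and enter only those exceeding 0.10 into the outcome regression alongside treatment. This is an illustration of the rule, not the authors' simulation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardized_difference(x_treated, x_control):
    """Absolute standardized mean difference between matched groups."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2.0)
    return abs(x_treated.mean() - x_control.mean()) / pooled_sd

def double_adjusted_effect(matched, covariates, threshold=0.10):
    """Logistic outcome model adjusted for covariates still imbalanced after matching."""
    treated = matched[matched["treatment"] == 1]
    control = matched[matched["treatment"] == 0]
    unbalanced = [c for c in covariates
                  if standardized_difference(treated[c], control[c]) > threshold]
    X = sm.add_constant(matched[["treatment"] + unbalanced])
    model = sm.Logit(matched["outcome"], X).fit(disp=False)
    return model.params["treatment"], unbalanced   # log-odds ratio and adjusted covariates
```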
Non-invasive indices for the estimation of the anaerobic threshold of oarsmen.
Erdogan, A; Cetin, C; Karatosun, H; Baydar, M L
2010-01-01
This study compared four common non-invasive indices with an invasive index for determining the anaerobic threshold (AT) in 22 adult male rowers using a Concept2 rowing ergometer. A criterion-standard progressive incremental test (invasive method) measured blood lactate concentrations to determine the 4 mmol/l threshold (La4-AT) and Dmax AT (Dm-AT). This was compared with three indices obtained by analysis of respiratory gases and one based on the heart rate (HR) deflection point (HRDP), all of which used the Conconi test (non-invasive methods). In the Conconi test, the HRDP was determined whilst continuously increasing the power output (PO) by 25 W/min and measuring respiratory gases and HR. The La4-AT and Dm-AT values differed slightly with respect to oxygen uptake, PO, and HR; however, the AT values correlated significantly with each other and with the four non-invasive methods. In conclusion, the non-invasive indices were comparable with the invasive index and could, therefore, be used in the assessment of AT during rowing ergometer use. In this population of elite rowers, the Conconi threshold (Con-AT), based on the measurement of HRDP, tended to be the most adequate way of estimating AT for training regulation purposes.
Kohli, Preeti; Storck, Kristina A.; Schlosser, Rodney J.
2016-01-01
Differences in testing modalities and cut-points used to define olfactory dysfunction contribute to the wide variability in estimating the prevalence of olfactory dysfunction in chronic rhinosinusitis (CRS). The aim of this study is to report the prevalence of olfactory impairment using each component of the Sniffin’ Sticks test (threshold, discrimination, identification, and total score) with age-adjusted and ideal cut-points from normative populations. Patients meeting diagnostic criteria for CRS were enrolled from rhinology clinics at a tertiary academic center. Olfaction was assessed using the Sniffin’ Sticks test. The study population consisted of 110 patients. The prevalence of normosmia, hyposmia, and anosmia using total Sniffin’ Sticks score was 41.8%, 20.0%, and 38.2% using age-appropriate cut-points and 20.9%, 40.9%, and 38.2% using ideal cut-points. Olfactory impairment estimates for each dimension mirrored these findings, with threshold yielding the highest values. Threshold, discrimination, and identification were also found to be significantly correlated to each other (P < 0.001). In addition, computed tomography scores, asthma, allergy, and diabetes were found to be associated with olfactory dysfunction. In conclusion, the prevalence of olfactory dysfunction is dependent upon olfactory dimension and if age-adjusted cut-points are used. The method of olfactory testing should be chosen based upon specific clinical and research goals. PMID:27469973
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
Li, Jing; Blakeley, Daniel; Smith?, Robert J.
2011-01-01
The basic reproductive ratio, R0, is one of the fundamental concepts in mathematical biology. It is a threshold parameter, intended to quantify the spread of disease by estimating the average number of secondary infections in a wholly susceptible population, giving an indication of the invasion strength of an epidemic: if R0 < 1, the disease dies out, whereas if R0 > 1, the disease persists. R0 has been widely used as a measure of disease strength to estimate the effectiveness of control measures and to form the backbone of disease-management policy. However, in almost every aspect that matters, R0 is flawed. Diseases can persist with R0 < 1, while diseases with R0 > 1 can die out. We show that the same model of malaria gives many different values of R0, depending on the method used, with the sole common property that they have a threshold at 1. We also survey estimated values of R0 for a variety of diseases, and examine some of the alternatives that have been proposed. If R0 is to be used, it must be accompanied by caveats about the method of calculation, underlying model assumptions and evidence that it is actually a threshold. Otherwise, the concept is meaningless. PMID:21860658
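As a minimal illustration of the threshold property only (not the malaria model analysed in the paper), a deterministic SIR model with transmission rate beta and recovery rate gamma has R0 = beta/gamma; with the assumed parameter values below, the epidemic fades when R0 < 1 and grows when R0 > 1.

```python
import numpy as np

def sir_peak_infected(beta, gamma, days=300, i0=1e-4, dt=0.1):
    """Integrate a simple SIR model with Euler steps; return the peak infected fraction."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + ds * dt, i + di * dt
        peak = max(peak, i)
    return peak

gamma = 0.1
for beta in (0.05, 0.2):   # R0 = 0.5 and 2.0 under the assumed rates
    print(f"R0 = {beta / gamma:.1f}, peak infected fraction = {sir_peak_infected(beta, gamma):.4f}")
```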
O'Mahony, James F; Coughlan, Diarmuid
2016-01-01
Ireland is one of the few countries worldwide to have an explicit cost-effectiveness threshold. In 2012, an agreement between government and the pharmaceutical industry that provided substantial savings on existing medications set the threshold at €45,000/quality-adjusted life-year (QALY). This replaced a previously unofficial threshold of €20,000/QALY. According to the agreement, drugs within the threshold will be granted reimbursement, whereas those exceeding it may still be approved following further negotiation. A number of drugs far exceeding the threshold have been approved recently. The agreement only applies to pharmaceuticals. There are four reasons for concern regarding Ireland's threshold. The absence of an explicit threshold for non-drug interventions leaves it unclear whether there is parity in willingness to pay across all interventions. As the threshold resembles a price floor rather than a ceiling, in principle it only offers a weak barrier to cost-ineffective interventions. It has no empirical basis. Finally, it is probably too high given recent estimates of a threshold for the UK, based on the cost effectiveness of services forgone, of approximately £13,000/QALY. An excessive threshold risks causing the Irish health system unintended harm. The lack of an empirically informed threshold means the policy recommendations of cost-effectiveness analysis cannot be considered fully evidence-based rational rationing. Policy makers should consider these issues and recent Irish legislation that defined cost effectiveness in terms of the opportunity cost of services forgone when choosing what threshold to apply once the current industry agreement expires at the end of 2015.
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the question of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value is a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of the monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their assessment of the health of the general economy.
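A stylized, fixed-threshold version of such a regime-dependent rule can be written down directly; the coefficient values and threshold below are illustrative assumptions, not estimates from the paper (which treats the threshold as a latent AR process).

```python
def taylor_rate(inflation_gap, output_gap, unemployment, u_threshold,
                r_star=2.0, pi_coeffs=(1.5, 0.8), y_coeffs=(0.5, 1.0)):
    """Threshold Taylor rule: coefficients switch with the regime implied by
    unemployment relative to a threshold (expansion vs. recession).

    pi_coeffs / y_coeffs hold (expansion, recession) responses; the recession
    regime responds less to inflation and more to the output gap, mirroring the
    paper's qualitative finding. All numbers here are illustrative only.
    """
    recession = unemployment > u_threshold
    a_pi = pi_coeffs[1] if recession else pi_coeffs[0]
    a_y = y_coeffs[1] if recession else y_coeffs[0]
    return r_star + a_pi * inflation_gap + a_y * output_gap

print(taylor_rate(inflation_gap=1.0, output_gap=-2.0, unemployment=8.0, u_threshold=6.0))  # recession regime
print(taylor_rate(inflation_gap=1.0, output_gap=-2.0, unemployment=4.5, u_threshold=6.0))  # expansion regime
```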
NASA Astrophysics Data System (ADS)
Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.
2014-09-01
Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g. the 1000-year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with the current statistical approaches could be underestimated. The estimation of extreme surges then requires the use of a statistical analysis approach with a more solid theoretical motivation. This paper deals with extreme surge frequency estimation using historical information (HI) about events that occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in this paper have been quite successful in the fields of hydrometeorology and river flooding, but they have not been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the Peaks-Over-Threshold method with HI (POTH) and the Block Maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical Historical Maxima (HMax) data and Over a Threshold Supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as a complement to the main systematic data. On the other hand, in the BMH method, the basic hypothesis in the statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that, during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. The two frequency models were applied to a case study from France, at the La Rochelle site where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and especially to analyze the impact of the use of HI on the extreme surge frequency estimation.
NASA Astrophysics Data System (ADS)
Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.
2015-07-01
Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear-plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g., the 1000-year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with the current statistical approaches could be underestimated. The estimation of extreme surges then requires the use of a statistical analysis approach with a more solid theoretical motivation. This paper deals with extreme-surge frequency estimation using historical information (HI) about events that occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in this paper have been quite successful in the fields of hydrometeorology and river flooding, but they have not been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the peaks-over-threshold method with HI (POTH) and the block maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical historical maxima (HMax) data and over-a-threshold supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as a complement to the main systematic data. On the other hand, in the BMH method, the basic hypothesis in the statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that, during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. The two frequency models were applied to a case study from France, at the La Rochelle site where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and especially to analyze the impact of the use of HI on the extreme-surge frequency estimation.
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating a simple Pearson correlation of the breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale ranging from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability scale were transformed to the observed scale (0, 1), they were generally consistent between the threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The genetic correlation estimates of body and carcass traits with the other deformity traits were not significant due to their relatively high standard errors. Our results show that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index of practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
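The liability-to-observed-scale conversion referred to above is commonly done with the Dempster and Lerner (1950) formula; a minimal sketch follows, with an assumed incidence and liability-scale heritability rather than values from this study.

```python
from scipy.stats import norm

def liability_to_observed_h2(h2_liab, incidence):
    """Dempster-Lerner conversion of heritability from the liability scale
    to the observed (0/1) scale for a binary trait.

    h2_obs = h2_liab * z**2 / (p * (1 - p)), where p is the trait incidence
    and z is the standard normal density at the threshold Phi^-1(1 - p).
    """
    p = incidence
    z = norm.pdf(norm.ppf(1.0 - p))
    return h2_liab * z ** 2 / (p * (1.0 - p))

# Illustrative values only (not those of the kingfish study):
print(round(liability_to_observed_h2(h2_liab=0.40, incidence=0.10), 3))
```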
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
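One way to implement an objective threshold definition of the kind described above is to sweep candidate intensity thresholds and keep the one that maximizes a skill score balancing false positives against false negatives. The sketch below uses the true skill statistic on hypothetical data; the paper's exact objective function and dataset may differ.

```python
import numpy as np

def objective_threshold(intensity, debris_flow, candidates=None):
    """Pick the rainfall-intensity threshold maximizing TSS = TPR - FPR.

    intensity   : peak (or triggering) rainfall intensity for each storm
    debris_flow : boolean array, True where a debris flow occurred
    """
    intensity = np.asarray(intensity, dtype=float)
    debris_flow = np.asarray(debris_flow, dtype=bool)
    if candidates is None:
        candidates = np.unique(intensity)
    best_t, best_tss = None, -np.inf
    for t in candidates:
        predicted = intensity >= t
        tp = np.sum(predicted & debris_flow)
        fn = np.sum(~predicted & debris_flow)
        fp = np.sum(predicted & ~debris_flow)
        tn = np.sum(~predicted & ~debris_flow)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        if tpr - fpr > best_tss:
            best_t, best_tss = t, tpr - fpr
    return best_t, best_tss

# Hypothetical 15-min intensities (mm/h) and debris-flow outcomes:
i15 = [4, 6, 9, 12, 15, 18, 22, 30]
flow = [0, 0, 0, 1, 0, 1, 1, 1]
print(objective_threshold(i15, flow))
```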
Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.
2014-01-01
Background: Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective: We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods: We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results: We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions: The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation: French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity. Environ Health Perspect 123:237–245; http://dx.doi.org/10.1289/ehp.1408202 PMID:25376053
Forutan, M; Ansari Mahyari, S; Sargolzaei, M
2015-02-01
Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04%, 0.62 to 3.51% and 0.50 to 4.24% for the linear sire model, linear animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with the linear and threshold sire models and with the linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variation of the survival traits was small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
Dual-Process Theory and Signal-Detection Theory of Recognition Memory
ERIC Educational Resources Information Center
Wixted, John T.
2007-01-01
Two influential models of recognition memory, the unequal-variance signal-detection model and a dual-process threshold/detection model, accurately describe the receiver operating characteristic, but only the latter model can provide estimates of recollection and familiarity. Such estimates often accord with those provided by the remember-know…
Turbidity-controlled sampling for suspended sediment load estimation
Jack Lewis
2003-01-01
Automated data collection is essential to effectively measure suspended sediment loads in storm events, particularly in small basins. Continuous turbidity measurements can be used, along with discharge, in an automated system that makes real-time sampling decisions to facilitate sediment load estimation. The Turbidity Threshold Sampling method distributes...
Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas
Asquith, William H.; Roussel, Meghan C.
2003-01-01
A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 to -60 percent for the 0.05- and 2.0-inch thresholds, respectively. Because of the substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
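Under the Poisson assumption, the mean interoccurrence interval for a given depth threshold is simply the record length divided by the number of days equaling or exceeding that threshold. The sketch below applies this to a synthetic daily record; the atlas itself works from 1,306 gage records, not simulated data.

```python
import numpy as np

def mean_interoccurrence_intervals(daily_precip, thresholds):
    """Mean interoccurrence interval (days per event) for each depth threshold,
    assuming events follow a Poisson process so the mean interval is
    record length / number of exceedance days."""
    daily_precip = np.asarray(daily_precip, dtype=float)
    n_days = daily_precip.size
    out = {}
    for t in thresholds:
        n_events = np.sum(daily_precip >= t)
        out[t] = n_days / n_events if n_events else np.inf
    return out

rng = np.random.default_rng(0)
# Synthetic 30-year daily record: mostly dry days, exponential wet-day depths (inches).
precip = np.where(rng.random(30 * 365) < 0.25, rng.exponential(0.4, 30 * 365), 0.0)
print(mean_interoccurrence_intervals(precip, thresholds=[0.05, 0.5, 1.0, 2.0]))
```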
Control of growth of juvenile leaves of Eucalyptus globulus: effects of leaf age.
Metcalfe, J C; Davies, W J; Pereira, J S
1991-12-01
Biophysical variables influencing the expansion of plant cells (yield threshold, cell wall extensibility and turgor) were measured in individual Eucalyptus globulus leaves from the time of emergence until cessation of growth. Leaf water relations variables and growth rates were determined as relative humidity was changed on an hourly basis. Yield threshold and cell wall extensibility were estimated from plots of leaf growth rate versus turgor. Cell wall extensibility was also measured by the Instron technique, and yield threshold was determined experimentally both by stress relaxation in a psychrometer chamber and by incubation in a range of polyethylene glycol solutions. Once emerging leaves reached approximately 5 cm(2) in size, increases in leaf area were rapid throughout the expansive phase and varied little between light and dark periods. Both leaf growth rate and turgor were sensitive to changes in humidity, and in the longer term, both yield threshold and cell wall extensibility changed as the leaf aged. Rapidly expanding leaves had a very low yield threshold and high cell wall extensibility, whereas mature leaves had low cell wall extensibility. Yield threshold increased with leaf age.
Wavelet methodology to improve single unit isolation in primary motor cortex cells.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2015-05-15
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.
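A minimal denoising sketch in the spirit of this pipeline, using the PyWavelets package with a Daubechies-4 decomposition, a fixed-form (universal) threshold, and a selectable soft/hard rule; it is illustrative only and omits the PCA-based clustering stage and the paper's evaluation metrics.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4, rule="soft"):
    """Denoise an extracellular trace by wavelet coefficient thresholding.

    Uses the fixed-form (universal) threshold sigma*sqrt(2*log(n)), with the
    noise level sigma estimated from the median absolute deviation of the
    finest-scale detail coefficients.
    """
    signal = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(signal.size))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode=rule) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: signal.size]

# Toy trace: a few spike-like transients embedded in Gaussian noise.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 4096)
trace[::512] += 8.0
print(wavelet_denoise(trace, rule="hard").shape)
```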
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
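A rough numerical sketch of the turning-point idea: differentiate the membrane potential three times and, within a window preceding each spike peak, take the sample where the third derivative is largest as the turning point. Details such as filtering, the reported constant offset correction, and robust peak detection are omitted, and the sampling rate and window length below are assumptions.

```python
import numpy as np

def ap_thresholds_from_third_derivative(vm, fs, peak_height=0.0, window_ms=3.0):
    """Estimate AP threshold (turning-point) voltages from a membrane-potential trace.

    For each detected action-potential peak, search the preceding window for the
    sample where the third temporal derivative of Vm is maximal and report Vm there.
    vm : membrane potential (mV); fs : sampling rate (Hz).
    """
    vm = np.asarray(vm, dtype=float)
    d3 = np.gradient(np.gradient(np.gradient(vm))) * fs ** 3
    win = int(window_ms * 1e-3 * fs)
    # Crude peak detection: local maxima above peak_height.
    peaks = [i for i in range(1, vm.size - 1)
             if vm[i] > peak_height and vm[i] >= vm[i - 1] and vm[i] > vm[i + 1]]
    thresholds = []
    for p in peaks:
        start = max(0, p - win)
        turn = start + int(np.argmax(d3[start:p])) if p > start else p
        thresholds.append(vm[turn])
    return np.array(thresholds)

# Example (synthetic trace): print(ap_thresholds_from_third_derivative(vm, fs=20000))
```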
NASA Astrophysics Data System (ADS)
Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.
2012-03-01
Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within the CCSM3 simulations, the 1°C threshold predicted a later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
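A simplified sketch of the DHM-style accumulation: sum the monthly SST exceedances above the climatological maximum plus a chosen threshold over a rolling window. The 4-month window and the example values are assumptions for illustration; they are not the configuration used in the study.

```python
import numpy as np

def degree_heating_months(monthly_sst, mmm, threshold=1.0, window=4):
    """Accumulate a Degree-Heating-Month-like index: the rolling-window sum of
    monthly SST exceedances above (climatological maximum + threshold).

    monthly_sst : monthly mean SST series (deg C)
    mmm         : maximum monthly mean of the climatology (deg C)
    threshold   : 1.0 deg C by convention, or e.g. 2*sigma of annual maxima
    window      : accumulation window in months (assumed here to be 4)
    """
    excess = np.clip(np.asarray(monthly_sst, dtype=float) - (mmm + threshold), 0.0, None)
    return np.array([excess[max(0, i - window + 1): i + 1].sum()
                     for i in range(excess.size)])

sst = [28.1, 28.6, 29.4, 30.2, 30.5, 29.8, 29.0]   # hypothetical reef-cell series
print(degree_heating_months(sst, mmm=29.0))
```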
NASA Astrophysics Data System (ADS)
Svobodová, Eva; Trnka, Miroslav; Kopp, Radovan; Mareš, Jan; Dubrovský, Martin; Spurný, Petr; Žalud, Zděněk
2015-04-01
Freshwater fish production is significantly correlated with water temperature, which is expected to increase under climate change. This study deals with estimating the change in water temperature in production ponds and its impact on the fishery in the Czech Republic. A calculation of surface-water temperature based on the three-day mean of air temperature was developed and tested in several ponds in three main fish production areas. The output of the surface-water temperature model was compared with measured data and showed that the lower limit of model accuracy is a surface-water temperature of 3°C; below this threshold the model loses its predictive competence. For surface-water temperatures above 3°C, the model showed good agreement between observed and modelled values (R = 0.79-0.96). The verified model was applied under climate change conditions determined by the pattern-scaling method, in which standardised scenarios were derived from five global circulation models: MPEH5, CSMK3, IPCM4, GFCM21 and HADGEM. Results were evaluated with regard to thresholds that characterise the water temperature requirements of the fish species. The thresholds used involved the upper temperature limit for fish survival and the tolerable number of days in a continuous period above that surface-water temperature threshold. Target fish species were Common carp (Cyprinus carpio), Maraene whitefish (Coregonus maraena), Northern whitefish (Coregonus peled) and Rainbow trout (Oncorhynchus mykiss). Results indicated limitations for Czech fish farming in terms of (i) the increase in the length of continuous periods with surface-water temperature above the threshold tolerated by a given fish species, (ii) the increase in the number of such continuous periods, and (iii) the increase in the overall number of days within continuous periods with temperature above the threshold tolerated by a given fish species. ACKNOWLEDGEMENTS: This study was funded by project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a builtin routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
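The coarse detection step above relies on a ratio test whose null distribution can be evaluated with the regularized incomplete beta function available in SciPy. The sketch below assumes the standard multi-look intensity model, in which the ratio of the two sample means follows an F distribution under the no-change hypothesis; it is an illustration of that idea, not the paper's exact formulation or parameter estimation (MoLC) step.

```python
from scipy.special import betainc

def change_probability(mean_pre, mean_post, looks_pre, looks_post):
    """Two-sided significance of the intensity ratio between two SAR acquisitions.

    Under the no-change hypothesis the ratio of sample means (each an average of
    L independent exponential looks) follows an F(2*L_post, 2*L_pre) distribution,
    whose CDF is the regularized incomplete beta I_x(L_post, L_pre) with
    x = L_post*r / (L_post*r + L_pre).
    """
    r = mean_post / mean_pre
    x = looks_post * r / (looks_post * r + looks_pre)
    cdf = betainc(looks_post, looks_pre, x)
    return 2.0 * min(cdf, 1.0 - cdf)   # small value -> likely change

# Hypothetical pixel neighbourhoods (linear intensity), 16 looks each:
print(change_probability(mean_pre=0.08, mean_post=0.25, looks_pre=16, looks_post=16))
```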
Wind scatterometry with improved ambiguity selection and rain modeling
NASA Astrophysics Data System (ADS)
Draper, David Willis
Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.
Sri Lankan FRAX model and country-specific intervention thresholds.
Lekamwasam, Sarath
2013-01-01
There is wide variation in the fracture probabilities estimated by Asian FRAX models, although the outputs of the South Asian models are concordant. Clinicians can choose either fixed or age-specific intervention thresholds when making treatment decisions in postmenopausal women. The cost-effectiveness of such an approach, however, needs to be addressed. This study examined suitable fracture probability intervention thresholds (ITs) for Sri Lanka, based on the Sri Lankan FRAX model. Fracture probabilities were estimated using all Asian FRAX models for a postmenopausal woman with a BMI of 25 kg/m² and no clinical risk factors apart from a fragility fracture, and the estimates were compared. Age-specific ITs were estimated based on the Sri Lankan FRAX model using the method followed by the National Osteoporosis Guideline Group in the UK. Using the age-specific ITs as the reference standard, suitable fixed ITs were also estimated. Fracture probabilities estimated by the different Asian FRAX models varied widely. The Japanese and Taiwanese models gave higher fracture probabilities, while the Chinese, Philippine, and Indonesian models gave lower fracture probabilities. Outputs of the remaining FRAX models were generally similar. Age-specific ITs for major osteoporotic fracture probabilities (MOFP) based on the Sri Lankan FRAX model varied from 2.6 to 18% between 50 and 90 years. ITs for hip fracture probabilities (HFP) varied from 0.4 to 6.5% between 50 and 90 years. In finding fixed ITs, a MOFP of 11% and an HFP of 3.5% gave the lowest misclassification and highest agreement. The Sri Lankan FRAX model behaves similarly to other Asian FRAX models such as the Indian, Singapore-Indian, Thai, and South Korean models. Clinicians may use either the fixed or age-specific ITs in making therapeutic decisions in postmenopausal women. The economic aspects of such decisions, however, need to be considered.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months capturing this potential power is severely impacted by the meteorological conditions, in the form of icing. Predicting the expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, determination of wind turbine site locations and long-term energy estimates which are used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.
2008-08-01
... a sample with clutter of mean level y0 and noise of variance σ², with a cell-averaging CFAR threshold t_CA = β z_CA. Using the results presented in [15, 16, 23] together with (3.107) and (3.98), the expression for the expected Pd of a Swerling 2 target can ...
Maximum Langmuir Fields in Planetary Foreshocks Determined from the Electrostatic Decay Threshold
NASA Technical Reports Server (NTRS)
Robinson, P. A.; Cairns, Iver H.
1995-01-01
Maximum electric fields of Langmuir waves at planetary foreshocks are estimated from the threshold for electrostatic decay, assuming it saturates beam-driven growth, and incorporating heliospheric variation of plasma density and temperature. Comparison with spacecraft observations yields good quantitative agreement. Observations in type 3 radio sources are also in accord with this interpretation. A single mechanism can thus account for the highest fields of beam-driven waves in both contexts.
Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann
2013-01-01
Background: There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2–5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS–Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2–5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4 percentage (CD4%) <25%. Methods and Findings: ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm3 (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1%–6.5%) (no ART) to 2.1% (95% CI: 1.3%–3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%–3.5%) and 2.2% (95% CI: 1.4%–3.5%) after 3 y, respectively. The analysis was limited by loss to follow-up and the unavailability of WHO staging data. Conclusions: The results indicate no mortality difference for up to 3 y between ART initiation irrespective of CD4 value and ART initiation at a threshold of CD4 count <750 cells/mm3 or CD4% <25%, but there are overall higher point estimates for mortality when ART is initiated at lower CD4 values. PMID:24260029
QUEST+: A general multidimensional Bayesian adaptive psychometric method.
Watson, Andrew B
2017-03-01
QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
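The core Bayesian update behind QUEST-style procedures fits in a few lines. The sketch below estimates a single threshold parameter of an assumed 2AFC Weibull psychometric function on a discrete grid and places each trial at the current posterior mean; QUEST+ itself generalizes this to multiple stimulus and parameter dimensions and to entropy-based stimulus selection, and all parameter values here are illustrative.

```python
import numpy as np

def weibull_p_correct(intensity, threshold, slope=3.5, guess=0.5, lapse=0.02):
    """Probability of a correct response for a 2AFC Weibull psychometric function
    defined on a log-intensity axis."""
    p = 1.0 - np.exp(-10.0 ** (slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

def quest_like_run(true_threshold=-1.0, n_trials=40, seed=0):
    """Grid-based Bayesian adaptive threshold estimation (a QUEST-style sketch)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-3.0, 0.5, 201)          # candidate log-intensity thresholds
    posterior = np.ones_like(grid) / grid.size  # flat prior
    for _ in range(n_trials):
        stim = np.sum(grid * posterior)         # place the next trial at the posterior mean
        correct = rng.random() < weibull_p_correct(stim, true_threshold)
        like = weibull_p_correct(stim, grid)    # likelihood over the threshold grid
        posterior *= like if correct else (1.0 - like)
        posterior /= posterior.sum()
    return np.sum(grid * posterior)

print(round(quest_like_run(), 2))   # estimate after 40 simulated trials (illustrative)
```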
Volume estimation of brain abnormalities in MRI data
NASA Astrophysics Data System (ADS)
Suprijadi, Pratama, S. H.; Haryanto, F.
2014-02-01
Abnormalities of brain tissue are a crucial issue in the medical field. Such conditions can be recognized through segmentation of specific regions in medical images obtained from MRI datasets. Image processing is a computational method that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke regions. Two thresholding methods were employed to segment the abnormalities, followed by filtering to reduce non-abnormal areas. Each MRI image was labeled and then used for volume estimation of the tumor and stroke-affected areas. The algorithms were shown to be successful in isolating tumors and strokes in MRI images, based on the thresholding parameter and the stated detection accuracy.
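A minimal sketch of the threshold-and-count volume estimate: segment voxels within an assumed intensity window, apply a crude size filter, and convert the voxel count to a volume using the voxel dimensions. The intensity window, voxel size, and synthetic data below are hypothetical; the study's actual thresholding and filtering steps may differ.

```python
import numpy as np

def abnormal_volume(volume, lower, upper, voxel_size_mm=(1.0, 1.0, 1.0), min_voxels=50):
    """Estimate the volume of an abnormality with simple intensity thresholding.

    volume        : 3-D MRI intensity array
    lower, upper  : intensity window assumed to isolate the abnormal tissue
    voxel_size_mm : voxel dimensions; their product converts counts to mm^3
    min_voxels    : crude global count filter to suppress spurious detections
                    (a real pipeline would label and filter connected regions)
    """
    mask = (volume >= lower) & (volume <= upper)
    n = int(mask.sum())
    if n < min_voxels:
        return 0.0
    return n * float(np.prod(voxel_size_mm))

# Hypothetical 64^3 scan with a bright 10x10x10 lesion:
scan = np.random.default_rng(2).normal(100, 10, (64, 64, 64))
scan[20:30, 20:30, 20:30] += 120
print(abnormal_volume(scan, lower=180, upper=400, voxel_size_mm=(0.9, 0.9, 3.0)))
```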
Jones, Timothy A; Lee, Choongheon; Gaines, G Christopher; Grant, J W Wally
2015-04-01
Vestibular macular sensors are activated by a shearing motion between the otoconial membrane and the underlying receptor epithelium. Shearing motion and sensory activation in response to an externally induced head motion do not occur instantaneously. The mechanically reactive elastic and inertial properties of the intervening tissue introduce temporal constraints on the transfer of the stimulus to sensors. Treating the otoconial sensory apparatus as an overdamped second-order mechanical system, we measured the governing long time constant (T_L) for stimulus transfer from the head surface to the epithelium. This provided the basis to estimate the corresponding upper cutoff of the frequency response curve for mouse otoconial organs. A velocity step excitation was used as the forcing function. Hypothetically, the onset of the mechanical response to a step excitation follows an exponential rise of the form Vel_shear = U(1 - e^(-t/T_L)), where U is the applied shearing velocity step amplitude. The response time of the otoconial apparatus was estimated based on the activation threshold of macular neural responses to step stimuli having durations between 0.1 and 2.0 ms. Twenty adult C57BL/6J mice were evaluated. Animals were anesthetized. The head was secured to a shaker platform using a non-invasive head clip or implanted skull screws. The shaker was driven to produce a theoretical forcing step velocity excitation at the otoconial organ. Vestibular sensory evoked potentials (VsEPs) were recorded to measure the threshold for macular neural activation. The duration of the applied step motion was reduced systematically from 2 to 0.1 ms and the response threshold determined for each duration (nine durations). Hypothetically, the threshold of activation will increase according to the decrease in velocity transfer occurring at shorter step durations. The relationship between neural threshold and stimulus step duration was characterized. Activation threshold increased exponentially as velocity step duration decreased below 1.0 ms. The time constants associated with the exponential curve were T_L = 0.50 ms for the head-clip coupling and T_L = 0.79 ms for the skull-screw preparation. These corresponded to upper -3 dB frequency cutoff points of approximately 318 and 201 Hz, respectively. T_L ranged from 224 to 379 across individual animals using the head-clip coupling. The findings were consistent with a second-order mass-spring mechanical system. Threshold data were also fitted to underdamped models post hoc. The underdamped fits suggested natural resonance frequencies on the order of 278 to 448 Hz, as well as the idea that macular systems in mammals are less damped than generally acknowledged. Although estimated indirectly, it is argued that these time constants reflect largely if not entirely the mechanics of transfer to the sensory apparatus. The estimated governing time constant of 0.50 ms for the composite data predicts high-frequency cutoffs of at least 318 Hz for the intact otoconial apparatus of the mouse.
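The reported corner frequencies follow directly from the first-order low-pass relation f_c = 1/(2*pi*T_L); the short sketch below reproduces the two quoted values from the measured time constants.

```python
import math

def cutoff_hz(time_constant_s):
    """-3 dB corner frequency of a first-order low-pass with time constant T_L."""
    return 1.0 / (2.0 * math.pi * time_constant_s)

print(round(cutoff_hz(0.50e-3)))   # ~318 Hz (head-clip coupling)
print(round(cutoff_hz(0.79e-3)))   # ~201 Hz (skull-screw coupling)
```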
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1 ÷ 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2÷12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under the Sardinian Regional Law 7/2007 (funding call 2013).
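A typical graphical diagnostic of the kind discussed above fits a generalized Pareto distribution to the excesses over a range of candidate thresholds and inspects the stability of the shape parameter. The sketch below uses SciPy on a synthetic daily-rainfall series; the study itself applies such diagnostics to 1714 NOAA-NCDC records, and the minimum-sample cutoff here is an arbitrary assumption.

```python
import numpy as np
from scipy.stats import genpareto

def shape_vs_threshold(daily_rain, thresholds, min_excesses=50):
    """Fit a GP distribution to excesses above each candidate threshold
    (location fixed at zero) and return shape, scale, and sample size."""
    out = {}
    for u in thresholds:
        excess = daily_rain[daily_rain > u] - u
        if excess.size < min_excesses:
            continue
        c, _, scale = genpareto.fit(excess, floc=0.0)
        out[u] = (c, scale, excess.size)
    return out

rng = np.random.default_rng(3)
wet = rng.random(40 * 365) < 0.3
rain = np.where(wet, rng.gamma(shape=0.8, scale=8.0, size=40 * 365), 0.0)  # mm/day
for u, (c, s, n) in shape_vs_threshold(rain, thresholds=[2, 5, 10, 20]).items():
    print(f"u = {u:>4} mm/d: shape = {c:+.2f}, scale = {s:.1f}, n = {n}")
```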
NASA Astrophysics Data System (ADS)
Mahmoudian, A.; Scales, W. A.; Bernhardt, P. A.; Fu, H.; Briczinski, S. J.; McCarrick, M. J.
2013-11-01
Stimulated Electromagnetic Emissions (SEEs), secondary electromagnetic waves excited by high power electromagnetic waves transmitted into the ionosphere, produced by the Magnetized Stimulated Brillouin Scatter (MSBS) process are investigated. Data from four recent research campaigns at the High Frequency Active Auroral Research Program (HAARP) facility is presented in this work. These experiments have provided additional quantitative interpretation of the SEE spectrum produced by MSBS to yield diagnostic measurements of the electron temperature and ion composition in the heated ionosphere. SEE spectral emission lines corresponding to ion acoustic (IA) and electrostatic ion cyclotron (EIC) mode excitation were observed with a shift in frequency up to a few tens of Hz from the pump frequency for heating near the third harmonic of the electron gyrofrequency 3fce. The threshold of each emission line has been measured by changing the pump wave power. The excitation threshold of IA and EIC emission lines originating at the reflection and upper hybrid altitudes is measured for various beam angles relative to the magnetic field. Variation of strength of MSBS emission lines with pump frequency relative to 3fce and 4fce is also studied. A full wave solution has been used to estimate the amplitude of the electric field at the interaction altitude. The estimated instability threshold using the theoretical model is compared with the threshold of MSBS lines in the experiment and possible diagnostic information for the background ionospheric plasma is discussed. Simultaneous formation of artificial field-aligned irregularities (FAIs) and suppression of the MSBS process is investigated. This technique can be used to estimate the growth time of artificial FAIs which may result in determination of plasma waves and physical process involved in the formation of FAIs.
Perceptual precision of passive body tilt is consistent with statistically optimal cue integration
Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.
2017-01-01
When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
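The maximum-likelihood benchmark against which the measured roll-tilt thresholds were compared reduces to the familiar inverse-variance combination of independent cues; a minimal sketch with assumed (not measured) single-cue thresholds follows.

```python
import math

def ml_combined_threshold(sigma_a, sigma_b):
    """Threshold predicted when two independent noisy cues are combined by a
    maximum-likelihood estimator: inverse variances add."""
    return math.sqrt(1.0 / (1.0 / sigma_a ** 2 + 1.0 / sigma_b ** 2))

# Assumed single-cue thresholds (deg) for a quasistatic tilt cue and an angular velocity cue:
print(round(ml_combined_threshold(1.2, 2.0), 2))   # combined prediction is below either cue alone
```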
Nikooie, Roohollah; Gharakhanlo, Reza; Rajabi, Hamid; Bahraminegad, Morteza; Ghafari, Ali
2009-10-01
The purpose of this study was to determine the validity of noninvasive anaerobic threshold (AT) estimation using %SpO2 (arterial oxyhemoglobin saturation) changes and respiratory gas exchanges. Fifteen active, healthy males performed 2 graded exercise tests on a motor-driven treadmill in 2 separate sessions. Respiratory gas exchange, heart rate (HR), lactate concentration, and %SpO2 were measured continuously throughout the test. Anaerobic threshold was determined based on blood lactate concentration (lactate-AT), %SpO2 changes (%SpO2-AT), the respiratory exchange ratio (RER-AT), the V-slope method (V-slope-AT), and the ventilatory equivalent for O2 (EqO2-AT). Blood lactate measurement was considered the gold-standard assessment of AT and was applied to confirm the validity of the other, noninvasive methods. The mean VO2 values corresponding to lactate-AT, %SpO2-AT, RER-AT, V-slope-AT, and EqO2-AT were 2176.6 +/- 206.4, 1909.5 +/- 221.4, 2141.2 +/- 245.6, 1933.7 +/- 216.4, and 1975 +/- 232.4, respectively. Intraclass correlation coefficient (ICC) analysis indicated a significant correlation between the 4 noninvasive methods and the criterion method. Bland-Altman plots showed good agreement between the VO2 corresponding to AT in each method and lactate-AT (95% confidence interval, CI). Our results indicate that the noninvasive and easy procedure of monitoring %SpO2 is a valid method for estimation of AT. Also, in the present study, the respiratory exchange ratio (RER) method appeared to be the best respiratory index for noninvasive estimation of the anaerobic threshold, and the heart rate corresponding to AT predicted by this method can be used by coaches and athletes to define training zones.
Guo, Xiasheng; Li, Qian; Zhang, Zhe; Zhang, Dong; Tu, Juan
2013-08-01
The inertial cavitation (IC) activity of ultrasound contrast agents (UCAs) plays an important role in the development and improvement of ultrasound diagnostic and therapeutic applications. However, various diagnostic and therapeutic applications have different requirements for IC characteristics. Here through IC dose quantifications based on passive cavitation detection, IC thresholds were measured for two commercialized UCAs, albumin-shelled KangRun(®) and lipid-shelled SonoVue(®) microbubbles, at varied UCA volume concentrations (viz., 0.125 and 0.25 vol. %) and acoustic pulse lengths (viz., 5, 10, 20, 50, and 100 cycles). Shell elastic and viscous coefficients of UCAs were estimated by fitting measured acoustic attenuation spectra with Sarkar's model. The influences of sonication condition (viz., acoustic pulse length) and UCA shell properties on IC threshold were discussed based on numerical simulations. Both experimental measurements and numerical simulations indicate that IC thresholds of UCAs decrease with increasing UCA volume concentration and acoustic pulse length. The shell interfacial tension and dilatational viscosity estimated for SonoVue (0.7 ± 0.11 N/m, 6.5 ± 1.01 × 10(-8) kg/s) are smaller than those of KangRun (1.05 ± 0.18 N/m, 1.66 ± 0.38 × 10(-7) kg/s); this might result in lower IC threshold for SonoVue. The current results will be helpful for selecting and utilizing commercialized UCAs for specific clinical applications, while minimizing undesired IC-induced bioeffects.
DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg
2016-06-01
Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.
The impact of climate change on ozone-related mortality in Sydney.
Physick, William; Cope, Martin; Lee, Sunhee
2014-01-13
Coupled global, regional and chemical transport models are now being used with relative-risk functions to determine the impact of climate change on human health. Studies have been carried out for global and regional scales, and in our paper we examine the impact of climate change on ozone-related mortality at the local scale across an urban metropolis (Sydney, Australia). Using three coupled models, with a grid spacing of 3 km for the chemical transport model (CTM), and a mortality relative risk function of 1.0006 per 1 ppb increase in daily maximum 1-hour ozone concentration, we evaluated the change in ozone concentrations and mortality between decades 1996-2005 and 2051-2060. The global model was run with the A2 emissions scenario. As there is currently uncertainty regarding a threshold concentration below which ozone does not impact on mortality, we calculated mortality estimates for the three daily maximum 1-hr ozone concentration thresholds of 0, 25 and 40 ppb. The mortality increase for 2051-2060 ranges from 2.3% for a 0 ppb threshold to 27.3% for a 40 ppb threshold, although the numerical increases differ little. Our modeling approach is able to identify the variation in ozone-related mortality changes at a suburban scale, estimating that climate change could lead to an additional 55 to 65 deaths across Sydney in the decade 2051-2060. Interestingly, the largest increases do not correspond spatially to the largest ozone increases or the densest population centres. The distribution pattern of changes does not seem to vary with threshold value, while the magnitude only varies slightly.
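The mortality calculation described above can be illustrated with the standard log-linear concentration-response form. The sketch below uses the quoted relative risk of 1.0006 per 1 ppb of daily maximum 1-hour ozone above a threshold; the ozone series and baseline death counts are hypothetical.

```python
import numpy as np

def excess_ozone_deaths(daily_max_o3_ppb, baseline_daily_deaths,
                        rr_per_ppb=1.0006, threshold_ppb=25.0):
    """Deaths attributable to ozone above a threshold, summed over days.

    Uses the common log-linear form: attributable fraction = 1 - exp(-beta * x),
    where beta = ln(RR per ppb) and x is exposure above the threshold.
    """
    beta = np.log(rr_per_ppb)
    excess_ppb = np.maximum(np.asarray(daily_max_o3_ppb) - threshold_ppb, 0.0)
    attributable_fraction = 1.0 - np.exp(-beta * excess_ppb)
    return float(np.sum(baseline_daily_deaths * attributable_fraction))

# Hypothetical example: one year of daily ozone maxima, flat baseline of 30 deaths/day.
rng = np.random.default_rng(0)
o3 = rng.gamma(shape=4.0, scale=8.0, size=365)  # ppb
for thr in (0.0, 25.0, 40.0):
    print(thr, round(excess_ozone_deaths(o3, 30.0, threshold_ppb=thr), 1))
```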
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
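A minimal numerical sketch of the counting model described above: a non-homogeneous Poisson process whose integrated rate has the Weibull form. The shape and scale parameters below are illustrative placeholders, not values estimated from the Messina data.

```python
import math

def weibull_cumulative_rate(t, shape, scale):
    """Integrated rate Lambda(t) = (t/scale)**shape of a Weibull-type NHPP."""
    return (t / scale) ** shape

def prob_k_exceedances(k, t1, t2, shape, scale):
    """P(N(t1, t2) = k): probability that the noise threshold is surpassed
    exactly k times in the interval (t1, t2)."""
    lam = weibull_cumulative_rate(t2, shape, scale) - weibull_cumulative_rate(t1, shape, scale)
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability of exactly 3 exceedances between hour 10 and hour 20 for
# hypothetical parameters (shape=1.3, scale=4.0 hours).
print(prob_k_exceedances(k=3, t1=10.0, t2=20.0, shape=1.3, scale=4.0))
```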
The Role of Parametric Assumptions in Adaptive Bayesian Estimation
ERIC Educational Resources Information Center
Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.
2004-01-01
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…
Numerosity but Not Texture-Density Discrimination Correlates with Math Ability in Children
ERIC Educational Resources Information Center
Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C.
2016-01-01
Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-years-old) children in 3 tasks: numerosity of patterns of relatively…
Gerhardsson, Lars; Balogh, Istvan; Hambert, Per-Arne; Hjortsberg, Ulf; Karlsson, Jan-Erik
2005-01-01
The aim of the present study was to compare the development of vibration white fingers (VWF) in workers in relation to different ways of exposure estimation, and their relationship to the standard ISO 5349, annex A. Nineteen vibration-exposed male workers (grinding machines) completed a questionnaire followed by a structured interview including questions regarding their estimated hand-held vibration exposure. Neurophysiological tests such as fractionated nerve conduction velocity in hands and arms, vibrotactile perception thresholds and temperature thresholds were determined. The workers' subjective estimate of the mean daily exposure time to vibrating tools was 192 min (range 18-480 min). The mean exposure time calculated from the consumption of grinding wheels was 42 min (range 18-60 min); the subjective estimates thus represent approximately a four-fold overestimation (Wilcoxon's signed-ranks test, p<0.001). Thus, objective measurements of the exposure time, related to the standard ISO 5349, which in this case were based on the consumption of grinding wheels, will in most cases give a better basis for adequate risk assessment than self-reported exposure assessment.
Guo, J; Booth, M; Jenkins, J; Wang, H; Tanner, M
1998-12-01
The World Bank Loan Project for schistosomiasis in China commenced field activities in 1992. In this paper, we describe disease control strategies for different levels of endemicity, and estimate unit costs and total expenditure of screening, treatment (cattle and humans) and snail control for 8 provinces where Schistosoma japonicum infection is endemic. Overall, we estimate that more than 21 million US dollars were spent on field activities during the first three years of the project. Mollusciciding (43% of the total expenditure) and screening (28% of the total) are estimated to have been the most expensive field activities. However, despite the expense of screening, a simple model predicts that selective chemotherapy could have been cheaper than mass chemotherapy in areas where infection prevalence was higher than 15%, which was the threshold for mass chemotherapy intervention. It is concluded that considerable cost savings could be made in the future by narrowing the scope of snail control activities, redefining the threshold infection prevalence for mass chemotherapy, defining smaller administrative units, and developing rapid assessment tools.
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Hashizume, Katsumi; Ito, Toshiko; Igarashi, Shinya
2017-03-01
A stable isotope dilution assay (SIDA) for two taste-active pyroglutamyl decapeptide ethyl esters (PGDPE1; (pGlu)LFGPNVNPWCOOC2H5, PGDPE2; (pGlu)LFNPSTNPWCOOC2H5) in sake was developed using deuterated isotopes and high-resolution mass spectrometry. Recognition thresholds of PGDPEs in sake were estimated as 3.8 μg/L for PGDPE1 and 8.1 μg/L for PGDPE2, evaluated using 11 student panelists aged in their twenties. Quantitated concentrations in 18 commercial sake samples ranged from 0 to 27 μg/L for PGDPE1 and from 0 to 202 μg/L for PGDPE2. The maximum levels of PGDPE1 and PGDPE2 in the sake samples were approximately 8 and 25 times higher than the estimated recognition thresholds, respectively. The results indicated that PGDPEs may play significant sensory roles in the sake. The level of PGDPEs in unpasteurized sake samples decreased during storage for 50 days at 6 °C, suggesting PGDPEs may be enzymatically decomposed.
Trace, Sara E; Thornton, Laura M; Root, Tammy L; Mazzeo, Suzanne E; Lichtenstein, Paul; Pedersen, Nancy L; Bulik, Cynthia M
2012-05-01
We assessed the impact of reducing the binge eating frequency and duration thresholds on the diagnostic criteria for bulimia nervosa (BN) and binge eating disorder (BED). We estimated the lifetime population prevalence of BN and BED in 13,295 female twins from the Swedish Twin study of Adults: Genes and Environment employing a range of frequency and duration thresholds. External validation (risk to cotwin) was used to investigate empirical evidence for an optimal binge eating frequency threshold. The lifetime prevalence estimates of BN and BED increased linearly as the frequency criterion decreased. As the required duration increased, the prevalence of BED decreased slightly. Discontinuity in cotwin risk was observed in BN between at least four times per month and at least five times per month. This model could not be fit for BED. The proposed changes to the DSM-5 binge eating frequency and duration criteria would allow for better detection of binge eating pathology without resulting in a markedly higher lifetime prevalence of BN or BED. Copyright © 2011 Wiley Periodicals, Inc.
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts' manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
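As a simplified illustration of the threshold-selection step, the sketch below sweeps decision thresholds over a probabilistic segmentation and picks the one maximizing the Dice similarity coefficient against a reference mask. It omits the beta-mixture modeling and EM-derived gold standard and uses synthetic data.

```python
import numpy as np

def dice_coefficient(pred_mask, truth_mask):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + truth_mask.sum() + 1e-12)

def optimal_dice_threshold(prob_map, truth_mask, thresholds=np.linspace(0.05, 0.95, 91)):
    """Sweep decision thresholds and return the one maximizing Dice."""
    scores = [dice_coefficient(prob_map >= t, truth_mask) for t in thresholds]
    best = int(np.argmax(scores))
    return float(thresholds[best]), float(scores[best])

# Synthetic probabilistic "tumour" map and reference mask.
rng = np.random.default_rng(1)
truth = rng.random((64, 64)) > 0.7
probs = np.clip(0.6 * truth + 0.5 * rng.random((64, 64)), 0.0, 1.0)
print(optimal_dice_threshold(probs, truth))
```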
Estimation of risks by chemicals produced during laser pyrolysis of tissues
NASA Astrophysics Data System (ADS)
Weber, Lothar W.; Spleiss, Martin
1995-01-01
Use of laser systems in minimal invasive surgery results in formation of laser aerosol with volatile organic compounds of possible health risk. By use of currently identified chemical substances an overview on possibly associated risks to human health is given. The class of the different identified alkylnitriles seem to be a laser specific toxicological problem. Other groups of chemicals belong to the Maillard reaction type, the fatty acid pyrolysis type, or even the thermally activated chemolysis. In relation to the available different threshold limit values the possible exposure ranges of identified substances are discussed. A rough estimation results in an exposure range of less than 1/100 for almost all substances with given human threshold limit values without regard of possible interactions. For most identified alkylnitriles, alkenes, and heterocycles no threshold limit values are given for lack of, until now, practical purposes. Pyrolysis of anaesthetized organs with isoflurane gave no hints for additional pyrolysis products by fragment interactions with resulting VOCs. Measurements of pyrolysis gases resulted in detection of small amounts of NO additionally with NO2 formation at plasma status.
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
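A toy sketch of the patch-level foraging-threshold adjustment described above: food density below the threshold in each patch is treated as unavailable before converting the remainder into animal-use-days. All numbers (densities, areas, energy values) are hypothetical.

```python
def usable_energy_days(patches, foraging_threshold_kg_ha, daily_energy_req_kcal):
    """Energetic carrying capacity (animal-use-days) with a foraging threshold
    applied at the patch level, as recommended above."""
    total_energy_kcal = 0.0
    for food_density_kg_ha, area_ha, energy_kcal_per_kg in patches:
        usable_density = max(food_density_kg_ha - foraging_threshold_kg_ha, 0.0)
        total_energy_kcal += usable_density * area_ha * energy_kcal_per_kg
    return total_energy_kcal / daily_energy_req_kcal

# (food density kg/ha, area ha, metabolizable energy kcal/kg) -- hypothetical patches.
patches = [(120.0, 40.0, 3300.0), (35.0, 60.0, 3300.0), (5.0, 100.0, 3300.0)]
print(usable_energy_days(patches, foraging_threshold_kg_ha=50.0,
                         daily_energy_req_kcal=292.0))
```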
Perceptual color difference metric including a CSF based on the perception threshold
NASA Astrophysics Data System (ADS)
Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine
2008-01-01
The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, applying adapted tools, and so on. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior of the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach to spatio-chromatic CSF construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The obtained results are quite different from those of the standard approaches, as the chromatic CSFs exhibit band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by the s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc
2016-04-01
In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared as a powerful method at a first exploratory step, to detect ecological thresholds at large spatial scale. The thresholds that were identified here must be reinforced by the separate analysis of other aquatic communities and may be used then to set protective environmental standards after consideration of natural variability among lakes.
NASA Astrophysics Data System (ADS)
Shi, Zhao; Wei, Fangqiang; Chandrasekar, Venkatachalam
2018-03-01
Both the Ms 8.0 Wenchuan earthquake on 12 May 2008 and the Ms 7.0 Lushan earthquake on 20 April 2013 occurred in the province of Sichuan, China. In the earthquake-affected mountainous area, a large amount of loose material caused a high occurrence of debris flow during the rainy season. In order to evaluate the rainfall intensity-duration (I-D) threshold of the debris flow in the earthquake-affected area, and to fill the observational gaps caused by the relatively scarce and low-altitude deployment of rain gauges in this area, raw data from two S-band China New Generation Doppler Weather Radars (CINRAD) were captured for six rainfall events that triggered 519 debris flows between 2012 and 2014. Due to the challenges of radar quantitative precipitation estimation (QPE) over mountainous areas, a series of improvement measures are considered: a hybrid scan mode, a vertical reflectivity profile (VPR) correction, a mosaic of reflectivity, a merged rainfall-reflectivity (R-Z) relationship for convective and stratiform rainfall, and rainfall bias adjustment with a Kalman filter (KF). For validating rainfall accumulation over complex terrain, the study areas are divided into two kinds of regions by a height threshold of 1.5 km from the ground. Three kinds of radar rainfall estimates are compared with rain gauge measurements. It is observed that the normalized mean bias (NMB) is decreased by 39% and the fitted linear ratio between radar and rain gauge observations reaches 0.98. Furthermore, the radar-based I-D threshold derived by the frequentist method is I = 10.1 D^(-0.52), and it is underestimated when uncorrected raw radar data are used. In order to verify the impacts on observations due to spatial variation, I-D thresholds are identified from the nearest rain gauge observations and from radar observations at the rain gauge locations. It is found that both kinds of observations have similar I-D thresholds and likewise underestimate I-D thresholds due to undershooting at the core of convective rainfall. This indicates that improving the spatial resolution and measurement accuracy of radar observations will improve the identification of debris flow occurrence, especially for events triggered by strong small-scale rainfall processes in the study area.
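The reported intensity-duration relationship can be applied directly as a triggering check. In the sketch below the coefficients come from the abstract (I = 10.1 D^(-0.52)); the units are assumed to be mm/h for intensity and hours for duration, and the example event is hypothetical.

```python
def id_threshold_intensity(duration_h, a=10.1, b=-0.52):
    """Radar-based rainfall intensity-duration threshold I = a * D**b."""
    return a * duration_h ** b

def may_trigger_debris_flow(mean_intensity_mm_h, duration_h):
    """True if the event's mean intensity exceeds the I-D threshold."""
    return mean_intensity_mm_h >= id_threshold_intensity(duration_h)

# Hypothetical 6-hour event with a mean intensity of 5 mm/h.
print(round(id_threshold_intensity(6.0), 2))   # ~3.98 mm/h
print(may_trigger_debris_flow(5.0, 6.0))       # True
```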
NASA Astrophysics Data System (ADS)
Chefranov, Sergey; Chefranov, Alexander
2016-04-01
Linear hydrodynamic stability theory for Hagen-Poiseuille (HP) flow leads to the conclusion that the threshold Reynolds number, Re, is infinitely large. This contradiction with observational data is usually bypassed by assuming that the HP flow instability is of hard type, possible only for sufficiently high-amplitude disturbances, and the evolution of HP flow disturbances is then treated by nonlinear hydrodynamic stability theory. The case of plane Couette (PC) flow is similar. For plane Poiseuille (PP) flow, linear theory disagrees quantitatively with experiment, predicting a threshold Reynolds number Re = 5772 (S. A. Orszag, 1971), more than five times the observed value of Re = 1080 (S. J. Davies, C. M. White, 1928). In the present work, we show that the linear-theory conclusions of stability at any Reynolds number for the HP and PC flows, and the clearly too high threshold Reynolds number estimate for the PP flow, are related to the traditional disturbance representation, which assumes that the longitudinal variable (along the flow direction) can be separated from the other spatial variables. We show that if this traditional form is abandoned, linear instability of the HP and PC flows is obtained at finite Reynolds numbers (Re > 704 for the HP flow and Re > 139 for the PC flow). We also reconcile the linear stability theory for the PP flow with the experimental data, obtaining an estimate of the minimal threshold Reynolds number of Re = 1040, and the minimal threshold Reynolds number estimate for PC flow agrees with the experimental data of S. Bottin et al., 1997, where the laminar PC flow stability threshold is Re = 150. A rogue-wave excitation mechanism in oppositely directed currents due to the PC flow linear instability is discussed. Results of the new linear hydrodynamic stability theory for the HP, PP, and PC flows are published in the following papers: 1. S.G. Chefranov, A.G. Chefranov, JETP, v.119, No.2, 331, 2014; 2. S.G. Chefranov, A.G. Chefranov, Doklady Physics, vol.60, No.7, 327-332, 2015; 3. S.G. Chefranov, A.G. Chefranov, arXiv:1509.08910v1 [physics.flu-dyn], 29 Sep 2015 (accepted to JETP).
Evidence for the contribution of a threshold retrieval process to semantic memory.
Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A
2017-10-01
It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
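For readers unfamiliar with the ROC signature of a threshold process, the sketch below generates hit and false-alarm rates from one common parameterization of a dual-process signal detection model (a threshold recollection parameter R plus equal-variance Gaussian familiarity d'). This is a generic DPSD form, not necessarily the exact DP-RR variant fitted by the authors.

```python
import numpy as np
from scipy.stats import norm

def dpsd_roc(recollection, d_prime, criteria):
    """Predicted (false alarm, hit) pairs for a dual-process SDT model.

    Hits = R + (1 - R) * Phi(d' - c); false alarms = Phi(-c). A non-zero R
    produces the non-zero y-intercept characteristic of a threshold process.
    """
    c = np.asarray(criteria, dtype=float)
    false_alarms = norm.cdf(-c)
    hits = recollection + (1.0 - recollection) * norm.cdf(d_prime - c)
    return false_alarms, hits

# Hypothetical parameters and a sweep of response criteria.
fa, hit = dpsd_roc(recollection=0.25, d_prime=1.0, criteria=np.linspace(-1.5, 1.5, 7))
for f, h in zip(fa, hit):
    print(f"FA={f:.3f}  Hit={h:.3f}")
```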
Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.
Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq
2017-06-01
The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m2, duration 200 ms, background luminance 10 cd/m2). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
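A rough sketch of the kind of up/down staircase used to estimate stimulus-size thresholds is given below. The observer model, step size and stopping rule are hypothetical simplifications, not the actual 'Caspar's Castle' algorithm.

```python
import math
import random

def simulate_staircase(true_threshold_deg, start_deg=2.0, step_deg=0.2,
                       n_trials=60, slope=8.0):
    """Simple 1-up/1-down staircase on stimulus diameter with a logistic
    simulated observer; returns the mean of the last few reversal sizes."""
    size, reversals, last_direction = start_deg, [], None
    for _ in range(n_trials):
        p_seen = 1.0 / (1.0 + math.exp(-slope * (size - true_threshold_deg)))
        seen = random.random() < p_seen
        direction = -1 if seen else +1          # seen -> smaller (harder) stimulus
        if last_direction is not None and direction != last_direction:
            reversals.append(size)
        last_direction = direction
        size = max(0.05, size + direction * step_deg)
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else size

random.seed(3)
print(simulate_staircase(true_threshold_deg=0.8))
```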
Nanosecond laser pulses for mimicking thermal effects on nanostructured tungsten-based materials
NASA Astrophysics Data System (ADS)
Besozzi, E.; Maffini, A.; Dellasega, D.; Russo, V.; Facibeni, A.; Pazzaglia, A.; Beghi, M. G.; Passoni, M.
2018-03-01
In this work, we exploit nanosecond laser irradiation as a compact solution for investigating the thermomechanical behavior of tungsten materials under extreme thermal loads at the laboratory scale. Heat flux factor thresholds for various thermal effects, such as melting, cracking and recrystallization, are determined under both single and multishot experiments. The use of nanosecond lasers for mimicking thermal effects induced on W by fusion-relevant thermal loads is thus validated by direct comparison of the thresholds obtained in this work and the ones reported in the literature for electron beams and millisecond laser irradiation. Numerical simulations of temperature and thermal stress performed on a 2D thermomechanical code are used to predict the heat flux factor thresholds of the different thermal effects. We also investigate the thermal effect thresholds of various nanostructured W coatings. These coatings are produced by pulsed laser deposition, mimicking W coatings in tokamaks and W redeposited layers. All the coatings show lower damage thresholds with respect to bulk W. In general, thresholds decrease as the porosity degree of the materials increases. We thus propose a model to predict these thresholds for coatings with various morphologies, simply based on their porosity degree, which can be directly estimated by measuring the variation of the coating mass density with respect to that of the bulk.
Krumm, Bianca; Klump, Georg; Köppl, Christine; Langemann, Ulrike
2017-09-27
We measured the auditory sensitivity of the barn owl (Tyto alba), using a behavioural Go/NoGo paradigm in two different age groups, one younger than 2 years (n = 4) and another more than 13 years of age (n = 3). In addition, we obtained thresholds from one individual aged 23 years, three times during its lifetime. For computing audiograms, we presented test frequencies of between 0.5 and 12 kHz, covering the hearing range of the barn owl. Average thresholds in quiet were below 0 dB sound pressure level (SPL) for frequencies between 1 and 10 kHz. The lowest mean threshold was -12.6 dB SPL at 8 kHz. Thresholds were the highest at 12 kHz, with a mean of 31.7 dB SPL. Test frequency had a significant effect on auditory threshold but age group had no significant effect. There was no significant interaction between age group and test frequency. Repeated threshold estimates over 21 years from a single individual showed only a slight increase in thresholds. We discuss the auditory sensitivity of barn owls with respect to other species and suggest that birds, which generally show a remarkable capacity for regeneration of hair cells in the basilar papilla, are naturally protected from presbycusis. © 2017 The Author(s).
Koka, Kanthaiah; Saoji, Aniket A; Attias, Joseph; Litvak, Leonid M
2017-01-01
Although cochlear implants (CIs) traditionally have been used to treat individuals with bilateral profound sensorineural hearing loss, a recent trend is to implant individuals with residual low-frequency hearing. Notably, many of these individuals demonstrate an air-bone gap (ABG) in low-frequency, pure-tone thresholds following implantation. An ABG is the difference between audiometric thresholds measured using air conduction (AC) and bone conduction (BC) stimulation. Although behavioral AC thresholds are straightforward to assess, BC thresholds can be difficult to measure in individuals with severe-to-profound hearing loss because of vibrotactile responses to high-level, low-frequency stimulation and the potential contribution of hearing in the contralateral ear. Because of these technical barriers to measuring behavioral BC thresholds in implanted patients with residual hearing, it would be helpful to have an objective method for determining ABG. This study evaluated an innovative technique for measuring electrocochleographic (ECochG) responses using the cochlear microphonic (CM) response to assess AC and BC thresholds in implanted patients with residual hearing. Results showed high correlations between CM thresholds and behavioral audiograms for AC and BC conditions, thereby demonstrating the feasibility of using ECochG as an objective tool for quantifying ABG in CI recipients.
On the mechanism of pulsed laser ablation of phthalocyanine nanoparticles in an aqueous medium
NASA Astrophysics Data System (ADS)
Kogan, Boris; Malimonenko, Nicholas; Butenin, Alexander; Novoseletsky, Nicholas; Chizhikov, Sergei
2018-06-01
Laser ablation of phthalocyanine nanoparticles has potential for cancer treatment. The ablation is accompanied by the formation of microbubbles and the sublimation of nanoparticles. This was investigated in a liquid medium simulating tissue using optical-acoustic and spectral-luminescent methods. The thresholds for the appearance of microbubbles have been determined as a function of nanoparticle size. For the minimal size particles (80 nm) this threshold is equal to about 20–25 mJ cm(-2) and for the maximal size particles (230 nm) this threshold is equal to about 7 mJ cm(-2). It was estimated that the particle temperature at which bubbles arise is near 145 °C.
Improving ontology matching with propagation strategy and user feedback
NASA Astrophysics Data System (ADS)
Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu
2015-07-01
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which can help to identify correct correspondences and to recover missed correspondences. The estimation of an appropriate threshold is a difficult task. We propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
Threshold Velocity for Saltation Activity in the Taklimakan Desert
NASA Astrophysics Data System (ADS)
Yang, Xinghua; He, Qing; Matimin, Ali; Yang, Fan; Huo, Wen; Liu, Xinchun; Zhao, Tianliang; Shen, Shuanghe
2017-12-01
The threshold velocity is an indicator of a soil's susceptibility to saltation activity and is also an important parameter in dust emission models. In this study, the saltation activity, atmospheric conditions, and soil conditions were measured from 1 August 2008 to 31 July 2009 in the Taklimakan Desert, China. The threshold velocity was estimated using the Gaussian time fraction equivalence method. At 2 m height, the 1-min averaged threshold velocity varied between 3.5 and 10.9 m/s, with a mean of 5.9 m/s. Threshold velocities varying between 4.5 and 7.5 m/s accounted for about 91.4% of all measurements. The average threshold velocity displayed clear seasonal variations in the following sequence: winter (5.1 m/s) < autumn (5.8 m/s) < spring (6.1 m/s) < summer (6.5 m/s). A regression equation of threshold velocity was established based on the relations between daily mean threshold velocity and air temperature, specific humidity, and soil volumetric moisture content. High or moderate positive correlations were found between threshold velocity and air temperature, specific humidity, and soil volumetric moisture content (air temperature r = 0.75; specific humidity r = 0.59; and soil volumetric moisture content r = 0.55; sample size = 251). In the study area, the observed horizontal dust flux was 4198.0 kg/m during the whole period of observation, while the horizontal dust flux calculated using the threshold velocity from the regression equation was 4675.6 kg/m. The correlation coefficient between the calculated result and the observations was 0.91. These results indicate that atmospheric and soil conditions should not be neglected in parameterization schemes for threshold velocity.
Morphological analysis of pore size and connectivity in a thick mixed-culture biofilm.
Rosenthal, Alex F; Griffin, James S; Wagner, Michael; Packman, Aaron I; Balogun, Oluwaseyi; Wells, George F
2018-05-19
Morphological parameters are commonly used to predict transport and metabolic kinetics in biofilms. Yet, quantification of biofilm morphology remains challenging due to imaging technology limitations and a lack of robust analytical approaches. We present a novel set of imaging and image analysis techniques to estimate internal porosity, pore size distributions, and pore network connectivity to a depth of 1 mm at a resolution of 10 µm in a biofilm exhibiting both heterotrophic and nitrifying activity. Optical coherence tomography (OCT) scans revealed an extensive pore network with diameters as large as 110 µm directly connected to the biofilm surface and surrounding fluid. Thin section fluorescence in situ hybridization microscopy revealed ammonia oxidizing bacteria (AOB) distributed through the entire thickness of the biofilm. AOB were particularly concentrated in the biofilm around internal pores. Areal porosity values estimated from OCT scans were consistently lower than those estimated from multiphoton laser scanning microscopy, though the two imaging modalities showed a statistically significant correlation (r = 0.49, p<0.0001). Estimates of areal porosity were moderately sensitive to grey level threshold selection, though several automated thresholding algorithms yielded similar values to those obtained by manual thresholding performed by a panel of environmental engineering researchers (±25% relative error). These findings advance our ability to quantitatively describe the geometry of biofilm internal pore networks at length scales relevant to engineered biofilm reactors and suggest that internal pore structures provide crucial habitat for nitrifier growth. This article is protected by copyright. All rights reserved.
Chaotic ion motion in magnetosonic plasma waves
NASA Technical Reports Server (NTRS)
Varvoglis, H.
1984-01-01
The motion of test ions in a magnetosonic plasma wave is considered, and the 'stochasticity threshold' of the wave's amplitude for the onset of chaotic motion is estimated. It is shown that for wave amplitudes above the stochasticity threshold, the evolution of an ion distribution can be described by a diffusion equation with a diffusion coefficient D approximately equal to 1/v. Possible applications of this process to ion acceleration in flares and ion beam thermalization are discussed.
NASA Astrophysics Data System (ADS)
Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.
2011-10-01
In the context of augmented integrity Inertial Navigation System (INS), recent technological developments have been focusing on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can also be successfully exploited in the context of augmented integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violation of the assumptions which underlie their theory. In addition to presenting a solution to the previous problem, we put forward a new post-processing algorithm useful to remove the main false alarms, to select the most probable edge position, to reconstruct broken edges and finally to vectorize them. SAR images from the "MSTAR clutter" dataset were used to demonstrate the effectiveness of the proposed algorithms.
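To make the first stage concrete, the sketch below implements a bare-bones ratio-of-averages (RoA) edge strength for a single (vertical) edge orientation: in multiplicative speckle the ratio of window means is approximately scale-invariant, so thresholding it gives roughly constant false alarm behaviour. Window sizes, the threshold and the test scene are illustrative, and the full chain described above (multiple orientations, post-processing, vectorization) is not reproduced.

```python
import numpy as np

def roa_edge_strength(image, half_window=4):
    """Ratio-of-averages edge strength for a vertical edge hypothesis.

    For each pixel, compare the mean of the window to its left with the mean
    of the window to its right; min(r, 1/r) stays near 1 in homogeneous
    clutter and drops at edges."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    strength = np.ones_like(img)
    for i in range(h):
        for j in range(half_window, w - half_window):
            left = img[i, j - half_window:j].mean()
            right = img[i, j:j + half_window].mean()
            if left > 0.0 and right > 0.0:
                r = left / right
                strength[i, j] = min(r, 1.0 / r)
    return strength

# Hypothetical speckled scene with a step edge at column 32.
rng = np.random.default_rng(2)
scene = np.where(np.arange(64) < 32, 1.0, 4.0) * rng.exponential(1.0, (64, 64))
edge_map = roa_edge_strength(scene) < 0.6   # illustrative threshold
print(edge_map[:, 30:34].mean(axis=0))      # responses cluster near column 32
```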
Fast Vessel Detection in Gaofen-3 SAR Images with Ultrafine Strip-Map Mode
Liu, Lei; Qiu, Xiaolan; Lei, Bin
2017-01-01
This study aims to detect vessels with lengths ranging from about 70 to 300 m, in Gaofen-3 (GF-3) SAR images with ultrafine strip-map (UFS) mode as fast as possible. Based on the analysis of the characteristics of vessels in GF-3 SAR imagery, an effective vessel detection method is proposed in this paper. Firstly, the iterative constant false alarm rate (CFAR) method is employed to detect the potential ship pixels. Secondly, the mean-shift operation is applied on each potential ship pixel to identify the candidate target region. During the mean-shift process, we maintain a selection matrix recording which pixels can be taken, and these pixels are called the valid points of the candidate target. The l1 norm regression is used to extract the principal axis and detect the valid points. Finally, two kinds of false alarms, the bright line and the azimuth ambiguity, are removed by comparing the valid area of the candidate target with a pre-defined value and computing the displacement between the true target and the corresponding replicas, respectively. Experimental results on three GF-3 SAR images with UFS mode demonstrate the effectiveness and efficiency of the proposed method. PMID:28678197
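As background for the first step, a plain (non-iterative) two-dimensional cell-averaging CFAR detector is sketched below; the iterative variant used in the paper refines the clutter estimate over several passes, which is not reproduced here. Guard/training sizes, the false alarm rate and the test scene are illustrative.

```python
import numpy as np

def ca_cfar_2d(power_image, guard=2, train=6, pfa=1e-5):
    """Cell-averaging CFAR: compare each cell with a scaled local clutter mean.

    The clutter level is the mean of a ring of training cells outside a guard
    band; the scale factor alpha follows from the desired false alarm
    probability under an exponential (single-look intensity) clutter model.
    """
    img = np.asarray(power_image, dtype=float)
    n_train = (2 * (guard + train) + 1) ** 2 - (2 * guard + 1) ** 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(img.shape, dtype=bool)
    r = guard + train
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            ring_sum = (img[i - r:i + r + 1, j - r:j + r + 1].sum()
                        - img[i - guard:i + guard + 1, j - guard:j + guard + 1].sum())
            detections[i, j] = img[i, j] > alpha * ring_sum / n_train
    return detections

# Hypothetical exponential sea clutter with one bright point target.
rng = np.random.default_rng(4)
clutter = rng.exponential(1.0, (128, 128))
clutter[64, 64] += 60.0
print(int(ca_cfar_2d(clutter).sum()))   # ideally ~1 detection
```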
Antunes, R; Kvadsheim, P H; Lam, F P A; Tyack, P L; Thomas, L; Wensveen, P J; Miller, P J O
2014-06-15
The potential effects of exposing marine mammals to military sonar are a current concern. Dose-response relationships are useful for predicting potential environmental impacts of specific operations. To reveal behavioral response thresholds of exposure to sonar, we conducted 18 exposure/control approaches to 6 long-finned pilot whales. Source level and proximity of sonar transmitting one of two frequency bands (1-2 kHz and 6-7 kHz) were increased during exposure sessions. The 2-dimensional movement tracks were analyzed using a changepoint method to identify the avoidance response thresholds, which were used to estimate dose-response relationships. No support for an effect of sonar frequency or previous exposures on the probability of response was found. Estimated response thresholds at which 50% of the population shows avoidance (SPLmax=170 dB re 1 μPa, SELcum=173 dB re 1 μPa(2) s) were higher than previously found for other cetaceans. The US Navy currently uses a generic dose-response relationship to predict the responses of cetaceans to naval active sonar, which has been found to underestimate behavioural impacts on killer whales and beaked whales. The navy curve appears to match more closely our results with long-finned pilot whales, though it might underestimate the probability of avoidance for pilot whales at long distances from sonar sources. Copyright © 2014 Elsevier Ltd. All rights reserved.
Local health care expenditure plans and their opportunity costs.
Karlsberg Schaffer, Sarah; Sussex, Jon; Devlin, Nancy; Walker, Andrew
2015-09-01
In the UK, approval decisions by Health Technology Assessment bodies are made using a cost per quality-adjusted life year (QALY) threshold, the value of which is based on little empirical evidence. We test the feasibility of estimating the "true" value of the threshold in NHS Scotland using information on marginal services (those planned to receive significant (dis)investment). We also explore how the NHS makes spending decisions and the role of cost per QALY evidence in this process. We identify marginal services using NHS Board-level responses to the 2012/13 Budget Scrutiny issued by the Scottish Government, supplemented with information on prioritisation processes derived from interviews with Finance Directors. We search the literature for cost-effectiveness evidence relating to marginal services. The cost-effectiveness estimates of marginal services vary hugely and thus it was not possible to obtain a reliable estimate of the threshold. This is unsurprising given the finding that cost-effectiveness evidence is rarely used to justify expenditure plans, which are driven by a range of other factors. Our results highlight the differences in objectives between HTA bodies and local health service decision makers. We also demonstrate that, even if it were desirable, the use of cost-effectiveness evidence at local level would be highly challenging without extensive investment in health economics resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland
2014-02-01
Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras, yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha(-1) plant(-1) m(2) in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m(-2), respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha(-1). Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
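A minimal sketch of such a trial-continuous fit is given below: the threshold of a logistic psychometric function is allowed to decay exponentially with trial number, and the four parameters are estimated by maximizing the likelihood over every individual trial. The parameterization, observer simulation and optimizer settings are illustrative, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize

def fit_continuous_learning(intensity, correct, guess_rate=0.5):
    """Fit threshold(t) = asymptote + amplitude * exp(-t / tau) with a fixed
    logistic slope, by maximum likelihood over individual trials."""
    t = np.arange(len(intensity), dtype=float)

    def neg_log_lik(params):
        asym, amp, tau, slope = params
        thresh = asym + amp * np.exp(-t / max(tau, 1e-6))
        p = guess_rate + (1 - guess_rate) / (1 + np.exp(-slope * (intensity - thresh)))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

    x0 = np.array([np.median(intensity), 1.0, len(intensity) / 3.0, 1.0])
    res = minimize(neg_log_lik, x0, method="Nelder-Mead")
    return res.x  # asymptote, amplitude, tau, slope

# Hypothetical session: random stimulus intensities and simulated responses
# from an observer whose true threshold decays over 400 trials.
rng = np.random.default_rng(5)
intensity = rng.uniform(0, 6, 400)
true_thresh = 1.0 + 2.0 * np.exp(-np.arange(400) / 120.0)
p_correct = 0.5 + 0.5 / (1 + np.exp(-1.5 * (intensity - true_thresh)))
correct = (rng.random(400) < p_correct).astype(float)
print(fit_continuous_learning(intensity, correct))
```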
Karczmarski, Leszek; Huang, Shiang-Lin; Chan, Stephen C Y
2017-02-23
Defining demographic and ecological thresholds of population persistence can assist in informing conservation management. We undertook such analyses for the Indo-Pacific humpback dolphin (Sousa chinensis) in the Pearl River Delta (PRD) region, southeast China. We use adult survival estimates for assessments of population status and annual rate of change. Our estimates indicate that, given a stationary population structure and a minimal risk scenario, ~2000 individuals (minimum viable population at carrying capacity, MVPk) can maintain population persistence across 40 generations. However, under the current population trend (~2.5% decline/annum), the population is fast approaching its viability threshold and may soon face effects of demographic stochasticity. The population demographic trajectory and the minimum area of critical habitat (MACH) that could prevent stochastic extinction are both highly sensitive to fluctuations in adult survival. For a hypothetical stationary population, MACH should approximate 3000 km2. However, this estimate increases four-fold with a 5% increase in adult mortality and exceeds the size of the PRD when calculated for the current population status. On the other hand, cumulatively, all current MPAs within the PRD fail to secure the minimum habitat required to accommodate a sufficiently viable population size. Our findings indicate that the PRD population faces extinction unless effective conservation measures can rapidly reverse the current population trend.
Levine, Michael; Stellpflug, Sam; Pizon, Anthony F; Traub, Stephen; Vohra, Rais; Wiegand, Timothy; Traub, Nicole; Tashman, David; Desai, Shoma; Chang, Jamie; Nathwani, Dhruv; Thomas, Stephen
2017-07-01
Acetaminophen toxicity is common in clinical practice. In recent years, several European countries have lowered the treatment threshold, which has resulted in an increased number of patients being treated with questionable clinical benefit. The primary objective of this study is to estimate the cost and associated burden to the United States (U.S.) healthcare system if such a change were adopted in the U.S. This study is a retrospective review of all patients aged 14 years or older who were admitted to one of eight different hospitals located throughout the U.S. with acetaminophen exposures during a five-and-a-half-year span, from 1 January 2008 to 30 June 2013. Those patients who would be treated with the revised nomogram, but not the current nomogram, were included. The cost of such treatment was extrapolated to a national level. 139 subjects were identified who would be treated with the revised nomogram, but not the current nomogram. Extrapolating these numbers nationally, an additional 4507 (95%CI 3641-8751) Americans would be treated annually for acetaminophen toxicity. The cost of lowering the treatment threshold is estimated to be $45 million (95%CI 36,400,000-87,500,000) annually. Adopting the revised treatment threshold in the U.S. would result in a significant cost, yet provide an unclear clinical benefit.
Clayton, Hilary M.
2015-01-01
The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on a straight line and a circle at walk and trot, which exclusively rely on a single, dorsal hoof marker. The advantage of such marker placement is the robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected based on displacement, velocity or acceleration signals of the dorsal hoof marker depending on the algorithm using (i) defined thresholds associated with derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms. While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that use of velocity thresholds for foot-on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (up to on average 16 ms per condition), being greater than the effect on foot-on estimates or foot-off estimates in the forelimbs (up to on average ±7 ms per condition). PMID:26157641
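A generic illustration of the threshold-based detection idea is sketched below: stance (foot-on to foot-off) is taken as any sufficiently long run of samples in which the hoof-marker speed stays below a threshold. The sampling rate, threshold, minimum stance duration and the synthetic speed trace are hypothetical, and the validated algorithms above include additional signals and rules not reproduced here.

```python
import numpy as np

def detect_stride_events(marker_speed, speed_threshold, min_stance_samples=20):
    """Threshold-based foot-on/foot-off detection from a marker speed trace.

    Foot-on is the sample where speed drops below the threshold, foot-off the
    sample where it rises back above it, with a minimum stance duration to
    reject jitter."""
    below = marker_speed < speed_threshold
    events, i = [], 0
    while i < len(below):
        if below[i]:
            j = i
            while j < len(below) and below[j]:
                j += 1
            if j - i >= min_stance_samples:
                events.append(("foot_on", i))
                events.append(("foot_off", j))
            i = j
        else:
            i += 1
    return events

# Hypothetical 200 Hz speed trace (m/s) with two stance phases.
t = np.arange(0, 2.0, 0.005)
speed = 2.0 + 1.8 * np.sin(2 * np.pi * 1.2 * t)
print(detect_stride_events(speed, speed_threshold=0.5))
```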
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Bhargava, P.; Biswas, A. K.; Sahu, Shasikiran; Mandloi, V.; Ittoop, M. O.; Khattak, B. Q.; Tiwari, M. K.; Kukreja, L. M.
2013-03-01
It is shown that the threshold fluence for laser paint stripping can be accurately estimated from the heat of gasification and the absorption coefficient of the epoxy-paint. The threshold fluence determined experimentally by stripping of the epoxy-paint on a substrate using a TEA CO2 laser matches closely with the calculated value. The calculated threshold fluence and the measured absorption coefficient of the paint allowed us to determine the epoxy paint thickness that would be removed per pulse at a given laser fluence even without experimental trials. This was used to predict the optimum scan speed required to strip the epoxy-paint of a given thickness using a high average power TEA CO2 laser. Energy Dispersive X-Ray Fluorescence (EDXRF) studies were also carried out on laser paint-stripped concrete substrate to show high efficacy of this modality.
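The estimate described above can be written compactly in the standard blow-off (Beer-Lambert) ablation form; the symbols and the scan-speed relation below are assumptions consistent with the abstract, not expressions quoted from the paper (ρ paint density, ΔH_gas specific heat of gasification, α absorption coefficient, F applied fluence, f_rep pulse repetition rate, w spot width along the scan, h paint thickness):

```latex
% Assumed blow-off model relations for laser paint stripping:
F_{\mathrm{th}} \approx \frac{\rho\,\Delta H_{\mathrm{gas}}}{\alpha},
\qquad
d_{\mathrm{pulse}} \approx \frac{1}{\alpha}\,\ln\!\frac{F}{F_{\mathrm{th}}},
\qquad
v_{\mathrm{scan}} \approx f_{\mathrm{rep}}\, w\,\frac{d_{\mathrm{pulse}}}{h}
```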
Kakinuma, Kaoru; Sasaki, Takehiro; Jamsran, Undarmaa; Okuro, Toshiya; Takeuchi, Kazuhiko
2014-10-01
Applying the threshold concept to rangeland management is an important challenge in semi-arid and arid regions. Threshold recognition and prediction is necessary to enable local pastoralists to prevent the occurrence of an undesirable state that would result from unsustainable grazing pressure, but this requires a better understanding of the pastoralists' perception of vegetation threshold changes. We estimated plant species cover in survey plots along grazing gradients in steppe and desert-steppe areas of Mongolia. We also conducted interviews with local pastoralists and asked them to evaluate whether the plots were suitable for grazing. Floristic composition changed nonlinearly along the grazing gradient in both the desert-steppe and steppe areas. Pastoralists observed the floristic composition changes along the grazing gradients, but their evaluations of grazing suitability did not always decrease along the grazing gradients, both of which included areas in a post-threshold state. These results indicated that local pastoralists and scientists may have different perceptions of vegetation states, even though both of groups used plant species and coverage as indicators in their evaluations. Therefore, in future studies of rangeland management, researchers and pastoralists should exchange their knowledge and perceptions to successfully apply the threshold concept to rangeland management.
A robust threshold-based cloud mask for the HRV channel of MSG SEVIRI
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-03-01
A robust threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the METEOSAT SEVIRI instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures which cannot be detected by the low-resolution channels. We find it advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding and the suitability of the HRV channel for cloud detection are investigated. Case studies covering a range of surface and cloud conditions demonstrate the behaviour of the mask. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test dataset, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction.
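A minimal sketch of the general idea, thresholding HRV reflectance against a clear-sky composite with regionally adapted offsets, is shown below. The function name, the offset values and the region handling are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def hrv_cloud_mask(hrv_reflectance, clear_sky_composite, region_ids,
                   base_offset=0.05, region_offsets=None):
    """Toy threshold-based cloud mask: a pixel is flagged cloudy when its HRV
    reflectance exceeds the clear-sky composite by a regionally adapted offset.
    Offsets and region handling are illustrative, not the paper's values."""
    offsets = np.full(hrv_reflectance.shape, base_offset, dtype=float)
    for rid, off in (region_offsets or {}).items():
        offsets[region_ids == rid] = off          # regionally adapted thresholds
    return hrv_reflectance > clear_sky_composite + offsets
```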
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients, taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and an iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
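As a rough sketch of this class of reconstruction (not the authors' algorithm, which adds a noise-adaptive threshold and an edge-correlation prior), the following Python/PyWavelets snippet alternates a k-space data-consistency step with soft-thresholding of wavelet coefficients. The fixed threshold `lam`, the wavelet choice and the real-valued-image assumption are all simplifications.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_pocs_recon(kspace, mask, lam=0.02, n_iter=50, wavelet="db4", level=3):
    """POCS-style approximation of the l2 + l1 problem for undersampled MRI:
    re-impose measured k-space samples, then shrink wavelet coefficients.
    'kspace' holds measured samples (zeros elsewhere); 'mask' is the sampling
    pattern.  The image is assumed real-valued and 'lam' is a hand-picked
    relative threshold, unlike the paper's noise-adaptive scheme."""
    x = np.fft.ifft2(kspace).real                      # zero-filled starting image
    for _ in range(n_iter):
        # data consistency: keep the measured k-space samples
        k = np.fft.fft2(x)
        k = np.where(mask, kspace, k)
        x = np.fft.ifft2(k).real
        # l1 proximal step: soft-threshold the wavelet coefficients
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = pywt.threshold(arr, lam * np.abs(arr).max(), mode="soft")
        x = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        x = x[: kspace.shape[0], : kspace.shape[1]]    # waverec2 may pad odd sizes
    return x
```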
Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.
2013-01-01
The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is as yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former, the thresholds varied as a function of the criterion: when investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, a threshold of around 85 IQ points was found for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained evidence that once the intelligence threshold is met, personality factors become more predictive for creativity. In contrast, no threshold was found for creative achievement; that is, creative achievement benefits from higher intelligence even at fairly high levels of intellectual ability. PMID:23825884
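A minimal sketch of the segmented-regression idea is given below: a two-segment (broken-stick) model whose breakpoint is found by brute-force grid search over candidate thresholds. The study itself used iterative segmented-regression estimation with confidence intervals; the variable names and grid are illustrative assumptions.

```python
import numpy as np

def segmented_threshold(iq, creativity, n_grid=200):
    """Brute-force breakpoint search for the two-segment model
    creativity ~ a + b*iq + c*max(iq - tau, 0); returns the tau giving the
    smallest residual sum of squares.  A sketch only, not the published procedure."""
    iq = np.asarray(iq, float)
    creativity = np.asarray(creativity, float)
    grid = np.linspace(np.percentile(iq, 5), np.percentile(iq, 95), n_grid)
    best_tau, best_sse = None, np.inf
    for tau in grid:
        X = np.column_stack([np.ones_like(iq), iq, np.maximum(iq - tau, 0.0)])
        beta, res, *_ = np.linalg.lstsq(X, creativity, rcond=None)
        sse = float(res[0]) if res.size else float(np.sum((creativity - X @ beta) ** 2))
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau
```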
Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use.
Culyer, Anthony J
2016-10-01
There is misunderstanding about both the meaning and the role of cost-effectiveness thresholds in policy decision making. This article dissects the main issues by use of a bookshelf metaphor. Its main conclusions are as follows: it must be possible to compare interventions in terms of their impact on a common measure of health; mere effectiveness is not a persuasive case for inclusion in public insurance plans; public health advocates need to address issues of relative effectiveness; a 'first best' benchmark or threshold ratio of health gain to expenditure identifies the least effective intervention that should be included in a public insurance plan; the reciprocal of this ratio - the 'first best' cost-effectiveness threshold - will rise or fall as the health budget rises or falls (ceteris paribus); setting thresholds too high or too low costs lives; failure to set any cost-effectiveness threshold at all also involves avertable deaths and morbidity; the threshold cannot be set independently of the health budget; the threshold can be approached from either the demand side or the supply side - the two are equivalent only in a health-maximising equilibrium; the supply-side approach generates an estimate of a 'second best' cost-effectiveness threshold that is higher than the 'first best'; the second best threshold is the one generally to be preferred in decisions about adding or subtracting interventions in an established public insurance package; multiple thresholds are implied by systems having distinct and separable health budgets; disinvestment involves eliminating effective technologies from the insured bundle; differential weighting of beneficiaries' health gains may affect the threshold; anonymity and identity are factors that may affect the interpretation of the threshold; the true opportunity cost of health care in a community, where the effectiveness of interventions is determined by their impact on health, is not to be measured in money - but in health itself.
Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K
2013-09-01
Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use in monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 dB HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.
Diffusion amid random overlapping obstacles: Similarities, invariants, approximations
Novak, Igor L.; Gao, Fei; Kraikivski, Pavel; Slepchenko, Boris M.
2011-01-01
Efficient and accurate numerical techniques are used to examine similarities of effective diffusion in a void between random overlapping obstacles: essential invariance of effective diffusion coefficients (Deff) with respect to obstacle shapes and applicability of a two-parameter power law over nearly the entire range of excluded volume fractions (ϕ), except for a small vicinity of the percolation threshold. It is shown that while neither of the properties is exact, deviations from them are remarkably small. This allows for quick estimation of void percolation thresholds and approximate reconstruction of Deff (ϕ) for obstacles of any given shape. In 3D, the similarities of effective diffusion yield a simple multiplication “rule” that provides a fast means of estimating Deff for a mixture of overlapping obstacles of different shapes with comparable sizes. PMID:21513372
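One plausible reading of the two-parameter power law mentioned in the abstract is a form that vanishes at the void percolation threshold; the sketch below uses that parameterization, which may differ from the one adopted by the authors.

```python
def d_eff(phi, phi_c, mu, d0=1.0):
    """Illustrative two-parameter power law for the effective diffusion coefficient,
    D_eff(phi) = d0 * (1 - phi / phi_c)**mu, with phi_c the void percolation threshold
    and mu a fitted exponent.  The exact form used in the paper may differ."""
    return d0 * (1.0 - phi / phi_c) ** mu
```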
Non-linear effects of soda taxes on consumption and weight outcomes.
Fletcher, Jason M; Frisvold, David E; Tefft, Nathan
2015-05-01
The potential health impacts of imposing large taxes on soda to improve population health have been of interest for over a decade. As estimates of the effects of existing soda taxes with low rates suggest little health improvements, recent proposals suggest that large taxes may be effective in reducing weight because of non-linear consumption responses or threshold effects. This paper tests this hypothesis in two ways. First, we estimate non-linear effects of taxes using the range of current rates. Second, we leverage the sudden, relatively large soda tax increase in two states during the early 1990s combined with new synthetic control methods useful for comparative case studies. Our findings suggest virtually no evidence of non-linear or threshold effects. Copyright © 2014 John Wiley & Sons, Ltd.
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
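For context, the standard peaks-over-threshold VaR calculation that such hybrid models build on can be sketched as follows; here the threshold is simply passed in, whereas in the paper it is derived from a wavelet decomposition of the return series.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold, q=0.99):
    """Peaks-over-threshold VaR: fit a GPD to the excesses over 'threshold' and
    invert the tail formula (assumes a non-zero shape parameter xi)."""
    losses = np.asarray(losses, float)
    excesses = losses[losses > threshold] - threshold
    xi, _, sigma = genpareto.fit(excesses, floc=0.0)
    n, n_u = losses.size, excesses.size
    return threshold + (sigma / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
```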
Soil texture analysis revisited: Removal of organic matter matters more than ever
Jensen, Johannes Lund; Schjønning, Per; Watts, Christopher W.; Christensen, Bent T.; Munkholm, Lars J.
2017-01-01
Exact estimates of soil clay (<2 μm) and silt (2–20 μm) contents are crucial as these size fractions impact key soil functions, and as pedotransfer concepts based on clay and silt contents are becoming increasingly abundant. We examined the effect of removing soil organic matter (SOM) by H2O2 before soil dispersion and determination of clay and silt. Soil samples with gradients in SOM were retrieved from three long-term field experiments each with uniform soil mineralogy and texture. For soils with less than 2 g C 100 g-1 minerals, clay estimates were little affected by SOM. Above this threshold, underestimation of clay increased dramatically with increasing SOM content. Silt contents were systematically overestimated when SOM was not removed; no lower SOM threshold was found for silt, but the overestimation was more pronounced for finer textured soils. When exact estimates of soil particles <20 μm are needed, SOM should always be removed before soil dispersion. PMID:28542416
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of image reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with a conventional choice of the number of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a practical basis for selecting the number of compressive measurements. The results also show that, since the number of measurements is selected according to the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
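The sparsity estimate itself can be sketched in a few lines; the function below is an illustrative reading of the described procedure (names and the default energy threshold are assumptions), with the per-block measurement count then chosen in proportion to the returned value.

```python
import numpy as np
from scipy.fft import dctn

def estimate_block_sparsity(block, energy_threshold=0.99):
    """Estimate block sparsity as the proportion of 2-D DCT coefficients needed to
    retain 'energy_threshold' of the total energy (descending-energy ordering)."""
    b = np.asarray(block, float)
    c = dctn(b, norm="ortho")
    energies = np.sort((c ** 2).ravel())[::-1]
    cum = np.cumsum(energies) / energies.sum()
    k = int(np.searchsorted(cum, energy_threshold)) + 1   # number of dominant coefficients
    return k / b.size
```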
The development of rating of perceived exertion-based tests of physical working capacity.
Mielke, Michelle; Housh, Terry J; Malek, Moh H; Beck, Travis W; Schmidt, Richard J; Johnson, Glen O
2008-01-01
The purpose of the present study was to use ratings of perceived exertion (RPE) from the Borg (6-20) and OMNI-Leg (0-10) scales to determine the Physical Working Capacity at the Borg and OMNI thresholds (PWC(BORG) and PWC(OMNI)). PWC(BORG) and PWC(OMNI) were compared with other fatigue thresholds determined from the measurement of heart rate (the Physical Working Capacity at the Heart Rate Threshold: PWC(HRT)), and oxygen consumption (the Physical Working Capacity at the Oxygen Consumption Threshold, PWC(VO2)), as well as the ventilatory threshold (VT). Fifteen men and women volunteers (mean age +/- SD = 22 +/- 1 years) performed an incremental test to exhaustion on an electronically braked ergometer for the determination of VO2 peak and VT. The subjects also performed 4 randomly ordered workbouts to exhaustion at different power outputs (ranging from 60 to 206W) for the determination of PWC(BORG), PWC(OMNI), PWC(HRT), and PWC(VO2). The results indicated that there were no significant mean differences among the fatigue thresholds: PWC(BORG) (mean +/- SD = 133 +/- 37W; 67 +/- 8% of VO2 peak), PWC(OMNI) (137 +/- 44W; 68 +/- 9% of VO2 peak), PWC(HRT) (135 +/- 36W; 68 +/- 8% of VO2 peak), PWC(VO2) (145 +/- 41W; 72 +/- 7% of VO2 peak) and VT (131 +/- 45W; 66 +/- 8% of VO2 peak). The results of this study indicated that the mathematical model used to estimate PWC(HRT) and PWC(VO2) can be applied to ratings of perceived exertion to determine PWC(BORG) and PWC(OMNI) during cycle ergometry. Salient features of the PWC(BORG) and PWC(OMNI) tests are that they are simple to administer and require the use of only an RPE scale, a stopwatch, and a cycle ergometer. Furthermore, the power outputs at the PWC(BORG) and PWC(OMNI) may be useful to estimate the VT noninvasively and without the need for expired gas analysis.
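As I read the slope-based model referred to here, the threshold is the power output at which the fitted response-versus-time slope is zero; the sketch below encodes that reading and is an assumption about the model, not the authors' published code.

```python
import numpy as np

def pwc_fatigue_threshold(powers, bouts):
    """Slope-based physical working capacity threshold: fit the response (RPE, heart
    rate, or VO2) versus time for each constant-power bout, regress those slopes
    against power output, and return the power at which the fitted slope is zero.
    'bouts' is a list of (time, response) pairs, one per work bout."""
    slopes = [np.polyfit(np.asarray(t, float), np.asarray(y, float), 1)[0] for t, y in bouts]
    b, a = np.polyfit(np.asarray(powers, float), slopes, 1)   # slope ~ a + b * power
    return -a / b                                             # power output with zero slope
```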
Strategies for Early Outbreak Detection of Malaria in the Amhara Region of Ethiopia
NASA Astrophysics Data System (ADS)
Nekorchuk, D.; Gebrehiwot, T.; Mihretie, A.; Awoke, W.; Wimberly, M. C.
2017-12-01
Traditional epidemiological approaches to early detection of disease outbreaks are based on relatively straightforward thresholds (e.g. 75th percentile, standard deviations) estimated from historical case data. For diseases with strong seasonality, these can be modified to create separate thresholds for each seasonal time step. However, for disease processes that are non-stationary, more sophisticated techniques are needed to more accurately estimate outbreak threshold values. Early detection for geohealth-related diseases that also have environmental drivers, such as vector-borne diseases, may also benefit from the integration of time-lagged environmental data and disease ecology models into the threshold calculations. The Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) project has been integrating malaria case surveillance with remotely-sensed environmental data for early detection, warning, and forecasting of malaria epidemics in the Amhara region of Ethiopia, and has five years of weekly time series data from 47 woredas (districts). Efforts to reduce the burden of malaria in Ethiopia has been met with some notable success in the past two decades with major reduction in cases and deaths. However, malaria remains a significant public health threat as 60% of the population live in malarious areas, and due to the seasonal and unstable transmission patterns with cyclic outbreaks, protective immunity is generally low which could cause high morbidity and mortality during the epidemics. This study compared several approaches for defining outbreak thresholds and for identifying a potential outbreak based on deviations from these thresholds. We found that model-based approaches that accounted for climate-driven seasonality in malaria transmission were most effective, and that incorporating a trend component improved outbreak detection in areas with active malaria elimination efforts. An advantage of these early detection techniques is that they can detect climate-driven outbreaks as well as outbreaks driven by social factors such as human migration.
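The simple percentile-based baseline mentioned at the start of the abstract can be written in a few lines; the sketch below assumes a toy (years, weeks) layout of historical counts and is only the baseline that the model-based approaches in the study improve upon.

```python
import numpy as np

def seasonal_thresholds(history, percentile=75):
    """Baseline seasonal outbreak thresholds: one value per epidemiological week,
    taken as the chosen percentile of historical counts for that week.
    'history' is a (years, weeks) array of weekly case counts (toy layout)."""
    return np.percentile(np.asarray(history, float), percentile, axis=0)

def flag_outbreaks(current_year, thresholds):
    """Boolean vector marking weeks whose counts exceed their seasonal threshold."""
    return np.asarray(current_year, float) > thresholds
```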
Evaluation of a new model of aeolian transport in the presence of vegetation
Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Miller, Mark E.; Vest, Kimberly; Draut, Amy E.
2013-01-01
Aeolian transport is an important characteristic of many arid and semiarid regions worldwide that affects dust emission and ecosystem processes. The purpose of this paper is to evaluate a recent model of aeolian transport in the presence of vegetation. This approach differs from previous models by accounting for how vegetation affects the distribution of shear velocity on the surface rather than merely calculating the average effect of vegetation on surface shear velocity or simply using empirical relationships. Vegetation, soil, and meteorological data at 65 field sites with measurements of horizontal aeolian flux were collected from the Western United States. Measured fluxes were tested against modeled values to evaluate model performance, to obtain a set of optimum model parameters, and to estimate the uncertainty in these parameters. The same field data were used to model horizontal aeolian flux using three other schemes. Our results show that the model can predict horizontal aeolian flux with an approximate relative error of 2.1 and that further empirical corrections can reduce the approximate relative error to 1.0. The level of error is within what would be expected given uncertainties in threshold shear velocity and wind speed at our sites. The model outperforms the alternative schemes both in terms of approximate relative error and the number of sites at which threshold shear velocity was exceeded. These results lend support to an understanding of the physics of aeolian transport in which (1) vegetation's impact on transport is dependent upon the distribution of vegetation rather than merely its average lateral cover and (2) vegetation impacts surface shear stress locally by depressing it in the immediate lee of plants rather than by changing the bulk surface's threshold shear velocity. Our results also suggest that threshold shear velocity is exceeded more than might be estimated by single measurements of threshold shear stress and roughness length commonly associated with vegetated surfaces, highlighting the variation of threshold shear velocity with space and time in real landscapes.
Fleetcroft, Robert; Steel, Nicholas; Cookson, Richard; Howe, Amanda
2008-06-17
The 2003 revision of the UK GMS contract rewards general practices for performance against clinical quality indicators. Practices can exempt patients from treatment, and can receive maximum payment for less than full coverage of eligible patients. This paper aims to estimate the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care (the pay-performance gap), and to estimate how much of the gap is attributable respectively to thresholds and to exception reporting. Analysis of Quality and Outcomes Framework data in the National Primary Care Database and exception reporting data from the Information Centre from 8407 practices in England in 2005-06. The main outcome measures were the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care at the practice level, both for individual indicators and a combined composite score. An additional outcome was the percentage of that gap attributable respectively to exception reporting and maximum threshold targets set at less than 100%. The mean pay-performance gap for the 65 aggregated clinical indicators was 13.3% (range 2.9% to 48%). 52% of this gap (6.9% of eligible patients) is attributable to thresholds being set at less than 100%, and 48% to patients being exception reported. The gap was greater than 25% in 9 indicators: beta blockers and cholesterol control in heart disease; cholesterol control in stroke; influenza immunization in asthma; blood pressure, sugar and cholesterol control in diabetes; seizures in epilepsy and treatment of hypertension. Threshold targets and exception reporting introduce an incentive ceiling, which substantially reduces the percentage of eligible patients that UK practices need to treat in order to receive maximum incentive payments for delivering that care. There are good clinical reasons for exception reporting, but after unsuitable patients have been exempted from treatment, there is no reason why all maximum thresholds should not be 100%, whilst retaining the current lower thresholds to provide incentives for lower performing practices.
Sugar maple growth in relation to nutrition and stress in the northeastern United States.
Long, Robert P; Horsley, Stephen B; Hallett, Richard A; Bailey, Scott W
2009-09-01
Sugar maple, Acer saccharum, decline disease is incited by multiple disturbance factors when imbalanced calcium (Ca), magnesium (Mg), and manganese (Mn) act as predisposing stressors. Our objective in this study was to determine whether factors affecting sugar maple health also affect growth as estimated by basal area increment (BAI). We used 76 northern hardwood stands in northern Pennsylvania, New York, Vermont, and New Hampshire, USA, and found that sugar maple growth was positively related to foliar concentrations of Ca and Mg and stand level estimates of sugar maple crown health during a high stress period from 1987 to 1996. Foliar nutrient threshold values for Ca, Mg, and Mn were used to analyze long-term BAI trends from 1937 to 1996. Significant (P ≤ 0.05) nutrient threshold-by-time interactions indicate changing growth in relation to nutrition during this period. Healthy sugar maples sampled in the 1990s had decreased growth in the 1970s, 10-20 years in advance of the 1980s and 1990s decline episode in Pennsylvania. Even apparently healthy stands that had no defoliation, but had below-threshold amounts of Ca or Mg and above-threshold Mn (from foliage samples taken in the mid 1990s), had decreasing growth by the 1970s. Co-occurring black cherry, Prunus serotina, in a subset of the Pennsylvania and New York stands, showed opposite growth responses with greater growth in stands with below-threshold Ca and Mg compared with above-threshold stands. Sugar maple growing on sites with the highest concentrations of foliar Ca and Mg show a general increase in growth from 1937 to 1996 while other stands with lower Ca and Mg concentrations show a stable or decreasing growth trend. We conclude that acid deposition induced changes in soil nutrient status that crossed a threshold necessary to sustain sugar maple growth during the 1970s on some sites. While nutrition of these elements has not been considered in forest management decisions, our research shows species specific responses to Ca and Mg that may reduce health and growth of sugar maple or change species composition, if not addressed.
Halloran, Stephen
2017-01-01
Objectives Through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP), men and women in England aged between 60 and 74 years are invited for colorectal cancer (CRC) screening every 2 years using the guaiac faecal occult blood test (gFOBT). The aim of this analysis was to estimate the cost–utility of the faecal immunochemical test for haemoglobin (FIT) compared with gFOBT for a cohort beginning screening aged 60 years at a range of FIT positivity thresholds. Design We constructed a cohort-based Markov state transition model of CRC disease progression and screening. Screening uptake, detection, adverse event, mortality and cost data were taken from BCSP data and national sources, including a recent large pilot study of FIT screening in the BCSP. Results Our results suggest that FIT is cost-effective compared with gFOBT at all thresholds, resulting in cost savings and quality-adjusted life years (QALYs) gained over a lifetime time horizon. FIT was cost-saving (p<0.001) and resulted in QALY gains of 0.014 (95% CI 0.012 to 0.017) at the base case threshold of 180 µg Hb/g faeces. Greater health gains and cost savings were achieved as the FIT threshold was decreased due to savings in cancer management costs. However, at lower thresholds, FIT was also associated with more colonoscopies (increasing from 32 additional colonoscopies per 1000 people invited for screening for FIT 180 µg Hb/g faeces to 421 additional colonoscopies per 1000 people invited for screening for FIT 20 µg Hb/g faeces over a 40-year time horizon). Parameter uncertainty had limited impact on the conclusions. Conclusions This is the first published economic analysis of FIT screening in England using data directly comparing FIT with gFOBT in the NHS BCSP. These results for a cohort starting screening aged 60 years suggest that FIT is highly cost-effective at all thresholds considered. Further modelling is needed to estimate economic outcomes for screening across all age cohorts simultaneously. PMID:29079605
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
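A minimal sketch of the third class of approaches (GoF-based threshold selection) is given below; the Kolmogorov-Smirnov test and the minimum-excess count are my illustrative choices, not the specific metrics reviewed in the paper.

```python
import numpy as np
from scipy.stats import genpareto, kstest

def gof_threshold(data, candidate_thresholds, alpha=0.05, min_excesses=30):
    """Return the lowest candidate threshold for which a GPD fitted to the excesses
    is not rejected by a KS goodness-of-fit test at significance level alpha."""
    data = np.asarray(data, float)
    for u in sorted(candidate_thresholds):
        excesses = data[data > u] - u
        if excesses.size < min_excesses:
            continue
        xi, _, sigma = genpareto.fit(excesses, floc=0.0)
        if kstest(excesses, "genpareto", args=(xi, 0.0, sigma)).pvalue >= alpha:
            return u
    return None
```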
Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes.
Sales, Marcelo M; Campbell, Carmen Sílvia G; Morais, Pâmella K; Ernesto, Carlos; Soares-Caldeira, Lúcio F; Russo, Paulo; Motta, Daisy F; Moreira, Sérgio R; Nakamura, Fábio Y; Simões, Herbert G
2011-01-12
While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) for individuals with type 2 diabetes (T2D) has not been investigated yet. The aim of this study was to analyze the possibility of identifying the heart rate variability threshold (HRVT) by considering the responses of parasympathetic indicators during an incremental exercise test in type 2 diabetic subjects (T2D) and non-diabetic individuals (ND). Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg·m-2) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg·m-2) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. The HRVT was identified through the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1) over the last 60 s of each incremental stage, denoted HRVT-RMSSD and HRVT-SD1, respectively. No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters for both T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid for estimating the LT and VT in both T2D and ND.
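The two parasympathetic indicators used here have standard definitions that can be computed from a series of R-R intervals; a minimal sketch follows (the stage-by-stage thresholding logic of the HRVT itself is not reproduced).

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    d = np.diff(np.asarray(rr_ms, float))
    return float(np.sqrt(np.mean(d ** 2)))

def sd1(rr_ms):
    """Poincare-plot short-axis index: SD1 = std of successive differences / sqrt(2)."""
    d = np.diff(np.asarray(rr_ms, float))
    return float(np.std(d, ddof=1) / np.sqrt(2.0))
```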
Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E
2011-06-01
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded using MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥5 ml or <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within ±10%, for volumes in the range 20-110 ml. The preliminary application to patients demonstrated the suitability of the method in a clinical setting. The MC-guided delineation of tumor volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumors of irregular shape and/or nonuniform distribution of the background activity are still in progress.
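The iterative loop (i)-(iii) can be sketched in a few lines. The published algorithm is coded in MATLAB; the Python skeleton below is a language-neutral illustration in which `threshold_fraction(sbr, volume)` and `recovery_coeff(volume)` stand in for the experimental threshold-volume and RC calibration curves, and the exact SBR definition is an assumption.

```python
import numpy as np

def rithm_like_segmentation(img, bg_mean, voxel_ml, threshold_fraction, recovery_coeff,
                            v0=10.0, n_iter=30, tol=1e-3):
    """Calibration-driven iterative thresholding loop in the spirit of RIThM.
    'threshold_fraction' and 'recovery_coeff' are user-supplied calibration curves."""
    volume, peak = v0, float(img.max())
    thr = peak
    for _ in range(n_iter):
        # (i) source-to-background ratio, corrected for partial-volume losses via the RC curve
        sbr = (peak / bg_mean) / recovery_coeff(volume)
        # (ii) calibration gives the relative threshold for this SBR and volume estimate
        thr = threshold_fraction(sbr, volume) * peak
        # (iii) re-threshold the image and update the volume estimate
        new_volume = float(np.count_nonzero(img >= thr)) * voxel_ml
        if abs(new_volume - volume) <= tol * max(volume, 1e-9):
            return new_volume, thr
        volume = new_volume
    return volume, thr
```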
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory
NASA Astrophysics Data System (ADS)
Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng
Financial institutions and supervisory bodies have agreed on the need to strengthen the measurement and management of operational risks. This paper builds a model of operational risk losses based on the Peak Over Threshold model, emphasizing a weighted least squares refinement of Hill's estimation method, discussing the small-sample situation, and fixing the sample threshold more objectively on the basis of media-published data on the operational risk losses of major Chinese banks from 1994 to 2007.
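For reference, the classical Hill estimator that the weighted least squares approach refines can be written as below; the refinement itself is not reproduced.

```python
import numpy as np

def hill_tail_index(losses, k):
    """Classical Hill estimator of the tail index from the k largest losses:
    the mean of log(X_(i)) - log(X_(k+1)) over the top-k order statistics."""
    x = np.sort(np.asarray(losses, float))[::-1]
    return float(np.mean(np.log(x[:k]) - np.log(x[k])))
```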
Hour-glass ceilings: Work-hour thresholds, gendered health inequities.
Dinh, Huong; Strazdins, Lyndall; Welsh, Jennifer
2017-03-01
Long workhours erode health, which the setting of maximum weekly hours aims to avert. This 48-h limit, and the evidence base to support it, has evolved from a workforce that was largely male, whose time in the labour force was enabled by women's domestic work and care giving. The gender composition of the workforce has now changed, and many women (as well as some men) combine care-giving with paid work, a change viewed as fundamental for gender equality. However, it raises questions on the suitability of the work time limit and the extent to which it is protective of health. We estimate workhour-mental health thresholds, testing if they vary for men and women due to gendered workloads and constraints on and off the job. Using six waves of data from a nationally representative sample of Australian adults (24-65 years), surveyed in the Household, Income and Labour Dynamics in Australia (HILDA) Survey (N = 3828 men; 4062 women), our study uses a longitudinal, simultaneous equation approach to address endogeneity. Averaging over the sample, we find an overall threshold of 39 h per week beyond which mental health declines. Separate curves then estimate thresholds for men and women, by high or low care and domestic time constraints, using stratified and pooled samples. We find gendered workhour-health limits (43.5 h for men, 38 h for women) which widen further once differences in resources on and off the job are considered. Only when time is 'unencumbered' and similar time constraints and contexts are assumed, do gender gaps narrow and thresholds approximate the 48-h limit. Our study reveals limits to contemporary workhour regulation which may be systematically disadvantaging women's health. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Impact of Climate Change on Ozone-Related Mortality in Sydney
Physick, William; Cope, Martin; Lee, Sunhee
2014-01-01
Coupled global, regional and chemical transport models are now being used with relative-risk functions to determine the impact of climate change on human health. Studies have been carried out for global and regional scales, and in our paper we examine the impact of climate change on ozone-related mortality at the local scale across an urban metropolis (Sydney, Australia). Using three coupled models, with a grid spacing of 3 km for the chemical transport model (CTM), and a mortality relative risk function of 1.0006 per 1 ppb increase in daily maximum 1-hour ozone concentration, we evaluated the change in ozone concentrations and mortality between decades 1996–2005 and 2051–2060. The global model was run with the A2 emissions scenario. As there is currently uncertainty regarding a threshold concentration below which ozone does not impact on mortality, we calculated mortality estimates for the three daily maximum 1-hr ozone concentration thresholds of 0, 25 and 40 ppb. The mortality increase for 2051–2060 ranges from 2.3% for a 0 ppb threshold to 27.3% for a 40 ppb threshold, although the numerical increases differ little. Our modeling approach is able to identify the variation in ozone-related mortality changes at a suburban scale, estimating that climate change could lead to an additional 55 to 65 deaths across Sydney in the decade 2051–2060. Interestingly, the largest increases do not correspond spatially to the largest ozone increases or the densest population centres. The distribution pattern of changes does not seem to vary with threshold value, while the magnitude only varies slightly. PMID:24419047
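The mortality calculation behind such estimates follows a standard log-linear health-impact formula applied above a concentration threshold; a toy daily version is sketched below with hypothetical inputs, not the coupled-model chain used in the study.

```python
def ozone_attributable_deaths(baseline_deaths, ozone_ppb, threshold_ppb, rr_per_ppb=1.0006):
    """Attributable daily deaths: RR = rr_per_ppb**(ozone - threshold) above the
    threshold, attributable fraction (RR - 1) / RR, applied to baseline deaths."""
    excess = max(ozone_ppb - threshold_ppb, 0.0)
    rr = rr_per_ppb ** excess
    return baseline_deaths * (rr - 1.0) / rr
```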
NASA Astrophysics Data System (ADS)
Parravicini, Paola; Cislaghi, Matteo; Condemi, Leonardo
2017-04-01
ARPA Lombardia is the Environmental Protection Agency of Lombardy, a large region in northern Italy. ARPA is in charge of river monitoring for both Civil Protection and water balance purposes. It cooperates with the Civil Protection Agency of Lombardy (RL-PC) in flood forecasting and early warning. The early warning system is based on rainfall and discharge thresholds: when a threshold exceedance is expected, RL-PC disseminates an alert ranging from yellow to red. Conventional threshold evaluation is based on events with a fixed return period. However, the impacts of events with the same return period may differ along the river course because of the specific characteristics of the affected areas. A new approach is therefore introduced. It defines different scenarios, corresponding to different flood impacts. A discharge threshold is then associated with each scenario, and the return period of the scenario is computed backwards. Flood scenarios are defined in accordance with National Civil Protection guidelines, which describe the expected flood impact and associate a colour with each scenario, from green (no relevant effects) to red (major floods). A range of discharges is associated with each scenario, since they cause the same flood impact; the threshold is set as the discharge corresponding to the transition between two scenarios. A wide range of event-based information is used to estimate the thresholds. As a first guess, the thresholds are estimated from hydraulic model outputs and the people or infrastructure flooded according to the simulations. The model estimates are then validated against real-event knowledge: local Civil Protection Emergency Plans usually contain very detailed descriptions of local impacts at known river levels or discharges, RL-PC collects flooding information notified by the population, newspapers often report flood events on the web, and data from the river monitoring network provide evaluations of the levels and discharges that actually occurred. The methodology allows a return period to be assigned to each scenario. The return period may vary along the river course according to the discharges associated with the scenario. The values of the return period can highlight the areas characterized by higher risk and can be an important basis for civil protection emergency planning and river monitoring. For example, considering the Lambro River, the red scenario (major flood) shows a return period of 50 years in the northern rural part of the catchment. When the river crosses the city of Milan, the return period drops to 4 years. Afterwards it rises to more than 100 years where the river flows through the agricultural areas in the southern part of the catchment. In addition, the knowledge gained with the event-based analysis allows the compliance of the monitoring network with early warning requirements to be evaluated and represents the starting point for further development of the network itself.
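Computing the return period backwards from a scenario's discharge threshold amounts to a frequency analysis of annual maxima; a minimal sketch follows, in which the Gumbel distribution is my illustrative choice and not necessarily the operational procedure.

```python
import numpy as np
from scipy.stats import gumbel_r

def scenario_return_period(annual_max_discharge, threshold_discharge):
    """Return period implied by a scenario's discharge threshold: fit a Gumbel
    distribution to annual maxima and invert the exceedance probability."""
    loc, scale = gumbel_r.fit(np.asarray(annual_max_discharge, float))
    p_exceed = max(1.0 - gumbel_r.cdf(threshold_discharge, loc=loc, scale=scale), 1e-12)
    return 1.0 / p_exceed
```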
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter, EnKF, a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Owing to the high computational cost of large ensemble sizes, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices. These include the hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity differ. Beside the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be performed judiciously during the early assimilation cycles.
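The three named thresholding rules have standard entrywise forms, sketched below; how the threshold level itself is adapted across assimilation cycles is not reproduced here.

```python
import numpy as np

def soft_threshold(c, lam):
    """Soft (lasso-type) thresholding, applied entrywise to covariance or gain entries."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def hard_threshold(c, lam):
    """Hard thresholding: keep entries whose magnitude exceeds lam, zero the rest."""
    return np.where(np.abs(c) > lam, c, 0.0)

def scad_threshold(c, lam, a=3.7):
    """SCAD thresholding rule (Fan & Li): shrinks small entries like the soft rule
    but leaves large entries untouched."""
    absc = np.abs(c)
    out = np.where(absc <= 2.0 * lam, soft_threshold(c, lam), c)
    mid = (absc > 2.0 * lam) & (absc <= a * lam)
    return np.where(mid, ((a - 1.0) * c - np.sign(c) * a * lam) / (a - 2.0), out)
```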
Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro
2011-01-01
The aim of this paper is to classify land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a digital colour camera with a CCD sensor. Unsupervised classification methods are applied because the plants present different spectral signatures depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour into three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is obtained automatically as a combination of three thresholding strategies: (a) Otsu’s method, (b) the Isodata algorithm, and (c) fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are among the main findings of the paper, and they allow an estimation of the damage and a prediction of oat production. PMID:22163940
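Of the three strategies, Otsu's method is the most compact to state; a minimal histogram-based version is sketched below. How the Otsu, Isodata and fuzzy thresholds are fused (for example by averaging or voting) is the paper's contribution and is not reproduced.

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Otsu's method on one CIELab channel: choose the histogram cut that maximizes
    the between-class variance."""
    hist, edges = np.histogram(np.asarray(channel).ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each cut
    mu = np.cumsum(p * centers)          # class-0 mean (unnormalized)
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]
```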
Universal phase transition in community detectability under a stochastic block model.
Chen, Pin-Yu; Hero, Alfred O
2015-03-01
We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p1 p2), where pi (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
Bowker, Matthew A.; Miller, Mark E.; Belote, R. Travis; Garman, Steven L.
2013-01-01
Threshold concepts are used in research and management of ecological systems to describe and interpret abrupt and persistent reorganization of ecosystem properties (Walker and Meyers, 2004; Groffman and others, 2006). Abrupt change, referred to as a threshold crossing, and the progression of reorganization can be triggered by one or more interactive disturbances such as land-use activities and climatic events (Paine and others, 1998). Threshold crossings occur when feedback mechanisms that typically absorb forces of change are replaced with those that promote development of alternative equilibria or states (Suding and others, 2004; Walker and Meyers, 2004; Briske and others, 2008). The alternative states that emerge from a threshold crossing vary and often exhibit reduced ecological integrity and value in terms of management goals relative to the original or reference system. Alternative stable states with some limited residual properties of the original system may develop along the progression after a crossing; an eventual outcome may be the complete loss of pre-threshold properties of the original ecosystem. Reverting to the more desirable reference state through ecological restoration becomes increasingly difficult and expensive along the progression gradient and may eventually become impossible. Ecological threshold concepts have been applied as a heuristic framework and to aid in the management of rangelands (Bestelmeyer, 2006; Briske and others, 2006, 2008), aquatic (Scheffer and others, 1993; Rapport and Whitford 1999), riparian (Stringham and others, 2001; Scott and others, 2005), and forested ecosystems (Allen and others, 2002; Digiovinazzo and others, 2010). These concepts are also topical in ecological restoration (Hobbs and Norton 1996; Whisenant 1999; Suding and others, 2004; King and Hobbs, 2006) and ecosystem sustainability (Herrick, 2000; Chapin and others, 1996; Davenport and others, 1998). Achieving conservation management goals requires the protection of resources within the range of desired conditions (Cook and others, 2010). The goal of conservation management for natural resources in the U.S. National Park System is to maintain native species and habitat unimpaired for the enjoyment of future generations. Achieving this goal requires, in part, early detection of system change and timely implementation of remediation. The recent National Park Service Inventory and Monitoring program (NPS I&M) was established to provide early warning of declining ecosystem conditions relative to a desired native or reference system (Fancy and others, 2009). To be an effective tool for resource protection, monitoring must be designed to alert managers of impending thresholds so that preventive actions can be taken. This requires an understanding of the ecosystem attributes and processes associated with threshold-type behavior; how these attributes and processes become degraded; and how risks of degradation vary among ecosystems and in relation to environmental factors such as soil properties, climatic conditions, and exposure to stressors. In general, the utility of the threshold concept for long-term monitoring depends on the ability of scientists and managers to detect, predict, and prevent the occurrence of threshold crossings associated with persistent, undesirable shifts among ecosystem states (Briske and others, 2006). 
Because of the scientific challenges associated with understanding these factors, the application of threshold concepts to monitoring designs has been very limited to date (Groffman and others, 2006). As a case in point, the monitoring efforts across the 32 NPS I&M networks were largely designed with the knowledge that they would not be used to their full potential until the development of a systematic method for understanding threshold dynamics and methods for estimating key attributes of threshold crossings. This report describes and demonstrates a generalized approach that we implemented to formalize the understanding and estimation of threshold dynamics for terrestrial dryland ecosystems in national parks of the Colorado Plateau. We provide a structured approach to identify and describe degradation processes associated with threshold behavior and to estimate indicator levels that characterize the point at which a threshold crossing has occurred or is imminent (tipping points) or points where investigative or preventive management action should be triggered (assessment points). We illustrate this method for several case studies in national parks included in the Northern and Southern Colorado Plateau NPS I&M networks, where historical livestock grazing, climatic change, and invasive species are key agents of change. The approaches developed in these case studies are intended to enhance the design, effectiveness, and management relevance of monitoring efforts in support of conservation management in dryland systems. They specifically enhance National Park Service (NPS) capacity for protecting park resources on the Colorado Plateau but have applicability to monitoring and conservation management of dryland ecosystems worldwide.
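To make the distinction between assessment points and tipping points concrete, the sketch below screens a monitored indicator time series against two trigger levels. The indicator, the trigger values, and the rule that an alert fires after consecutive low surveys are illustrative assumptions for this sketch, not values from the report.

```python
from dataclasses import dataclass

@dataclass
class TriggerLevels:
    assessment_point: float  # level that triggers investigative or preventive action
    tipping_point: float     # level at which a threshold crossing is judged imminent or occurred

def evaluate_indicator(series, levels, consecutive=2):
    """Return the first survey index at which the indicator sits at or below each
    trigger level for `consecutive` surveys in a row (assumes lower = more degraded)."""
    alerts = {"assessment": None, "tipping": None}
    run_a = run_t = 0
    for i, value in enumerate(series):
        run_a = run_a + 1 if value <= levels.assessment_point else 0
        run_t = run_t + 1 if value <= levels.tipping_point else 0
        if alerts["assessment"] is None and run_a >= consecutive:
            alerts["assessment"] = i
        if alerts["tipping"] is None and run_t >= consecutive:
            alerts["tipping"] = i
    return alerts

# Hypothetical perennial grass cover (%) from repeated surveys of one monitoring plot.
cover = [42, 38, 35, 31, 27, 24, 19, 15]
print(evaluate_indicator(cover, TriggerLevels(assessment_point=30, tipping_point=20)))
# -> {'assessment': 5, 'tipping': 7}
```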
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and to improve the threshold selection process, thereby minimizing error in comparative qPCR analysis. The algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold at which the efficiencies of both amplicons are defined. From this fluorescence threshold, the cycle threshold (Ct) and its error are calculated for both amplicons and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10%, compared with 4 to 34% for the threshold method. In contrast, ratios determined by the extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
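The general idea can be sketched in simplified form: fit a line to log-fluorescence over a sliding window to locate the exponential phase of each amplicon, pick a common fluorescence threshold inside both exponential regions, and convert the resulting Ct values and per-cycle efficiencies into an expression ratio. This is not the published Q-Anal code; the window length, the choice of threshold (geometric midpoint of the overlapping exponential ranges), and the ratio formula (initial template proportional to threshold / E^Ct) are simplifying assumptions.

```python
import numpy as np

def exponential_fit(fluor, win=5):
    """Slide a window over log10(fluorescence) and keep the positive-slope fit
    with the smallest residual error: an estimate of the exponential phase."""
    logf = np.log10(np.asarray(fluor, dtype=float))
    cycles = np.arange(len(logf))
    best = None
    for s in range(len(logf) - win + 1):
        x, y = cycles[s:s + win], logf[s:s + win]
        slope, intercept = np.polyfit(x, y, 1)
        sse = float(np.sum((y - (slope * x + intercept)) ** 2))
        if slope > 0 and (best is None or sse < best["sse"]):
            best = {"sse": sse, "slope": slope, "intercept": intercept,
                    "lo": 10 ** y[0], "hi": 10 ** y[-1]}
    return best

def expression_ratio(target_fluor, reference_fluor):
    """Relative target/reference amount from a shared threshold chosen where
    both amplicons are still in their exponential phase."""
    tgt, ref = exponential_fit(target_fluor), exponential_fit(reference_fluor)
    # Shared threshold: geometric midpoint of the overlap of the two exponential ranges.
    low, high = max(tgt["lo"], ref["lo"]), min(tgt["hi"], ref["hi"])
    threshold = np.sqrt(low * high)
    def ct(fit):   # cycle at which the fitted line reaches the threshold
        return (np.log10(threshold) - fit["intercept"]) / fit["slope"]
    def eff(fit):  # per-cycle amplification factor implied by the slope
        return 10 ** fit["slope"]
    # Initial template is proportional to threshold / E**Ct for each amplicon.
    return (threshold / eff(tgt) ** ct(tgt)) / (threshold / eff(ref) ** ct(ref))
```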
Do Shale Pore Throats Have a Threshold Diameter for Oil Storage?
Zou, Caineng; Jin, Xu; Zhu, Rukai; Gong, Guangming; Sun, Liang; Dai, Jinxing; Meng, Depeng; Wang, Xiaoqi; Li, Jianming; Wu, Songtao; Liu, Xiaodan; Wu, Juntao; Jiang, Lei
2015-08-28
In this work, a nanoporous template with a controllable channel diameter was used to simulate the oil storage ability of shale pore throats. On the basis of the wetting behaviours at the nanoscale solid-liquid interfaces, the seepage of oil in nano-channels of different diameters was examined to accurately and systematically determine the effect of the pore diameter on the oil storage capacity. The results indicated that the lower threshold for oil storage was a pore throat of 20 nm, under certain conditions. This proposed pore size threshold provides novel, evidence-based criteria for estimating the geological reserves, recoverable reserves and economically recoverable reserves of shale oil. This new understanding of shale oil processes could revolutionize the related industries.
Effects of Directed Energy Weapons
1994-01-01
...them, and led to the law of conservation of energy. 2. The estimate of the energy it takes to brew a cup of coffee assumes that it is a 6 oz cup... the thermal diffusivity of the target material (see Figure 1–5). We can use this result to estimate the threshold for melting. A laser of intensity S...
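The melting-threshold fragment above can be made concrete with the standard constant-flux heating result for a semi-infinite solid, in which the surface temperature rise grows as the square root of exposure time. The formula and the aluminum-like constants below are textbook values used purely for illustration; they are not taken from the report.

```python
import math

# Constant surface flux S on a semi-infinite solid (standard heat-conduction result):
#   delta_T(t) = (2 * S / k) * sqrt(kappa * t / pi)
# Solving delta_T = T_melt - T0 for t gives the time to reach the melting point.

def time_to_melt(S, k, kappa, T_melt, T0=300.0):
    """Time (s) for the surface to reach the melting point under intensity S (W/m^2)."""
    dT = T_melt - T0
    return math.pi * (dT * k / (2.0 * S)) ** 2 / kappa

# Illustrative aluminum-like constants (approximate):
k = 237.0        # thermal conductivity, W/(m K)
kappa = 9.7e-5   # thermal diffusivity, m^2/s
T_melt = 933.0   # melting point, K

print(time_to_melt(S=1e8, k=k, kappa=kappa, T_melt=T_melt))  # ~0.018 s at 10 kW/cm^2
```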
Regulating the medical loss ratio: implications for the individual market.
Abraham, Jean M; Karaca-Mandic, Pinar
2011-03-01
To provide state-level estimates of the size and structure of the US individual market for health insurance and to investigate the potential impact of new medical loss ratio (MLR) regulation in 2011, as indicated by the Patient Protection and Affordable Care Act (PPACA). Using data from the National Association of Insurance Commissioners, we provided state-level estimates of the size and structure of the US individual market from 2002 to 2009. We estimated the number of insurers expected to have MLRs below the legislated minimum and their corresponding enrollment. In the case of noncompliant insurers exiting the market, we estimated the number of enrollees that may be vulnerable to major coverage disruption given poor health status. In 2009, using a PPACA-adjusted MLR definition, we estimated that 29% of insurer-state observations in the individual market would have MLRs below the 80% minimum, corresponding to 32% of total enrollment. Nine states would have at least one-half of their health insurers below the threshold. If insurers below the MLR threshold exit the market, major coverage disruption could occur for those in poor health; we estimated the range to be between 104,624 and 158,736 member-years. The introduction of MLR regulation as part of the PPACA has the potential to significantly affect the functioning of the individual market for health insurance.
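As a rough illustration of the kind of screening behind these estimates, the sketch below computes a simplified medical loss ratio for each insurer-state record and flags those below the 80% individual-market minimum, along with the member-years they cover. The record fields and the simplified MLR formula (claims plus quality-improvement spending, divided by premiums net of taxes and fees) are illustrative stand-ins, not the NAIC data definitions used in the study.

```python
from dataclasses import dataclass

@dataclass
class InsurerStateRecord:
    insurer: str
    state: str
    incurred_claims: float        # medical claims paid
    quality_improvement: float    # qualifying quality-improvement expenses
    earned_premiums: float
    taxes_and_fees: float
    member_years: float

def adjusted_mlr(r):
    """Simplified PPACA-style MLR: (claims + QI) / (premiums - taxes and fees)."""
    return (r.incurred_claims + r.quality_improvement) / (r.earned_premiums - r.taxes_and_fees)

def flag_noncompliant(records, minimum=0.80):
    """Return records below the minimum MLR and the member-years they cover."""
    below = [r for r in records if adjusted_mlr(r) < minimum]
    return below, sum(r.member_years for r in below)

records = [
    InsurerStateRecord("A", "MN", 70.0, 2.0, 100.0, 3.0, 12_000),
    InsurerStateRecord("B", "MN", 85.0, 1.0, 100.0, 2.0, 8_000),
]
below, exposed = flag_noncompliant(records)
print([r.insurer for r in below], exposed)  # ['A'] 12000
```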
Gustafson, Samantha; Pittman, Andrea; Fanning, Robert
2013-06-01
This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. In repeated measures analyses of variance, threshold and RECD measures obtained with a personal earmold differed significantly from those obtained with a foam tip. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and the RECD, equal and accurate estimates of real-ear sound pressure level are obtained.
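The closing point, that threshold and RECD shifts cancel when the same coupling is used, can be illustrated with the usual additive conversion from audiometric threshold to estimated real-ear SPL. The RETSPL value and the tubing-length offset in this sketch are made-up placeholders; only the structure of the calculation (threshold plus coupler reference level plus RECD) follows standard practice.

```python
# Estimated real-ear SPL at threshold: dB HL threshold + coupler reference level (RETSPL) + RECD.
# If longer tubing lowers the SPL delivered to the ear by `delta` dB, the measured HL threshold
# rises by `delta` while the measured RECD falls by `delta`, so the sum is unchanged.

def real_ear_spl(threshold_hl_db, retspl_db, recd_db):
    return threshold_hl_db + retspl_db + recd_db

retspl_4k = 5.5   # placeholder coupler reference level at 4 kHz (dB)
delta = 7.0       # placeholder tubing-length effect at 4 kHz (dB)

foam_tip = real_ear_spl(threshold_hl_db=10.0, retspl_db=retspl_4k, recd_db=12.0)
earmold  = real_ear_spl(threshold_hl_db=10.0 + delta, retspl_db=retspl_4k, recd_db=12.0 - delta)
print(foam_tip, earmold)  # both 27.5: the effects cancel when the coupling is consistent
```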