King, Michael J.; Sanchez, Roberto J.; Moss, William C.
2013-03-19
A passive blast pressure sensor for detecting blast overpressures of at least a predetermined minimum threshold pressure. The blast pressure sensor includes a piston-cylinder arrangement with one end of the piston having a detection surface exposed to a blast event monitored medium through one end of the cylinder and the other end of the piston having a striker surface positioned to impact a contact stress sensitive film that is positioned against a strike surface of a rigid body, such as a backing plate. The contact stress sensitive film is of a type which changes color in response to at least a predetermined minimum contact stress which is defined as a product of the predetermined minimum threshold pressure and an amplification factor of the piston. In this manner, a color change in the film arising from impact of the piston accelerated by a blast event provides visual indication that a blast overpressure encountered from the blast event was not less than the predetermined minimum threshold pressure.
Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.
2013-04-16
A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
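A minimal Python sketch of the voltage-spread check described in this abstract. All threshold values, the current-density window, and the function interface are illustrative assumptions, not the patented calibration.

```python
# Hedged sketch of the low-performing-cell check described above.
# Numeric thresholds are placeholders for illustration only.

def update_low_cell_timer(cell_voltages, stack_running, coolant_temp_c,
                          current_density, low_cell_timer, run_timer, dt,
                          min_coolant_temp_c=50.0,        # assumed enable temperature
                          low_power_range=(0.05, 0.2),    # assumed low-power window (A/cm^2)
                          spread_threshold=0.15,          # assumed avg-minus-min limit (V)
                          min_voltage_threshold=0.55):    # assumed absolute minimum (V)
    """Increment the low-performing-cell timer when the enable conditions hold."""
    run_timer += dt
    enabled = (stack_running
               and coolant_temp_c > min_coolant_temp_c
               and low_power_range[0] <= current_density <= low_power_range[1])
    if enabled:
        avg_v = sum(cell_voltages) / len(cell_voltages)
        min_v = min(cell_voltages)
        if (avg_v - min_v) > spread_threshold and min_v < min_voltage_threshold:
            low_cell_timer += dt
    # The ratio of the two timers is the low-performing-cell indicator.
    ratio = low_cell_timer / run_timer if run_timer > 0 else 0.0
    return low_cell_timer, run_timer, ratio
```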
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph; Skofronick-Jackson, Gail
2012-01-01
During the middle part of this decade a wide variety of passive microwave imagers and sounders will be unified in the Global Precipitation Measurement (GPM) mission to provide a common basis for frequent (3 hr), global precipitation monitoring. The ability of these sensors to detect precipitation by discerning it from the non-precipitating background depends upon the channels available and the characteristics of the surface and atmosphere. This study quantifies the minimum detectable precipitation rate and fraction of precipitation detected for four representative instruments (TMI, GMI, AMSU-A, and AMSU-B) that will be part of the GPM constellation. Observations for these instruments were constructed from equivalent channels on the SSMIS instrument on DMSP satellites F16 and F17 and matched to precipitation data from NOAA's National Mosaic and QPE (NMQ) during 2009 over the contiguous United States. A variational optimal estimation retrieval of non-precipitation surface and atmosphere parameters was used to determine the consistency between the observed brightness temperatures and these parameters, with high cost function values shown to be related to precipitation. The minimum detectable precipitation rate, defined as the lowest rate for which the probability of detection exceeds 50%, and the detected fraction of precipitation are reported for each sensor, surface type (ocean, coast, bare land, snow cover) and precipitation type (rain, mix, snow). The best sensors over ocean and bare land were GMI (0.22 mm/hr minimum threshold and 90% of precipitation detected) and AMSU (0.26 mm/hr minimum threshold and 81% of precipitation detected), respectively. Over coasts (0.74 mm/hr threshold and 12% detected) and snow-covered surfaces (0.44 mm/hr threshold and 23% detected), AMSU again performed best but with much lower detection skill, whereas TMI had no skill over these surfaces. The sounders (particularly over water) benefited from the use of re-analysis data (vs. climatology) to set the a priori atmospheric state, and all instruments benefited from the use of a conditional snow cover emissivity database over land. It is recommended that real-time sources of these data be used in the operational GPM precipitation algorithms.
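The detection-threshold definition used above (the lowest rate at which the probability of detection exceeds 50%) can be illustrated with a short sketch; the binning scheme and variable names are assumptions, not the study's code.

```python
import numpy as np

def minimum_detectable_rate(rates, detected, bin_edges):
    """Lowest precipitation-rate bin whose probability of detection exceeds 50%.

    rates    : matched surface precipitation rates (mm/hr), one per observation
    detected : boolean array, True where the retrieval flagged precipitation
    """
    rates = np.asarray(rates, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (rates >= lo) & (rates < hi)
        if in_bin.sum() == 0:
            continue
        pod = detected[in_bin].mean()     # probability of detection in this bin
        if pod > 0.5:
            return lo                     # report the lower edge of the first bin above 50% POD
    return np.nan                         # POD never reaches 50% within the sampled range
```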
Flores, Shahida; Sun, Jie; King, Jonathan; Budowle, Bruce
2014-05-01
The GlobalFiler™ Express PCR Amplification Kit uses 6-dye fluorescent chemistry to enable multiplexing of 21 autosomal STRs, 1 Y-STR, 1 Y-indel and the sex-determining marker amelogenin. The kit is specifically designed for processing reference DNA samples in a high-throughput manner. Validation studies were conducted to assess the performance and define the limitations of this direct amplification kit for typing blood and buccal reference DNA samples on various punchable collection media. Studies included thermal cycling sensitivity, reproducibility, precision, sensitivity of detection, minimum detection threshold, system contamination, stochastic threshold and concordance. Results showed that optimal amplification and injection parameters for a 1.2 mm punch from blood and buccal samples were 27 and 28 cycles, respectively, combined with a 12 s injection on an ABI 3500xL Genetic Analyzer. Minimum detection thresholds were set at 100 and 120 RFUs for 27 and 28 cycles, respectively, and it was suggested that data from positive amplification controls provided a better threshold representation. Stochastic thresholds were set at 250 and 400 RFUs for 27 and 28 cycles, respectively, as stochastic effects increased with cycle number. The minimum amount of input DNA resulting in a full profile was 0.5 ng; however, the optimum range determined was 2.5-10 ng. Profile quality from the GlobalFiler™ Express Kit and the previously validated AmpFlSTR® Identifiler® Direct Kit was comparable. The validation data support that reliable DNA typing results from reference DNA samples can be obtained using the GlobalFiler™ Express PCR Amplification Kit. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi
2018-02-06
This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, the previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground states. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the STTTA reliability is determined by comparing the results of the STTTA with those of the Mariani method, referenced as the timing analysis module (TAM), and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtain high reliability when compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
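A hedged sketch of how a self-tuning threshold of this kind might be updated from running force extrema. The fractions of the force range and the absence of any reset logic are assumptions for illustration, not the published STTTA tuning.

```python
# Illustrative self-tuning thresholds: track running extrema of the ground
# contact force (GCF) and re-derive three thresholds as fixed fractions of
# the observed range.  Fractions below are arbitrary placeholders.

class SelfTuningThresholds:
    def __init__(self, high_frac=0.30, mid_frac=0.15, low_frac=0.05):
        self.gcf_max, self.gcf_min = 0.0, float("inf")
        self.high_frac, self.mid_frac, self.low_frac = high_frac, mid_frac, low_frac
        self.high = self.mid = self.low = 0.0

    def update(self, gcf):
        """Feed one GCF sample; returns True when the foot is judged on-ground."""
        self.gcf_max = max(self.gcf_max, gcf)
        self.gcf_min = min(self.gcf_min, gcf)
        span = self.gcf_max - self.gcf_min
        # Re-derive the three thresholds from the observed force range.
        self.low = self.gcf_min + self.low_frac * span
        self.mid = self.gcf_min + self.mid_frac * span
        self.high = self.gcf_min + self.high_frac * span
        # The high threshold is the main on-ground / off-ground decision level.
        return gcf > self.high
```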
Meixler, Lewis D.
1993-01-01
The low flow monitor provides a means for determining whether a fluid flow meets a minimum threshold level of flow. The low flow monitor operates with a minimum of intrusion by the flow detection device into the flow. The electrical portion of the monitor is located externally to the fluid stream, which allows for repairs to the monitor without disrupting the flow. The electronics provide for adjustment of the threshold level to meet the required conditions. The apparatus can be modified to monitor an upper flow limit as well, by adding a parallel electronic circuit that brackets the desired flow rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, W. Geoffrey; Gray, David Clinton
Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low-contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels will utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with a SNRpixel = 4.335 and a CNRlesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum threshold that must be attained to guard against false positive lesion detection. Conclusion: Low-contrast resolution, or detectability, can be properly tested in a manner that is clinically relevant by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROIs placed in the background and cold sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and a CNRlesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to prove acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
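The ROI-based contrast measurement and threshold comparison described in the conclusion can be sketched as follows. The contrast definition used here (background minus sphere, normalized by background) is a common convention assumed for illustration rather than taken from AAPM Report #52.

```python
import numpy as np

def cold_sphere_contrast(background_roi, sphere_roi):
    """Contrast of a cold sphere from ROI pixel values: the background mean
    minus the sphere mean, normalized by the background mean."""
    bkg = np.mean(background_roi)
    sph = np.mean(sphere_roi)
    return (bkg - sph) / bkg

def passes_low_contrast_test(measured_contrast, minimum_threshold):
    """The measured contrast must meet or exceed the Rose-criterion minimum."""
    return measured_contrast >= minimum_threshold

# Example with the minimum thresholds quoted above for the 25.4 mm and 31.8 mm spheres.
print(passes_low_contrast_test(0.35, 0.317))   # True  -> acceptable detectability
print(passes_low_contrast_test(0.25, 0.283))   # False -> fails the minimum threshold
```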
Event-related potential measures of gap detection threshold during natural sleep.
Muller-Gass, Alexandra; Campbell, Kenneth
2014-08-01
The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. These subjects were presented with 2, 4, 8 or 16 ms gaps occurring in 1.5 s duration white noise. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. A large P2 was however elicited and was significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and a P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph
2013-01-01
There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g/m^3 detected at the surface, or an approximately 0.5-1.0 mm/hr melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing-impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, the measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, the best response detection is obtained when noise-weighted averaging is applied to single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows near-optimal signal-to-noise ratios to be recorded for 80% of subjects.
Multi-thresholds for fault isolation in the presence of uncertainties.
Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel
2016-05-01
Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed using residuals. These residuals are numerical values that define certain intervals called thresholds; a fault is detected when the residuals exceed their thresholds. In addition, each considered fault must activate a unique set of residuals in order to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals to faults. In this paper, an efficient approach for making decisions on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values, which are then used to calculate the isolation thresholds for each residual. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Deming, D.; Espenak, F.; Jennings, D. E.; Brault, J. W.
1986-01-01
The threshold mass for the unambiguous spectroscopic detection of low-mass companions to solar-type stars is defined here by the condition that the maximum acceleration in the stellar radial velocity due to the Doppler reflex of the companion exceed the apparent acceleration produced by changes in convection. An apparent acceleration of 11 m/s/yr in integrated sunlight was measured using near-infrared Fourier transform spectroscopy. This drift in the apparent solar velocity is attributed to a lessening in the magnetic inhibition of granular convection as solar minimum approaches. The threshold mass for spectroscopic detection of companions to a one solar mass star is estimated at below one Jupiter mass.
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of more than 10% when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (approximately 5-10% fewer trends detected in comparison with the reference data).
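A small sketch of the bias being quantified: the (Tmin + Tmax)/2 convention versus the mean of hourly readings for a single day. The diurnal profile used below is synthetic and purely illustrative.

```python
import numpy as np

def daily_mean_bias(hourly_temps):
    """Difference between the (Tmin + Tmax)/2 estimate and the true daily mean
    (average of the hourly values); positive => the min/max method overestimates."""
    hourly_temps = np.asarray(hourly_temps, dtype=float)
    true_mean = hourly_temps.mean()
    minmax_mean = 0.5 * (hourly_temps.min() + hourly_temps.max())
    return minmax_mean - true_mean

# Illustrative (not observed) diurnal cycle: flat overnight, peaked in the afternoon.
hours = np.arange(24)
temps = 15 + 10 * np.clip(np.sin(2 * np.pi * (hours - 6) / 24), 0, None) ** 1.5
print(f"bias = {daily_mean_bias(temps):+.2f} degC")
```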
Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren
2011-09-01
Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
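The minimum detectable change reported in reliability studies of this kind is commonly derived from the ICC and the sample standard deviation. The sketch below uses that standard formulation (SEM = SD·sqrt(1 − ICC), MDC95 = 1.96·sqrt(2)·SEM) with made-up numbers; it may differ in detail from the authors' exact computation.

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and the reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimum detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Illustrative numbers only (not the study's data): SD = 150 kPa, ICC = 0.79.
print(f"SEM   = {sem(150, 0.79):.1f} kPa")
print(f"MDC95 = {mdc95(150, 0.79):.1f} kPa")
```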
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chida, K.; Yamauchi, Y.; Arakawa, T.
2013-12-04
We performed resistively-detected nuclear magnetic resonance (RDNMR) to study the electron spin polarization in the non-equilibrium quantum Hall regime. By measuring the Knight shift, we derive the source-drain bias voltage dependence of the electron spin polarization in quantum wires. The electron spin polarization shows a minimum value around the threshold voltage of the dynamic nuclear polarization.
Minimum Energy-Variance Filters for the detection of compact sources in crowded astronomical images
NASA Astrophysics Data System (ADS)
Herranz, D.; Sanz, J. L.; López-Caniego, M.; González-Nuevo, J.
2006-10-01
In this paper we address the common problem of the detection and identification of compact sources, such as stars or distant galaxies, in astronomical images. The common approach, which consists of applying a matched filter to the data in order to remove noise and then searching for intensity peaks above a certain detection threshold, does not work well when the sources to be detected appear in large numbers over small regions of the sky, due to source overlapping and interference among the filtered profiles of the sources. A new class of filter that balances noise removal with signal spatial concentration is introduced and then applied to simulated astronomical images of the sky at 857 GHz. We show that with the new filter it is possible to improve the ratio between true detections and false alarms with respect to the matched filter. For low detection thresholds, the improvement is ~40%.
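For context, a minimal sketch of the conventional matched-filter-plus-threshold baseline that the abstract argues breaks down in crowded fields. This is not the proposed minimum energy-variance filter; real implementations also work in Fourier space and weight by the noise power spectrum.

```python
import numpy as np

def matched_filter_detect(data, template, threshold):
    """Correlate a 1-D data stream with a unit-energy source template and
    return the sample indices whose filtered amplitude exceeds the detection
    threshold, together with the filtered signal."""
    template = np.asarray(template, dtype=float)
    template = template / np.sqrt(np.sum(template ** 2))     # unit-energy template
    filtered = np.convolve(np.asarray(data, float), template[::-1], mode="same")
    return np.flatnonzero(filtered > threshold), filtered
```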
Differential detection of Gaussian MSK in a mobile radio environment
NASA Technical Reports Server (NTRS)
Simon, M. K.; Wang, C. C.
1984-01-01
Minimum shift keying with Gaussian-shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems, emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method, both in the presence of an additive Gaussian noise background and Rician fading.
Differential detection of Gaussian MSK in a mobile radio environment
NASA Astrophysics Data System (ADS)
Simon, M. K.; Wang, C. C.
1984-11-01
Minimum shift keying with Gaussian-shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems, emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method, both in the presence of an additive Gaussian noise background and Rician fading.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Catcher... Allocation and Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to... threshold amount of 13,516 Column H Percent used to calculate IPA minimum participation Vessel name USCG...
Herrmann, H W; Kim, Y H; Young, C S; Fatherley, V E; Lopez, F E; Oertel, J A; Malone, R M; Rubery, M S; Horsfield, C J; Stoeffl, W; Zylstra, A B; Shmayda, W T; Batha, S H
2014-11-01
A new Gas Cherenkov Detector (GCD) with low-energy threshold and high sensitivity, currently known as Super GCD (or GCD-3 at OMEGA), is being developed for use at the OMEGA Laser Facility and the National Ignition Facility (NIF). Super GCD is designed to be pressurized to ≤400 psi (absolute) and uses all-metal seals to allow the use of fluorinated gases inside the target chamber. This will allow the gamma energy threshold to be run as low as 1.8 MeV with 400 psi (absolute) of C2F6, opening up a new portion of the gamma ray spectrum. Super GCD operating at 20 cm from TCC will be ∼400 × more efficient at detecting DT fusion gammas at 16.7 MeV than the Gamma Reaction History diagnostic at NIF (GRH-6m) when operated at their minimum thresholds.
Reduced Sensitivity to Minimum-Jerk Biological Motion in Autism Spectrum Conditions
ERIC Educational Resources Information Center
Cook, Jennifer; Saygin, Ayse Pinar; Swain, Rachel; Blakemore, Sarah-Jayne
2009-01-01
We compared psychophysical thresholds for biological and non-biological motion detection in adults with autism spectrum conditions (ASCs) and controls. Participants watched animations of a biological stimulus (a moving hand) or a non-biological stimulus (a falling tennis ball). The velocity profile of the movement was varied between 100% natural…
The magnetic sense and its use in long-distance navigation by animals.
Walker, Michael M; Dennis, Todd E; Kirschvink, Joseph L
2002-12-01
True navigation by animals is likely to depend on events occurring in the individual cells that detect magnetic fields. Minimum thresholds of detection, perception and 'interpretation' of magnetic field stimuli must be met if animals are to use a magnetic sense to navigate. Recent technological advances in animal tracking devices now make it possible to test predictions from models of navigation based on the use of variations in magnetic intensity.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km2), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km2) was over five times greater than it was for supporting two or more species of colubrids (6.7 km2). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
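A simplified, binary stand-in for the ordinal logistic approach described above: fit occupancy against log island area and solve for the area at which the predicted occupancy probability crosses 0.5. The binary model and the 0.5 crossing are assumptions for illustration, not the study's ordinal formulation.

```python
import numpy as np
from scipy.optimize import minimize

def area_threshold(log_area, occupied):
    """Fit occupancy ~ logistic(b0 + b1 * log10(area)) and return the island
    area (km2) at which the predicted probability of occupancy reaches 0.5."""
    log_area = np.asarray(log_area, dtype=float)
    y = np.asarray(occupied, dtype=float)
    X = np.column_stack([np.ones_like(log_area), log_area])

    def nll(beta):
        z = X @ beta
        # Negative log-likelihood of logistic regression, written stably.
        return np.sum(np.logaddexp(0.0, z) - y * z)

    beta = minimize(nll, x0=np.zeros(2)).x
    # P = 0.5 where b0 + b1 * log10(A) = 0  =>  A = 10 ** (-b0 / b1)
    return 10 ** (-beta[0] / beta[1])
```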
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
Temporal resolution in children.
Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D
1989-06-01
The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. Twenty children between 3 and 7 years of age and five adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
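A hedged sketch of a generic adaptive staircase for estimating a gap threshold, in the spirit of the 20-trial adaptive runs described above. The 2-down/1-up rule, the step factor, and the threshold estimator are assumptions, not the study's exact procedure.

```python
# Generic 2-down/1-up adaptive staircase sketch (illustrative parameters only).

def run_staircase(subject_detects, start_gap_ms=20.0, step=0.8, n_trials=20):
    """subject_detects(gap_ms) -> bool answers one trial (real listener or simulation)."""
    gap = start_gap_ms
    correct_streak = 0
    track = []
    for _ in range(n_trials):
        track.append(gap)
        if subject_detects(gap):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> make it harder
                gap *= step
                correct_streak = 0
        else:                                # one miss -> make it easier
            gap /= step
            correct_streak = 0
    # Crude threshold estimate: mean of the last few tested gap durations.
    return sum(track[-6:]) / len(track[-6:])
```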
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with a two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived with the necessary sample size and the number of allowable defectives (i.e., those who have not received the drug). Using data generated through simple random sampling (SRSI) of 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on the SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and by allowing four defectives, the coverage was ≥60% in >90% of villages at the first stage. Similarly, with threshold B, by sampling a maximum of nine individuals and by allowing four defectives, the coverage was ≥80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14, 4, 52, 25) of threshold A may be adopted in MDA to assess whether a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH for detecting a village with a given level of coverage. The cost per village was US$ 76.18 under SRSH. The cost of LQAS was US$ 65.81 and US$ 55.63 per village for thresholds A and B, respectively. The total financial cost of classifying a village correctly with the given threshold level of LQAS could be reduced by 14% and 26% of the cost of the conventional SRSH method.
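The first-stage decision rule of sampling plan A (sample 14 individuals, allow at most 4 "defectives") can be written directly; the function name and input format are illustrative.

```python
def lqas_village_pass(sampled_individuals_received, allowed_defectives=4):
    """First-stage LQAS decision for one village (plan A from the study: n = 14, d = 4).

    sampled_individuals_received : list of booleans, True if the person reports
                                   having received the drug.
    Returns True when the village is classified as having reached the minimum
    coverage (no more than `allowed_defectives` non-recipients in the sample).
    """
    defectives = sum(1 for received in sampled_individuals_received if not received)
    return defectives <= allowed_defectives

# Example: 14 sampled, 3 did not receive the drug -> classified as reaching 60% coverage.
print(lqas_village_pass([True] * 11 + [False] * 3))   # True
```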
The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?
Boyen, Kris; Başkent, Deniz; van Dijk, Pim
2015-01-01
Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss-matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans.
Glenn, Nancy F.; Neuenschwander, Amy; Vierling, Lee A.; Spaete, Lucas; Li, Aihua; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan
2016-01-01
To estimate the potential synergies of OLI and ICESat-2, we used simulated ICESat-2 photon data to predict vegetation structure. In a shrubland environment with a mean vegetation height of 1 m and mean vegetation cover of 33%, vegetation photons are able to explain nearly 50% of the variance in vegetation height. These results, and those from a comparison site, suggest that a lower detection threshold of ICESat-2 may be in the range of 30% canopy cover and roughly 1 m height in comparable dryland environments, and that these detection thresholds could be used to combine future ICESat-2 photon data with OLI spectral data for improved estimates of vegetation structure. Overall, the synergistic use of Landsat 8 and ICESat-2 may improve estimates of above-ground biomass and carbon storage in drylands that meet these minimum thresholds, increasing our ability to monitor drylands for fuel loading and the potential to sequester carbon.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
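A small sketch of the threshold-scaling and nonlinear pooling step described above. The Minkowski exponent and the toy data are assumptions for illustration, not values from the paper.

```python
import numpy as np

def perceptual_error(dct_errors, adjusted_thresholds, beta=4.0):
    """Scale blockwise DCT quantization errors by per-frequency visibility
    thresholds and pool nonlinearly (Minkowski summation) into a single
    perceptual-error number.  beta = 4 is a commonly assumed pooling exponent."""
    scaled = np.abs(dct_errors) / adjusted_thresholds      # errors in "just noticeable" units
    return np.sum(scaled ** beta) ** (1.0 / beta)

# Toy example: 10 blocks of 8x8 quantization errors and flat thresholds.
rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, size=(10, 8, 8))
thresholds = np.full((8, 8), 2.0)                          # illustrative flat thresholds
print(perceptual_error(errors, thresholds))
```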
Use of real-time PCR to detect canine parvovirus in feces of free-ranging wolves.
Mech, L David; Almberg, Emily S; Smith, Douglas; Goyal, Sagar; Singer, Randall S
2012-04-01
Using real-time PCR, we tested 15 wolf (Canis lupus) feces from the Superior National Forest (SNF), Minnesota, USA, and 191 from Yellowstone National Park (YNP), USA, collected during summer and 13 during winter for canine parvovirus (CPV)-2 DNA. We also tested 20 dog feces for CPV-2 DNA. The PCR assay was 100% sensitive and specific with a minimum detection threshold of 10^4 50% tissue culture infective dose. Virus was detected in two winter specimens but none of the summer specimens. We suggest applying the technique more broadly especially with winter feces.
Use of real-time PCR to detect canine parvovirus in feces of free-ranging wolves
Mech, L. David; Almberg, Emily S.; Smith, Douglas; Goyal, Sagar; Singer, Randall S.
2012-01-01
Using real-time PCR, we tested 15 wolf (Canis lupus) feces from the Superior National Forest (SNF), Minnesota, USA, and 191 from Yellowstone National Park (YNP), USA, collected during summer and 13 during winter for canine parvovirus (CPV)-2 DNA. We also tested 20 dog feces for CPV-2 DNA. The PCR assay was 100% sensitive and specific with a minimum detection threshold of 10^4 50% tissue culture infective dose. Virus was detected in two winter specimens but none of the summer specimens. We suggest applying the technique more broadly especially with winter feces.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
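A structural sketch of the copy-and-permute (lifting) construction mentioned above, in which each protograph edge becomes an N x N permutation block of the full parity-check matrix. Practical code designs use carefully chosen (e.g., circulant) permutations to avoid short cycles rather than the random permutations assumed here.

```python
import numpy as np

def lift_protograph(base_matrix, N, seed=None):
    """Expand a protograph base matrix into a full LDPC parity-check matrix by
    replacing each entry b with a modulo-2 sum of b random N x N permutation
    matrices ("copy-and-permute" lifting).  Structural sketch only."""
    rng = np.random.default_rng(seed)
    rows, cols = base_matrix.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            for _ in range(int(base_matrix[i, j])):       # parallel edges -> distinct permutations
                perm = np.eye(N, dtype=np.uint8)[rng.permutation(N)]
                H[i*N:(i+1)*N, j*N:(j+1)*N] ^= perm       # add modulo 2
    return H

# Tiny example: a 2x3 protograph lifted by N = 4 gives an 8x12 parity-check matrix.
B = np.array([[1, 2, 1],
              [1, 1, 2]])
print(lift_protograph(B, 4, seed=0).shape)
```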
Age-related changes in perception of movement in driving scenes.
Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M
2014-07-01
Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT), in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH), in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch), both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Mothership Under... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each...-out allocation (2,220) Column G Number of Chinook salmon deducted from the annual threshold amount of...
NASA Astrophysics Data System (ADS)
Bley, S.; Deneke, H.
2013-10-01
A threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the Meteosat SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures that cannot be detected by the low-resolution channels. We find that it is advantageous to apply thresholds relative to clear-sky reflectance composites, and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding and the suitability of the HRV channel for cloud detection are investigated. The case studies show different situations to demonstrate the behavior for various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels are found to contain broken clouds in our test data set, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction. The HRV cloud mask targets small-scale convective sub-pixel clouds that are missed by the EUMETSAT cloud mask. The major limit of the HRV cloud mask is the minimum cloud optical thickness (COT) that can be detected. This threshold COT was found to be about 0.8 over ocean and 2 over land and is highly related to the albedo of the underlying surface.
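A minimal sketch of a clear-sky-composite threshold test of the kind described above; the reflectance offset used is an arbitrary placeholder rather than the regionally adapted thresholds of the paper.

```python
import numpy as np

def hrv_cloud_mask(hrv_reflectance, clearsky_composite, delta=0.06):
    """Flag an HRV pixel as cloudy when its reflectance exceeds the clear-sky
    composite value for that pixel by more than a (regionally tunable) offset.
    The default offset is illustrative only."""
    return hrv_reflectance > (clearsky_composite + delta)

# Example: a 2x2 scene where one pixel is brightened well above its clear-sky value.
refl = np.array([[0.10, 0.12], [0.45, 0.11]])
clear = np.array([[0.09, 0.11], [0.10, 0.10]])
print(hrv_cloud_mask(refl, clear))
```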
The absolute threshold of cone vision
Koenig, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?
Boyen, Kris; Başkent, Deniz
2015-01-01
Objectives: Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Design: Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss–matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Results: Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. Conclusions: These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans. PMID:25822647
GOES Cloud Detection at the Global Hydrology and Climate Center
NASA Technical Reports Server (NTRS)
Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)
2002-01-01
The bi-spectral threshold (BTH) method for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method, with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud top pressure assignment. The method capitalizes on cloud and surface emissivity differences between the GOES 3.9 and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection, and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally-run regional forecast model. This paper and supplemental poster will present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud top pressure will be evaluated by comparing it to the NESDIS operational CO2 slicing approach.
Müllner, Marie; Schlattl, Helmut; Hoeschen, Christoph; Dietrich, Olaf
2015-12-01
To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as targeted contrast agent. A Monte Carlo simulation study of spectral CT imaging with a photon-counting and energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with the K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). The simulations demonstrate the feasibility of spectral CT with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to bin thresholds described in former studies. Gold is a promising new CT contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
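The recipe described here (fix a Type I error to get a detection threshold, then find the intensity whose Type II error is acceptable) can be sketched for a simple Poisson counting case; the alpha, beta, and background values below are illustrative choices, not the paper's examples.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha=0.01):
    """Smallest count n* such that P(N >= n* | background only) <= alpha."""
    n = 0
    while poisson.sf(n - 1, background) > alpha:   # sf(n - 1) = P(N >= n)
        n += 1
    return n

def upper_limit(background, alpha=0.01, beta=0.5, step=0.01):
    """Minimum source intensity detected with probability >= 1 - beta at the
    alpha-level threshold.  This calibrates the detection procedure; it is not
    an estimate of any particular source's intensity."""
    n_star = detection_threshold(background, alpha)
    s = 0.0
    while poisson.sf(n_star - 1, background + s) < 1.0 - beta:
        s += step
    return s

print(upper_limit(background=3.0))   # purely illustrative background of 3 counts
```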
Orion MPCV Touchdown Detection Threshold Development and Testing
NASA Technical Reports Server (NTRS)
Daum, Jared; Gay, Robert
2013-01-01
A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). An in-depth trade study was completed to determine optimal characteristics of the touchdown detection method resulting in an algorithm monitoring filtered, lever-arm corrected, 200 Hz Inertial Measurement Unit (IMU) vehicle acceleration magnitude data against a tunable threshold using persistence counter logic. Following the design of the algorithm, high fidelity environment and vehicle simulations, coupled with the actual vehicle FSW, were used to tune the acceleration threshold and persistence counter value to result in adequate performance in detecting touchdown and sufficient safety margin against early detection while descending under parachutes. An analytical approach including Kriging and adaptive sampling allowed for a sufficient number of finite element analysis (FEA) impact simulations to be completed using minimal computation time. The combination of a persistence counter of 10 and an acceleration threshold of approximately 57.3 ft/s² resulted in an impact performance factor of safety (FOS) of 1.0 and a safety FOS of approximately 2.6 for touchdown declaration. An RCS termination acceleration threshold of approximately 53.1 ft/s² with a persistence counter of 10 resulted in an increased impact performance FOS of 1.2 at the expense of a lowered under-parachutes safety factor of 2.2. The resulting tuned algorithm was then tested on data from eight Capsule Parachute Assembly System (CPAS) flight tests, showing an experimental minimum safety FOS of 6.1. The formulated touchdown detection algorithm will be flown on the Orion MPCV FSW during the Exploration Flight Test 1 (EFT-1) mission in the second half of 2014.
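The acceleration-threshold-with-persistence-counter logic described above can be sketched as follows. This is an illustrative reconstruction, not the Orion flight software; in particular, resetting the counter on any sub-threshold sample is an assumption.

    def touchdown_detected(accel_magnitudes_ft_s2, threshold=57.3, persistence=10):
        # Declare touchdown once the filtered, lever-arm-corrected 200 Hz
        # acceleration magnitude exceeds the threshold for `persistence`
        # consecutive samples.
        count = 0
        for a in accel_magnitudes_ft_s2:
            count = count + 1 if a > threshold else 0
            if count >= persistence:
                return True
        return False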
50 CFR 622.48 - Adjustment of management measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... biomass achieved by fishing at MSY (BMSY) (or proxy), maximum fishing mortality threshold (MFMT), minimum... biomass achieved by fishing at MSY (BMSY), minimum stock size threshold (MSST), and maximum fishing.... MSY, OY, and TAC. (f) South Atlantic snapper-grouper and wreckfish. Biomass levels, age-structured...
Spectroscopic planetary detection
NASA Technical Reports Server (NTRS)
Deming, D.; Espenak, F.; Hillman, J. J.; Kostiuk, T.; Mumma, M. J.; Jennings, D. E.
1986-01-01
The Sun-as-a-star was monitored using the McMath Fourier transform spectrometer (FTS) on Kitt Peak in 1983. In 1985 the first measurement was made using the laser heterodyne technique. The FTS measurements now extend for three years, with errors of order 3 meters/sec at a given epoch. Over this 3 year period, a 33 meter/sec change was measured in the apparent velocity of integrated sunlight. The sense of the effect is that a greater blueshift is seen near solar minimum, which is consistent with expectations based on considering the changing morphology of solar granular convection. Presuming this effect is solar-cycle-related, it will mimic the Doppler reflex produced by a planetary companion of approximately two Jupiter masses, with an 11 year orbital period. Thus, Jupiter itself is below the threshold for detection by spectroscopic means, without an additional technique for discrimination. However, for planetary companions in shorter period orbits (P approx. 3 years) the threshold for unambiguous detection is well below one Jupiter mass.
Device for monitoring cell voltage
Doepke, Matthias [Garbsen, DE; Eisermann, Henning [Edermissen, DE
2012-08-21
A device for monitoring a rechargeable battery having a number of electrically connected cells includes at least one current interruption switch for interrupting current flowing through at least one associated cell and a plurality of monitoring units for detecting cell voltage. Each monitoring unit is associated with a single cell and includes a reference voltage unit for producing a defined reference threshold voltage and a voltage comparison unit for comparing the reference threshold voltage with a partial cell voltage of the associated cell. The reference voltage unit is electrically supplied from the cell voltage of the associated cell. The voltage comparison unit is coupled to the at least one current interruption switch for interrupting the current of at least the current flowing through the associated cell, with a defined minimum difference between the reference threshold voltage and the partial cell voltage.
No minimum threshold for ozone-induced changes in soybean canopy fluxes
USDA-ARS?s Scientific Manuscript database
Tropospheric ozone concentrations [O3] are increasing at rates that exceed any other pollutant. This highly reactive gas drives reductions in plant productivity and canopy water use while also increasing canopy temperature and sensible heat flux. It is not clear whether a minimum threshold of ozone ...
R-on-1 automatic mapping: A new tool for laser damage testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hue, J.; Garrec, P.; Dijon, J.
1996-12-31
Laser damage threshold measurement is statistical in nature. For a commercial qualification or for a user, the threshold determined by the weakest point is a satisfactory characterization. When a new coating is designed, threshold mapping is very useful. It enables the technology to be improved and followed more accurately. Different statistical parameters such as the minimum, maximum, average, and standard deviation of the damage threshold as well as spatial parameters such as the threshold uniformity of the coating can be determined. Therefore, in order to achieve a mapping, all the tested sites should give data. This is the major interest of the R-on-1 test in spite of the fact that the laser damage threshold obtained by this method may be different from the 1-on-1 test (smaller or greater). Moreover, on the damage laser test facility, the beam size is smaller (diameters of a few hundred micrometers) than the characteristic sizes of the components in use (diameters of several centimeters up to one meter). Hence, a laser damage threshold mapping appears very interesting, especially for applications linked to large optical components like the Megajoule project or the National Ignition Facility (N.I.F). On the test bench used, damage detection with a Nomarski microscope and scattered light measurement are almost equivalent. Therefore, it becomes possible to automatically detect on line the first defects induced by YAG irradiation. Scattered light mappings and laser damage threshold mappings can therefore be achieved using an X-Y automatic stage (where the test sample is located). The major difficulties due to the automatic capabilities are shown. These characterizations are illustrated at 355 nm. The numerous experiments performed show different kinds of scattering curves, which are discussed in relation with the damage mechanisms.
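As a simple illustration of the mapping statistics mentioned above, the sketch below summarizes a 2-D array of per-site R-on-1 damage thresholds. The coefficient-of-variation uniformity metric is an assumption, not necessarily the one used on the test bench described here.

    import numpy as np

    def threshold_map_statistics(threshold_map):
        # threshold_map: 2-D array of per-site damage thresholds (e.g., J/cm^2)
        t = np.asarray(threshold_map, dtype=float)
        return {
            "min": float(t.min()),
            "max": float(t.max()),
            "mean": float(t.mean()),
            "std": float(t.std(ddof=1)),
            # one possible spatial-uniformity measure (assumed here)
            "uniformity": float(t.std(ddof=1) / t.mean()),
        }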
Minimum change in spherical aberration that can be perceived
Manzanera, Silvestre; Artal, Pablo
2016-01-01
It is important to know the visual sensitivity to optical blur from both a basic science perspective and a practical point of view. Of particular interest is the sensitivity to blur induced by spherical aberration because it is being used to increase depth of focus as a component of a presbyopic solution. Using a flicker detection-based procedure implemented on an adaptive optics visual simulator, we measured the spherical aberration thresholds that produce just-noticeable differences in perceived image quality. The thresholds were measured for positive and negative values of spherical aberration, for best focus and + 0.5 D and + 1.0 D of defocus. At best focus, the SA thresholds were 0.20 ± 0.01 µm and −0.17 ± 0.03 µm for positive and negative spherical aberration respectively (referred to a 6-mm pupil). These experimental values may be useful in setting spherical aberration permissible levels in different ophthalmic techniques. PMID:27699113
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss or signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
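For the Hit-Miss case, the 90/95 criterion can be checked with a one-sided binomial lower confidence bound. The sketch below uses the Clopper-Pearson bound as an illustration; whether DOEPOD uses this exact bound is an assumption.

    from scipy.stats import beta

    def pod_lower_bound(hits, trials, confidence=0.95):
        # One-sided Clopper-Pearson lower confidence bound on the POD.
        if hits == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

    def demonstrates_90_95(hits, trials):
        # True if the data demonstrate at least 0.90 POD with 95% confidence.
        return pod_lower_bound(hits, trials) >= 0.90

For example, 29 hits in 29 trials gives a lower bound of about 0.902, the familiar 29-of-29 demonstration of 90/95 POD at a single flaw size.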
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
First images of a digital autoradiography system based on a Medipix2 hybrid silicon pixel detector.
Mettivier, Giovanni; Montesi, Maria Cristina; Russo, Paolo
2003-06-21
We present the first images of beta autoradiography obtained with the high-resolution hybrid pixel detector consisting of the Medipix2 single photon counting read-out chip bump-bonded to a 300 µm thick silicon pixel detector. This room temperature system has 256 × 256 square pixels of 55 µm pitch (total sensitive area of 14 × 14 mm²), with a double threshold discriminator and a 13-bit counter in each pixel. It is read out via a dedicated electronic interface and control software, also developed in the framework of the European Medipix2 Collaboration. Digital beta autoradiograms of ¹⁴C microscale standard strips (containing separate bands of increasing specific activity in the range 0.0038-32.9 kBq g⁻¹) indicate system linearity down to a total background noise of 1.8 × 10⁻³ counts mm⁻² s⁻¹. The minimum detectable activity is estimated to be 0.012 Bq for 36,000 s exposure and 0.023 Bq for 10,800 s exposure. The measured minimum detection threshold is less than 1600 electrons (equivalent to about 6 keV Si). This real-time system for beta autoradiography offers lower pixel pitch and higher sensitive area than the previous Medipix1-based system. It has a ¹⁴C sensitivity better than that of micro channel plate based systems, which, however, shows higher spatial resolution and sensitive area.
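Minimum detectable activity estimates like those quoted above follow from the measured background rate, the exposure time, and the detection efficiency. As an illustration only, the sketch below uses the Currie detection-limit formula; the authors' exact estimator, the efficiency, and the relevant background area are not given here and are treated as inputs.

    import math

    def minimum_detectable_activity(bkg_rate_counts_per_mm2_s, area_mm2,
                                    exposure_s, counts_per_decay):
        # Currie-style detection limit (an assumed estimator, for illustration):
        # L_D = 2.71 + 4.65 * sqrt(B), with B the expected background counts
        # in the region of interest over the exposure.
        b = bkg_rate_counts_per_mm2_s * area_mm2 * exposure_s
        l_d = 2.71 + 4.65 * math.sqrt(b)
        # Convert the count limit to an activity (Bq) for this exposure.
        return l_d / (counts_per_decay * exposure_s)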
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
Toward development of mobile application for hand arthritis screening.
Akhbardeh, Farhad; Vasefi, Fartash; Tavakolian, Kouhyar; Bradley, David; Fazel-Rezai, Reza
2015-01-01
Arthritis is one of the most common health problems affecting people throughout the world. The goal of the work presented in this paper is to provide individuals, who may be developing or have developed arthritis, with a mobile application to assess and monitor the progress of their disease using their smartphone. The image processing algorithm includes a finger border detection algorithm to monitor joint thickness and angular deviation abnormalities, which are common symptoms of arthritis. In this work, we have analyzed and compared gradient, thresholding and Canny algorithms for border detection. The effect of image spatial resolution (down-sampling) is also investigated. The results calculated based on 36 joint measurements show that the mean errors for the gradient, thresholding, and Canny methods are 0.20, 2.13, and 2.03 mm, respectively. In addition, the average error for different image resolutions is analyzed and the minimum required resolution is determined for each method. The results confirm that recent smartphone imaging capabilities can provide enough accuracy for hand border detection and finger joint analysis based on the gradient method.
NASA Astrophysics Data System (ADS)
Baxter, D.; Chen, C. J.; Crisler, M.; Cwiok, T.; Dahl, C. E.; Grimsted, A.; Gupta, J.; Jin, M.; Puig, R.; Temples, D.; Zhang, J.
2017-06-01
A 30-g xenon bubble chamber, operated at Northwestern University in June and November 2016, has for the first time observed simultaneous bubble nucleation and scintillation by nuclear recoils in a superheated liquid. This chamber is instrumented with a CCD camera for near-IR bubble imaging, a solar-blind photomultiplier tube to detect 175-nm xenon scintillation light, and a piezoelectric acoustic transducer to detect the ultrasonic emission from a growing bubble. The time of nucleation determined from the acoustic signal is used to correlate specific scintillation pulses with bubble-nucleating events. We report on data from this chamber for thermodynamic "Seitz" thresholds from 4.2 to 15.0 keV. The observed single- and multiple-bubble rates when exposed to a ²⁵²Cf neutron source indicate that, for an 8.3-keV thermodynamic threshold, the minimum nuclear recoil energy required to nucleate a bubble is 19 ± 6 keV (1σ uncertainty). This is consistent with the observed scintillation spectrum for bubble-nucleating events. We see no evidence for bubble nucleation by gamma rays at any of the thresholds studied, setting a 90% C.L. upper limit of 6.3 × 10⁻⁷ bubbles per gamma interaction at a 4.2-keV thermodynamic threshold. This indicates stronger gamma discrimination than in CF₃I bubble chambers, supporting the hypothesis that scintillation production suppresses bubble nucleation by electron recoils while nuclear recoils nucleate bubbles as usual. Finally, these measurements establish the noble-liquid bubble chamber as a promising new technology for the detection of weakly interacting massive particle dark matter and coherent elastic neutrino-nucleus scattering.
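A 90% C.L. limit of this kind corresponds to the classical Poisson upper limit for zero observed events (about 2.3 events) divided by the number of gamma interactions in the exposure. Whether additional efficiency corrections were applied is not stated here, so the sketch below is illustrative only; N_gamma is a placeholder, not a value from the paper.

    from scipy.stats import chi2

    def poisson_upper_limit(n_observed, cl=0.90):
        # Classical one-sided upper limit on a Poisson mean; for zero observed
        # events this evaluates to -ln(1 - cl), i.e. about 2.30 at 90% C.L.
        return 0.5 * chi2.ppf(cl, 2 * (n_observed + 1))

    # Limit on bubbles per gamma interaction, given N_gamma interactions during
    # the exposure (N_gamma is a hypothetical input here):
    # limit = poisson_upper_limit(0) / N_gamma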
[A mobile sensor for remote detection of natural gas leakage].
Zhang, Shuai; Liu, Wen-qing; Zhang, Yu-jun; Kan, Rui-feng; Ruan, Jun; Wang, Li-ming; Yu, Dian-qiang; Dong, Jin-ting; Han, Xiao-lei; Cui, Yi-ben; Liu, Jian-guo
2012-02-01
The detection of natural gas pipeline leaks has become a significant issue for personal safety, environmental protection, and the security of state property. However, leak detection is difficult because pipelines cover large areas and operate under varied conditions in complicated environments. A mobile sensor for remote detection of natural gas leakage based on scanning wavelength differential absorption spectroscopy (SWDAS) is introduced. An improved soft-threshold wavelet denoising method was developed by analyzing the characteristics of the reflection spectrum, and the results showed that the signal-to-noise ratio (SNR) was increased threefold. When the light intensity is 530 nA, the minimum remote sensitivity is 80 ppm·m. A widely deployed SWDAS can perform quantitative remote sensing of natural gas leaks and locate the leak source precisely in a faster, safer and more intelligent way.
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory ... of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI(k+1) − δI(k)‖ is less than a predefined threshold value β ... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the
Katiyar, Amit; Sarkar, Kausik
2012-11-01
A recent study [Katiyar and Sarkar (2011). J. Acoust. Soc. Am. 130, 3137-3147] showed that in contrast to the analytical result for free bubbles, the minimum threshold for subharmonic generation for contrast microbubbles does not necessarily occur at twice the resonance frequency. Here increased damping-either due to the small radius or the encapsulation-is shown to shift the minimum threshold away from twice the resonance frequency. Free bubbles as well as four models of the contrast agent encapsulation are investigated varying the surface dilatational viscosity. Encapsulation properties are determined using measured attenuation data for a commercial contrast agent. For sufficiently small damping, models predict two minima for the threshold curve-one at twice the resonance frequency being lower than the other at resonance frequency-in accord with the classical analytical result. However, increased damping damps the bubble response more at twice the resonance than at resonance, leading to a flattening of the threshold curve and a gradual shift of the absolute minimum from twice the resonance frequency toward the resonance frequency. The deviation from the classical result stems from the fact that the perturbation analysis employed to obtain it assumes small damping, not always applicable for contrast microbubbles.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance in order to achieve a low error floor, and also to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degrees of at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved for higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Henderson, Steven; Woods-Fry, Heather; Collin, Charles A; Gagnon, Sylvain; Voloaca, Misha; Grant, John; Rosenthal, Ted; Allen, Wade
2015-05-01
Our research group has previously demonstrated that the peripheral motion contrast threshold (PMCT) test predicts older drivers' self-report accident risk, as well as simulated driving performance. However, the PMCT is too lengthy to be a part of a battery of tests to assess fitness to drive. Therefore, we have developed a new version of this test, which takes under two minutes to administer. We assessed the motion contrast thresholds of 24 younger drivers (19-32) and 25 older drivers (65-83) with both the PMCT-10min and the PMCT-2min test and investigated if thresholds were associated with measures of simulated driving performance. Younger participants had significantly lower motion contrast thresholds than older participants and there were no significant correlations between younger participants' thresholds and any measures of driving performance. The PMCT-10min and PMCT-2min thresholds of older drivers predicted simulated crash risk, as well as the minimum distance of approach to all hazards. This suggests that our tests of motion processing can help predict the risk of collision or near collision in older drivers. Thresholds were also correlated with the total lane deviation time, suggesting a deficiency in processing of peripheral flow and delayed detection of adjacent cars. The PMCT-2min is an improved version of a previously validated test, and it has the potential to help assess older drivers' fitness to drive. Copyright © 2015 Elsevier Ltd. All rights reserved.
An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.
Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P
2009-01-01
Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was used to develop an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of the EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
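The decision stage described above (energy threshold plus a minimum-duration requirement) can be sketched as follows. The IMFs themselves are assumed to come from an EMD routine (for example the PyEMD package); the windowing scheme and parameter names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def detect_seizure(imf_window_energies, energy_threshold,
                       min_duration_s, window_length_s):
        # imf_window_energies: energy of a selected IMF computed per analysis
        # window. A seizure is declared when the energy stays above the
        # threshold for at least min_duration_s.
        above = np.asarray(imf_window_energies) > energy_threshold
        run = 0
        for flag in above:
            run = run + 1 if flag else 0
            if run * window_length_s >= min_duration_s:
                return True
        return False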
Mukherjee, Anadi; Dunayevskiy, Ilya; Prasanna, Manu; Go, Rowel; Tsekoun, Alexei; Wang, Xiaojun; Fan, Jenyu; Patel, C Kumar N
2008-04-01
The need for the detection of chemical warfare agents (CWAs) is no longer confined to battlefield environments because of at least one confirmed terrorist attack, the Tokyo Subway [Emerg. Infect. Dis. 5, 513 (1999)] in 1995, and a suspected, i.e., a false-alarm of a CWA in the Russell Senate Office Building [Washington Post, 9 February 2006, p. B01]. Therefore, detection of CWAs with high sensitivity and low false-alarm rates is considered an important priority for ensuring public safety. We report a minimum detection level for a CWA simulant, dimethyl methyl phosphonate (DMMP), of <0.5 ppb (parts in 10⁹) by use of a widely tunable external grating cavity quantum cascade laser and photoacoustic spectroscopy. With interferents present in Santa Monica, California street air, we demonstrate a false-alarm rate of 1:10⁶ at a detection threshold of 1.6 ppb.
Wu, Xiaocheng; Lang, Lingling; Ma, Wenjun; Song, Tie; Kang, Min; He, Jianfeng; Zhang, Yonghui; Lu, Liang; Lin, Hualiang; Ling, Li
2018-07-01
Dengue fever is an important infectious disease in Guangzhou, China; previous studies on the effects of weather factors on the incidence of dengue fever did not consider the linearity of the associations. This study evaluated the effects of daily mean temperature, relative humidity and rainfall on the incidence of dengue fever. A generalized additive model with a spline smoothing function was used to examine the effects of daily mean, minimum and maximum temperatures, relative humidity and rainfall on the incidence of dengue fever during 2006-2014. Our analysis detected a non-linear effect of mean, minimum and maximum temperatures and relative humidity on dengue fever, with thresholds of 28°C, 23°C and 32°C for daily mean, minimum and maximum temperatures, respectively, and 76% for relative humidity. Below the thresholds there was a significant positive effect: the excess risk of dengue fever was 10.21% (95% CI: 6.62% to 13.92%) for each 1°C increase in mean temperature at lag 7-14 days, 7.10% (95% CI: 4.99%, 9.26%) for each 1°C increase in daily minimum temperature at lag 11 days, and 2.27% (95% CI: 0.84%, 3.72%) for each 1°C increase in daily maximum temperature at lag 10 days; and each 1% increase in relative humidity at lag 7-14 days was associated with a 1.95% (95% CI: 1.21% to 2.69%) increase in the risk of dengue fever. Future prevention and control measures and epidemiology studies on dengue fever should consider these weather factors based on their exposure-response relationships. Copyright © 2018. Published by Elsevier B.V.
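The excess-risk figures above are the usual transformation of fitted log-relative-risk coefficients from such a model. A minimal sketch (the coefficient value in the comment is back-calculated for illustration, not taken from the paper, and the lag structure is ignored):

    import math

    def excess_risk_percent(log_rr_per_unit, delta=1.0):
        # Excess risk (%) for a `delta` increase in the exposure, given the
        # fitted log-relative-risk coefficient per unit exposure.
        return (math.exp(log_rr_per_unit * delta) - 1.0) * 100.0

    # e.g. a coefficient of about 0.097 per degree C corresponds to roughly a
    # 10.2% excess risk, the order of the reported mean-temperature estimate.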
Corona-Strauss, Farah I; Delb, Wolfgang; Bloching, Marc; Strauss, Daniel J
2008-01-01
We have recently shown that single sweeps of click-evoked auditory brainstem responses (ABRs) can efficiently be processed by a hybrid novelty detection system. This approach allowed for the objective detection of hearing thresholds in a fraction of the time of conventional schemes, making it appropriate for the efficient implementation of newborn hearing screening procedures. The objective of this study is to evaluate whether this approach might further be improved by different stimulation paradigms and electrode settings. In particular, we evaluate chirp stimulations, which compensate for basilar-membrane dispersion, and active electrodes, which are less sensitive to movements. This is the first study directed at single-sweep processing of chirp-evoked ABRs. By concentrating on transparent features and a minimum number of adjustable parameters, we present an objective comparison of click vs. chirp stimulations and active vs. passive electrodes in ultrafast ABR detection. We show that chirp-evoked brainstem responses and active electrodes might improve the single-sweep analysis of ABRs. Consequently, we conclude that single-sweep processing of ABRs for the objective determination of hearing thresholds can further be improved by the use of optimized chirp stimulations and active electrodes.
Quantification of Agrobacterium tumefaciens C58 attachment to Arabidopsis thaliana roots.
Petrovicheva, Anna; Joyner, Jessica; Muth, Theodore R
2017-10-02
Agrobacterium tumefaciens is the causal agent of crown gall disease and is a vector for DNA transfer in transgenic plants. The transformation process by A. tumefaciens has been widely studied, but the attachment stage has not been well characterized. Most measurements of attachment have used microscopy and colony counting, both of which are labor and time intensive. To reduce the time and effort required to analyze bacteria attaching to plant tissues, we developed a quantitative real-time PCR (qPCR) assay to quantify attached A. tumefaciens using the chvE gene as marker for the presence of the bacteria. The qPCR detection threshold of A. tumefaciens from pure culture was 10⁴ cell equivalents/ml. The A. tumefaciens minimum threshold concentration from root-bound populations was determined to be 10⁵ cell equivalents/ml inoculum to detect attachment above background. The qPCR assay can be used for measuring A. tumefaciens attachment in applications such as testing the effects of mutations on bacterial adhesion molecules or biofilm formation, comparing attachment across various plant species and ecotypes, and detecting mutations in putative attachment receptors expressed in plant roots. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Dartnall, Tamara J; Rogasch, Nigel C; Nordstrom, Michael A; Semmler, John G
2009-07-01
The purpose of this study was to determine the effect of eccentric muscle damage on recruitment threshold force and repetitive discharge properties of low-threshold motor units. Ten subjects performed four tasks involving isometric contraction of elbow flexors while electromyographic (EMG) data were recorded from human biceps brachii and brachialis muscles. Tasks were 1) maximum voluntary contraction (MVC); 2) constant-force contraction at various submaximal targets; 3) motor unit recruitment threshold task; and 4) minimum motor unit discharge rate task. These tasks were performed on three separate days before, immediately after, and 24 h after eccentric exercise of elbow flexor muscles. MVC force declined (42%) immediately after exercise and remained depressed (29%) 24 h later, indicative of muscle damage. Mean motor unit recruitment threshold for biceps brachii was 8.4 ± 4.2% MVC (n=34) before eccentric exercise, and was reduced by 41% (5.0 ± 3.0% MVC, n=34) immediately after and by 39% (5.2 ± 2.5% MVC, n=34) 24 h after exercise. No significant changes in motor unit recruitment threshold were observed in the brachialis muscle. However, for the minimum tonic discharge rate task, motor units in both muscles discharged 11% faster (10.8 ± 2.0 vs. 9.7 ± 1.7 Hz) immediately after (n=29) exercise compared with that before (n=32). The minimum discharge rate variability was greater in brachialis muscle immediately after exercise (13.8 ± 3.1%) compared with that before (11.9 ± 3.1%) and 24 h after exercise (11.7 ± 2.4%). No significant changes in minimum discharge rate variability were observed in the biceps brachii motor units after exercise. These results indicate that muscle damage from eccentric exercise alters motor unit recruitment thresholds for ≥24 h, but the effect is not the same in the different elbow flexor muscles.
On the theoretical aspects of improved fog detection and prediction in India
NASA Astrophysics Data System (ADS)
Dey, Sagnik
2018-04-01
The polluted Indo-Gangetic Basin (IGB) in northern India experiences fog (a condition when visibility degrades below 1 km) every winter (Dec-Jan) causing a massive loss of economy and even loss of life due to accidents. This can be minimized by improved fog detection (especially at night) and forecasting so that activities can be reorganized accordingly. Satellites detect fog at night by a positive brightness temperature difference (BTD). However, fixing the right BTD threshold holds the key to accuracy. Here I demonstrate the sensitivity of BTD in response to changes in fog and surface emissivity and their temperatures and justify a new BTD threshold. Further I quantify the dependence of critical fog droplet number concentration, NF (i.e. minimum fog concentration required to degrade visibility below 1 km) on liquid water content (LWC). NF decreases exponentially with an increase in LWC from 0.01 to 1 g/m³, beyond which it stabilizes. A 10 times low bias in simulated LWC below 1 g/m³ would require 107 times higher aerosol concentration to form the required number of fog droplets. These results provide the theoretical aspects that will help improve the existing fog detection algorithm and fog forecasting by numerical models in India.
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
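The probabilistic model described above can be sketched for a single stimulus: the expected number of sensory events is the integral of a rate equal to a spontaneous term plus a constant times the filtered amplitude envelope raised to the exponent (3 in the fit), and detection requires observing at least some criterion number of events. The parameter names and the fixed-criterion decision rule are simplifying assumptions of this sketch, not the paper's full observer model.

    import numpy as np
    from scipy.stats import poisson

    def detection_probability(envelope, dt_s, spontaneous_rate, gain,
                              exponent=3, criterion_events=1):
        # Expected event count: integral of (spontaneous_rate + gain * env**exponent)
        env = np.asarray(envelope, dtype=float)
        mean_events = np.sum((spontaneous_rate + gain * env ** exponent) * dt_s)
        # Probability of observing at least `criterion_events` Poisson events.
        return poisson.sf(criterion_events - 1, mean_events)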
NASA Astrophysics Data System (ADS)
von Hillebrandt-Andrade, C.; Huerfano Moreno, V. A.; McNamara, D. E.; Saurel, J. M.
2014-12-01
The magnitude-9.3 Sumatra-Andaman Islands earthquake of December 26, 2004, increased global awareness to the destructive hazard of earthquakes and tsunamis. Post event assessments of global coastline vulnerability highlighted the Caribbean as a region of high hazard and risk and that it was poorly monitored. Nearly 100 tsunamis have been reported for the Caribbean region and Adjacent Regions in the past 500 years and continue to pose a threat for its nations, coastal areas along the Gulf of Mexico, and the Atlantic seaboard of North and South America. Significant efforts to improve monitoring capabilities have been undertaken since this time including an expansion of the United States Geological Survey (USGS) Global Seismographic Network (GSN) (McNamara et al., 2006) and establishment of the United Nations Educational, Scientific and Cultural Organization (UNESCO) Intergovernmental Coordination Group (ICG) for the Tsunami and other Coastal Hazards Warning System for the Caribbean and Adjacent Regions (CARIBE EWS). The minimum performance standards it recommended for initial earthquake locations include: 1) Earthquake detection within 1 minute, 2) Minimum magnitude threshold = M4.5, and 3) Initial hypocenter error of <30 km. In this study, we assess current compliance with performance standards and model improvements in earthquake and tsunami monitoring capabilities in the Caribbean region since the first meeting of the UNESCO ICG-Caribe EWS in 2006. The three measures of network capability modeled in this study are: 1) minimum Mw detection threshold; 2) P-wave detection time of an automatic processing system and; 3) theoretical earthquake location uncertainty. By modeling three measures of seismic network capability, we can optimize the distribution of ICG-Caribe EWS seismic stations and select an international network that will be contributed from existing real-time broadband national networks in the region. Sea level monitoring improvements both offshore and along the coast will also be addressed. With the support of Member States and other countries and organizations it has been possible to significantly expand the sea level network thus reducing the amount of time it now takes to verify tsunamis.
Of Detection Limits and Effective Mitigation: The Use of Infrared Cameras for Methane Leak Detection
NASA Astrophysics Data System (ADS)
Ravikumar, A. P.; Wang, J.; McGuire, M.; Bell, C.; Brandt, A. R.
2017-12-01
Mitigating emissions of methane, a short-lived and potent greenhouse gas, is critical to limiting global temperature rise to two degrees Celsius as outlined in the Paris Agreement. A major source of anthropogenic methane emissions in the United States is the oil and gas sector. To this end, state and federal governments have recommended the use of optical gas imaging systems in periodic leak detection and repair (LDAR) surveys to detect fugitive emissions or leaks. The most commonly used optical gas imaging (OGI) systems are infrared cameras. In this work, we systematically evaluate the limits of infrared (IR) camera-based OGI systems for use in methane leak detection programs. We analyze the effect of various parameters that influence the minimum detectable leak rates of infrared cameras. Blind leak detection tests were carried out at the Department of Energy's MONITOR natural gas test facility in Fort Collins, CO. Leak sources included natural gas wellheads, separators, and tanks. With an EPA mandated 60 g/hr leak detection threshold for IR cameras, we test leak rates ranging from 4 g/hr to over 350 g/hr at imaging distances between 5 ft and 70 ft from the leak source. We perform these experiments over the course of a week, encompassing a wide range of wind and weather conditions. Using repeated measurements at a given leak rate and imaging distance, we generate detection probability curves as a function of leak size for various imaging distances and measurement conditions. In addition, we estimate the median detection threshold, the leak size at which the probability of detection is 50%, under various scenarios to reduce uncertainty in mitigation effectiveness. Preliminary analysis shows that the median detection threshold varies from 3 g/hr at an imaging distance of 5 ft to over 150 g/hr at 50 ft (ambient temperature: 80 °F, winds < 4 m/s). Results from this study can be directly used to improve OGI-based LDAR protocols and reduce uncertainty in estimated mitigation effectiveness. Furthermore, detection limits determined in this study can be used as standards to compare new detection technologies.
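Detection probability curves and the median detection threshold described above can be estimated by fitting a curve to the binary detect/no-detect outcomes at each tested leak rate. The sketch below fits a logistic model in log leak rate by plain gradient ascent; the functional form and the fitting method are assumptions, not necessarily those used in the study.

    import numpy as np

    def fit_detection_curve(leak_rates_g_hr, detected, n_iter=20000, lr=0.005):
        # Logistic model: p(q) = 1 / (1 + exp(-(a + b * ln q)))
        x = np.log(np.asarray(leak_rates_g_hr, dtype=float))
        y = np.asarray(detected, dtype=float)
        a, b = 0.0, 1.0
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(a + b * x)))
            a += lr * np.sum(y - p)          # gradient of the log-likelihood in a
            b += lr * np.sum((y - p) * x)    # gradient of the log-likelihood in b
        median_threshold = np.exp(-a / b)    # leak rate with 50% detection probability
        return a, b, median_threshold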
Elliott, Grant P
2012-07-01
Given the widespread and often dramatic influence of climate change on terrestrial ecosystems, it is increasingly common for abrupt threshold changes to occur, yet explicitly testing for climate and ecological regime shifts is lacking in climatically sensitive upper treeline ecotones. In this study, quantitative evidence based on empirical data is provided to support the key role of extrinsic, climate-induced thresholds in governing the spatial and temporal patterns of tree establishment in these high-elevation environments. Dendroecological techniques were used to reconstruct a 420-year history of regeneration dynamics within upper treeline ecotones along a latitudinal gradient (approximately 44-35 degrees N) in the Rocky Mountains. Correlation analysis was used to assess the possible influence of minimum and maximum temperature indices and cool-season (November-April) precipitation on regional age-structure data. Regime-shift analysis was used to detect thresholds in tree establishment during the entire period of record (1580-2000), temperature variables significantly correlated with establishment during the 20th century, and cool-season precipitation. Tree establishment was significantly correlated with minimum temperature during the spring (March-May) and cool season. Regime-shift analysis identified an abrupt increase in regional tree establishment in 1950 (1950-1954 age class). Coincident with this period was a shift toward reduced cool-season precipitation. The alignment of these climate conditions apparently triggered an abrupt increase in establishment that was unprecedented during the period of record. Two main findings emerge from this research that underscore the critical role of climate in governing regeneration dynamics within upper treeline ecotones. (1) Regional climate variability is capable of exceeding bioclimatic thresholds, thereby initiating synchronous and abrupt changes in the spatial and temporal patterns of tree establishment at broad regional scales. (2) The importance of climate parameters exceeding critical threshold values and triggering a regime shift in tree establishment appears to be contingent on the alignment of favorable temperature and moisture regimes. This research suggests that threshold changes in the climate system can fundamentally alter regeneration dynamics within upper treeline ecotones and, through the use of regime-shift analysis, reveals important climate-vegetation linkages.
The method of pulsed x-ray detection with a diode laser.
Liu, Jun; Ouyang, Xiaoping; Zhang, Zhongbing; Sheng, Liang; Chen, Liang; Tan, Xinjian; Weng, Xiufeng
2016-12-01
A new class of pulsed X-ray detection methods by sensing carrier changes in a diode laser cavity has been presented and demonstrated. The proof-of-principle experiments on detecting pulsed X-ray temporal profile have been done through the diode laser with a multiple quantum well active layer. The result shows that our method can achieve the aim of detecting the temporal profile of a pulsed X-ray source. We predict that there is a minimum value for the pre-bias current of the diode laser by analyzing the carrier rate equation, which exists near the threshold current of the diode laser chip in experiments. This behaviour generally agrees with the characterizations of theoretical analysis. The relative sensitivity is estimated at about 3.3 × 10 -17 C ⋅ cm 2 . We have analyzed the time scale of about 10 ps response with both rate equation and Monte Carlo methods.
Oxygen Concentration Flammability Threshold Tests for the Constellation Program
NASA Technical Reports Server (NTRS)
Williams, James H.
2007-01-01
The CEV atmosphere will likely change because the craft will be used as a LEO spacecraft, a lunar spacecraft, and an orbital spacecraft. A possible O2 percentage increase and overall pressure decrease affect pressure vessel certifications on the spacecraft. A 34% minimum threshold is wanted; higher is better when the atmosphere changes. WSTF suggests testing all materials/components to find the flammability threshold, pressure, and atmosphere.
Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A
2006-02-01
Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
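The discrimination rule described above is simple enough to sketch directly: for each of the last 12 sensed beats, compute the product of the sum and the standard deviation of the far-field electrogram samples in a 188-ms window centered on the sensed event, and classify the episode as lead-failure noise if the minimum of these values falls below 0.35 mV-mV. Taking the absolute value of the sum and the exact indexing convention are assumptions of this sketch.

    import numpy as np

    def baseline_measure(egm_mv, fs_hz, sense_index):
        # Product of the sum (mV) and standard deviation (mV) of the far-field
        # electrogram samples in a 188-ms window centered on a sensed event.
        half = int(round(0.094 * fs_hz))
        window = np.asarray(egm_mv[max(0, sense_index - half):sense_index + half],
                            dtype=float)
        return abs(window.sum()) * window.std()   # |sum| is an assumption

    def is_lead_failure_noise(egm_mv, fs_hz, sense_indices, threshold=0.35):
        # Minimum baseline measure over the last 12 sensed beats below
        # 0.35 mV-mV suggests oversensing of lead-failure noise rather than VT/VF.
        recent = sense_indices[-12:]
        return min(baseline_measure(egm_mv, fs_hz, i) for i in recent) < threshold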
A FPGA-based Measurement System for Nonvolatile Semiconductor Memory Characterization
NASA Astrophysics Data System (ADS)
Bu, Jiankang; White, Marvin
2002-03-01
Low voltage, long retention, high density SONOS nonvolatile semiconductor memory (NVSM) devices are ideally suited for PCMCIA, FLASH and 'smart' cards. The SONOS memory transistor requires characterization with an accurate, rapid measurement system with minimum disturbance to the device. The FPGA-based measurement system includes three parts: 1) a pattern generator implemented with XILINX FPGAs and corresponding software, 2) a high-speed, constant-current, threshold voltage detection circuit, and 3) a data evaluation program implemented in LabVIEW. Fig. 1 shows the general block diagram of the FPGA-based measurement system. The function generator is designed and simulated with XILINX Foundation Software. Under the control of the specific erase/write/read pulses, the analog detect circuit applies operational modes to the SONOS device under test (DUT) and determines the change of the memory-state of the SONOS nonvolatile memory transistor. The TEK460 digitizes the analog threshold voltage output and sends it to the PC. The data is filtered and averaged with a LabVIEW program running on the PC and displayed on the monitor in real time. We have implemented the pattern generator with XILINX FPGAs. Fig. 2 shows the block diagram of the pattern generator. We realized the logic control with a state machine design. Fig. 3 shows a small part of the state machine. The flexibility of the FPGAs enhances the capabilities of this system and allows measurement variations without hardware changes. The characterization of the nonvolatile memory transistor device under test (DUT), as a function of programming voltage and time, is achieved by a high-speed, constant-current threshold voltage detection circuit. The analog detection circuit incorporates fast analog switches controlled digitally by the FPGAs. The schematic circuit diagram is shown in Fig. 4. The various operational modes for the DUT are realized with control signals applied to the analog switches (SW) as shown in Fig. 5. A LabVIEW program, on a PC platform, collects and processes the data. The data is displayed on the monitor in real time. This time-domain filtering reduces the digitizing error. Fig. 6 shows the data processing. SONOS nonvolatile semiconductor memories are characterized by erase/write, retention and endurance measurements. Fig. 7 shows the erase/write characteristics of an n-channel, 5 V programmable SONOS memory transistor. Fig. 8 shows the retention characteristic of the same SONOS transistor. We have used this system to characterize SONOS nonvolatile semiconductor memory transistors. The attractive features of the test system design lie in the cost-effectiveness and flexibility of the test pattern implementation, fast read-out of memory state, low power, high precision determination of the device threshold voltage, and perhaps most importantly, minimum disturbance, which is indispensable for nonvolatile memory characterization.
Yamano, Tetsuo; Shimizu, Mitsuru; Noda, Tsutomu
2005-07-01
We compared the results of the multiple-dose guinea pig maximization test (GPMT) and the non-radioactive murine local lymph-node assay (LLNA) for various biocides. Thirteen out of 17 biocides positive in the GPMT gave positive results in the LLNA. In the GPMT, the minimum first induction doses ranged over four orders of magnitude (0.00005-0.5%), while elicitation-threshold doses, which were evaluated using an optimally sensitized group of animals in the multiple-dose studies, ranged over five orders of magnitude (0.00006-2.8%). In the LLNA, minimum induction doses ranged over more than three orders of magnitude (0.01-30%). With respect to the 13 biocides that were positive in both the GPMT and the LLNA, the results were quantitatively compared. When compared after conversion to the corresponding area doses (µg/cm²), the minimum doses required to elicit a skin reaction in guinea pigs were always lower than those required for induction in mice, for all biocides. The correlation between minimum induction doses from the GPMT and the LLNA seemed poor (r=0.57), while that between minimum induction doses in the LLNA and elicitation-threshold doses in the GPMT was relatively good (r=0.73). The results suggest the possibility of estimating human elicitation-threshold doses, which are definitely lacking in the process of risk assessment for skin sensitizers, from the data of the LLNA.
Nordanstig, J; Pettersson, M; Morgan, M; Falkenberg, M; Kumlien, C
2017-09-01
Patient reported outcomes are increasingly used to assess outcomes after peripheral arterial disease (PAD) interventions. VascuQoL-6 (VQ-6) is a PAD specific health-related quality of life (HRQoL) instrument for routine clinical practice and clinical research. This study assessed the minimum important difference for the VQ-6 and determined thresholds for the minimum important difference and substantial clinical benefit following PAD revascularisation. This was a population-based observational cohort study. VQ-6 data from the Swedvasc Registry (January 2014 to September 2016) was analysed for revascularised PAD patients. The minimum important difference was determined using a combination of a distribution based and an anchor-based method, while receiver operating characteristic curve analysis (ROC) was used to determine optimal thresholds for a substantial clinical benefit following revascularisation. A total of 3194 revascularised PAD patients with complete VQ-6 baseline recordings (intermittent claudication (IC) n = 1622 and critical limb ischaemia (CLI) n = 1572) were studied, of which 2996 had complete VQ-6 recordings 30 days and 1092 a year after the vascular intervention. The minimum important difference 1 year after revascularisation for IC patients ranged from 1.7 to 2.2 scale steps, depending on the method of analysis. Among CLI patients, the minimum important difference after 1 year was 1.9 scale steps. ROC analyses demonstrated that the VQ-6 discriminative properties for a substantial clinical benefit was excellent for IC patients (area under curve (AUC) 0.87, sensitivity 0.81, specificity 0.76) and acceptable in CLI (AUC 0.736, sensitivity 0.63, specificity 0.72). An optimal VQ-6 threshold for a substantial clinical benefit was determined at 3.5 scale steps among IC patients and 4.5 in CLI patients. The suggested thresholds for minimum important difference and substantial clinical benefit could be used when evaluating VQ-6 outcomes following different interventions in PAD and in the design of clinical trials. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
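ROC-based threshold selection of the kind described above is commonly done by maximizing the Youden index over candidate cutoffs of the change score; whether the registry analysis used this exact criterion is not stated here, so the sketch below is an illustration only.

    import numpy as np

    def youden_optimal_threshold(change_scores, substantial_benefit):
        # Pick the change-score cutoff maximizing sensitivity + specificity - 1.
        scores = np.asarray(change_scores, dtype=float)
        outcome = np.asarray(substantial_benefit, dtype=bool)
        best_t, best_j = None, -1.0
        for t in np.unique(scores):
            pred = scores >= t
            sens = np.mean(pred[outcome]) if outcome.any() else 0.0
            spec = np.mean(~pred[~outcome]) if (~outcome).any() else 0.0
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_t = j, t
        return best_t, best_j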
Vocal warm-up increases phonation threshold pressure in soprano singers at high pitch.
Motel, Tamara; Fisher, Kimberly V; Leydon, Ciara
2003-06-01
Vocal warm-up is thought to optimize singing performance. We compared effects of short-term, submaximal, vocal warm-up exercise with those of vocal rest on the soprano voice (n = 10, ages 19-21 years). Dependent variables were the minimum subglottic air pressure required for vocal fold oscillation to occur (phonation threshold pressure, Pth), and the maximum and minimum phonation fundamental frequency. Warm-up increased Pth for high pitch phonation (p = 0.033), but not for comfortable (p = 0.297) or low (p = 0.087) pitch phonation. No significant difference in the maximum phonation frequency (p = 0.193) or minimum frequency (p = 0.222) was observed. An elevated Pth at controlled high pitch, but an unchanging maximum and minimum frequency production suggests that short-term vocal exercise may increase the viscosity of the vocal fold and thus serve to stabilize the high voice.
The human, primate and rabbit ultraviolet action spectra
NASA Technical Reports Server (NTRS)
Pitts, D. G.; Gibbons, W. D.
1972-01-01
A 5000 watt xenon-mercury high pressure lamp was used to produce a continuous ultraviolet spectrum. Human and animal exposures were made to establish the photokeratitis threshold and abiotic action spectrum. The lower limit of the abiotic action spectrum was 220 nm while the upper limit was 310 nm. The radiant exposure threshold at 270 nm was 0.005 watts/sq cm for the rabbit, 0.004 watts/sq cm for the primate, and 0.004 watts/sq cm for the human. The rabbit curve was bi-peaked with minimums at 220 nm, 240 nm and 270 nm. The primate curve was tri-peaked with minimums at 220 nm, 240 nm and 270 nm. The human data showed a rather shallow curve with a minimum at 270 nm. Formulas and calculations are given to predict minimum exposure times for ocular damage to man in outer space, to establish valid safety criteria, and to establish protective design criteria.
Chen, Hongxia; Yang, Zaifu; Zou, Xianbiao; Wang, Jiarui; Zhu, Jianguo; Gu, Ying
2014-01-01
The purpose of this study was to explore the retinal injury thresholds in rabbits and evaluate the influence of retinal pigmentation on threshold irradiance at laser wavelengths of 532, 578, and 630 nm which might be involved in hypocrellin B (HB) and hematoporphyrin monomethyl ether (HMME) photodynamic therapy (PDT) for choroidal neovascularization (CNV). The eyes of pigmented and non-pigmented rabbits were exposed to 532, 578, and 630 nm lasers coupled to a slit lamp biological microscope. The exposure duration was 100 seconds and the retinal spot size was 2 mm throughout the experiment. The minimum visible lesions were detected by funduscopy at 1 and 24 hours post exposure. Bliss probit analysis was performed to determine the ED50 thresholds, fiducial limits and probit slope. In pigmented rabbits, the 24-hour retinal threshold irradiances at 532, 578, and 630 nm were 1,003, 1,475, and 1,720 mW/cm², respectively. In non-pigmented rabbits, the 24-hour threshold irradiances were 1,657, 1,865, and 15,360 mW/cm², respectively. The ED50 for 24-hour observation differed very little from the ED50 for 1-hour observation. The non-pigmented rabbits required a ninefold increase in threshold irradiance at 630 nm compared to the pigmented rabbits. This study will contribute to the knowledge base for the limits of laser irradiance in application of HB or HMME PDT for CNV. © 2013 Wiley Periodicals, Inc.
Improving Nocturnal Fire Detection with the VIIRS Day-Night Band
NASA Technical Reports Server (NTRS)
Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.
2016-01-01
Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing System (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and including a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but includes a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and High Park fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer resolution ASTER data (90 m). Part (30%) of this increase is likely due to a combined use of DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimate of fire characteristics (smoldering and flaming) at night.
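A minimal sketch of the combined selection idea described above, assuming a lowered 4-µm brightness temperature cut plus a minimum DNB radiance; the threshold values and array names are placeholders, not the operational FILDA settings:

import numpy as np

def candidate_fire_pixels(bt4_kelvin, dnb_radiance,
                          bt4_threshold=295.0,      # lowered 4-um BT cut (assumed)
                          dnb_min_radiance=5e-9):   # minimum DNB radiance (assumed)
    bt4 = np.asarray(bt4_kelvin, dtype=float)
    dnb = np.asarray(dnb_radiance, dtype=float)
    return (bt4 > bt4_threshold) & (dnb > dnb_min_radiance)

# toy scene: only the warm AND bright pixels become candidates
bt4 = np.array([[290.0, 301.0], [288.0, 296.0]])
dnb = np.array([[1e-9, 8e-9], [2e-9, 9e-9]])
print(candidate_fire_pixels(bt4, dnb))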
Development of a 32-channel ASIC for an X-ray APD detector onboard the ISS
NASA Astrophysics Data System (ADS)
Arimoto, Makoto; Harita, Shohei; Sugita, Satoshi; Yatsu, Yoichi; Kawai, Nobuyuki; Ikeda, Hirokazu; Tomida, Hiroshi; Isobe, Naoki; Ueno, Shiro; Mihara, Tatehiro; Serino, Motoko; Kohmura, Takayoshi; Sakamoto, Takanori; Yoshida, Atsumasa; Tsunemi, Hiroshi; Hatori, Satoshi; Kume, Kyo; Hasegawa, Takashi
2018-02-01
We report on the design and performance of a mixed-signal application specific integrated circuit (ASIC) dedicated to avalanche photodiodes (APDs) in order to detect hard X-ray emissions in a wide energy band onboard the International Space Station. To realize wide-band detection from 20 keV to 1 MeV, we use Ce:GAGG scintillators, each coupled to an APD, with low-noise front-end electronics capable of achieving a minimum energy detection threshold of 20 keV. The developed ASIC has the ability to read out 32-channel APD signals using 0.35 μm CMOS technology, and an analog amplifier at the input stage is designed to suppress the capacitive noise primarily arising from the large detector capacitance of the APDs. The ASIC achieves a performance of 2099 e- + 1.5 e-/pF at root mean square (RMS) with a wide 300 fC dynamic range. Coupling a reverse-type APD with a Ce:GAGG scintillator, we obtain an energy resolution of 6.7% (FWHM) at 662 keV and a minimum detectable energy of 20 keV at room temperature (20 °C). Furthermore, we examine the radiation tolerance for space applications by using a 90 MeV proton beam, confirming that the ASIC is free of single-event effects and can operate properly without serious degradation in analog and digital processing.
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Portable multispectral fluorescence imaging system for food safety applications
NASA Astrophysics Data System (ADS)
Lefcourt, Alan M.; Kim, Moon S.; Chen, Yud-Ren
2004-03-01
Fluorescence can be a sensitive method for detecting food contaminants. Of particular interest is detection of fecal contamination as feces is the source of many pathogenic organisms. Feces generally contain chlorophyll a and related compounds due to ingestion of plant materials, and these compounds can readily be detected using fluorescence techniques. Described is a fluorescence-imaging system consisting primarily of a UV light source, an intensified camera with a six-position filter wheel, and software for controlling the system and automatically analyzing the resulting images. To validate the system, orchard apples artificially contaminated with dairy feces were used in a "hands-on" public demonstration. The contamination sites were easily identified using automated edge detection and threshold detection algorithms. In addition, by applying feces to apples and then washing sets of apples at hourly intervals, it was determined that five h was the minimum contact time that allowed identification of the contamination site after the apples were washed. There are many potential uses for this system, including studying the efficacy of apple washing systems.
Rosotti, Giovanni P; Juhasz, Attila; Booth, Richard A; Clarke, Cathie J
2016-07-01
We investigate the minimum planet mass that produces observable signatures in infrared scattered light and submillimetre (submm) continuum images and demonstrate how these images can be used to measure planet masses to within a factor of about 2. To this end, we perform multi-fluid gas and dust simulations of discs containing low-mass planets, generating simulated observations at 1.65, 10 and 850 μm. We show that the minimum planet mass that produces a detectable signature is ∼15 M⊕: this value is strongly dependent on disc temperature and changes slightly with wavelength (favouring the submm). We also confirm previous results that there is a minimum planet mass of ∼20 M⊕ that produces a pressure maximum in the disc: only planets above this threshold mass generate a dust trap that can eventually create a hole in the submm dust. Below this mass, planets produce annular enhancements in dust outwards of the planet and a reduction in the vicinity of the planet. These features are in steady state and can be understood in terms of variations in the dust radial velocity, imposed by the perturbed gas pressure radial profile, analogous to a traffic jam. We also show how planet masses can be derived from structure in scattered light and submm images. We emphasize that simulations with dust need to be run over thousands of planetary orbits so as to allow the gas profile to achieve a steady state and caution against the estimation of planet masses using gas-only simulations.
NASA Astrophysics Data System (ADS)
Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke
2016-04-01
Automated lineament analysis on remotely sensed data requires two general processing steps: the identification of neighboring pixels showing high contrast and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Any deviation from a flat plane, defined by a roughness threshold, is considered relief roughness. In our novel approach a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently oriented straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance of the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative signs of the roughness parameter represent concave relief structures such as valleys, and positive signs represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter highlighting maximum and minimum roughness peaks, which represent the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness difference. We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds to the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
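The following Python fragment sketches the roughness measure described above for a single square window, under the assumption that each straight line joins two opposite edge cells (so its value at the centre is their mean); window traversal, multi-scale handling and thinning are omitted:

import numpy as np

def window_roughness(window, cellsize=1.0):
    """window: square 2-D array of elevations with odd side length."""
    n = window.shape[0]
    c = n // 2
    z_c = window[c, c]
    edges = [(r, col) for r in range(n) for col in range(n)
             if r in (0, n - 1) or col in (0, n - 1)]
    values = []
    for (r1, c1) in edges:
        r2, c2 = n - 1 - r1, n - 1 - c1              # opposite edge cell
        if (r1, c1) >= (r2, c2):
            continue                                 # each pair only once
        z_line = 0.5 * (window[r1, c1] + window[r2, c2])  # line value at the centre
        dist = cellsize * np.hypot(r2 - r1, c2 - c1)
        values.append((z_c - z_line) / dist)
    return max(values), min(values)   # convex (+) and concave (-) roughness

dem_window = np.array([[10, 10, 10],
                       [10, 14, 10],
                       [10, 10, 10]], dtype=float)
print(window_roughness(dem_window))   # strongly positive: a ridge-like bump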
Detection of sub micro Gray dose levels using OSL phosphor LiMgPO4:Tb,B
NASA Astrophysics Data System (ADS)
Rawat, N. S.; Dhabekar, Bhushan; Muthe, K. P.; Koul, D. K.; Datta, D.
2017-04-01
Detection of sub-microgray doses finds application in personnel and environmental monitoring and in nuclear forensics. The recently developed LiMgPO4:Tb,B (LMP) is a highly sensitive Optically Stimulated Luminescence (OSL) phosphor with excellent dosimetric properties. The OSL emission spectrum of LMP consists of several peaks attributed to characteristic Tb3+ emission. The OSL emission peak at 380 nm is well matched to the bi-alkali PMT used in the RISO reader system. It is demonstrated that a significant improvement in the dose detection threshold can be realized for LMP by optimization of continuous wave (CW-) OSL parameters such as stimulation intensity and readout time. A minimum measurable dose (MMD) as low as 0.49 μGy in a readout time of less than 1 s at a stimulation intensity of 32 mW/cm² has been achieved using this phosphor. Recommendations for the choice of parameters for personnel and environmental monitoring are also discussed.
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
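As an illustration of the minimum-error-thresholding step mentioned above, here is a compact Kittler-Illingworth-style sketch in Python (not the authors' implementation; the bin count and the toy data are arbitrary):

import numpy as np

def minimum_error_threshold(image, nbins=256):
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    g = 0.5 * (edges[:-1] + edges[1:])            # bin centres
    best_t, best_j = None, np.inf
    for t in range(1, nbins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (p[:t] * g[:t]).sum() / p1
        m2 = (p[t:] * g[t:]).sum() / p2
        v1 = (p[:t] * (g[:t] - m1) ** 2).sum() / p1
        v2 = (p[t:] * (g[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # Kittler-Illingworth criterion (note ln of variance = 2 ln sigma)
        j = 1 + p1 * np.log(v1) + p2 * np.log(v2) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = edges[t], j
    return best_t

rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 5, 5000), rng.normal(150, 10, 5000)])
print(minimum_error_threshold(img))   # a value roughly between the two modes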
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ε, but to give insight into how to evaluate and set the threshold on a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
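A minimal sketch of the recommended exit test, assuming the optimisation metric is recorded once per iteration and that the tolerance ε is applied as a relative improvement (the relative form is an assumption, not stated in the abstract):

def should_stop(metric_history, epsilon):
    """Stop when the minimum over the three most recent iterations no longer
    improves on the minimum over the fourth-to-sixth most recent iterations
    by more than a user-chosen relative epsilon."""
    if len(metric_history) < 6:
        return False
    recent = min(metric_history[-3:])        # iterations n-2 .. n
    earlier = min(metric_history[-6:-3])     # iterations n-5 .. n-3
    return (earlier - recent) <= epsilon * abs(earlier)

history = [10.0, 7.0, 5.5, 5.0, 4.9, 4.85, 4.84]
print(should_stop(history, epsilon=0.01))    # False: still improving here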
Lucas, Victoria S; McDonald, Fraser; Andiappan, Manoharan; Roberts, Graham
2017-05-01
The purpose of this study was to explore the applicability of periodontal ligament visibility (PLV) at the 18-year threshold. This mandibular maturity marker is graded into four separate age related stages, PLV-A, PLV-B, PLV-C, and PLV-D. These are discernible on a dental panoramic tomograph (DPT). The sample comprised a total of 2000 DPTs evenly divided into half yearly age bands from 16.00 to 25.99 years with 50 females and 50 males in each age band. It was found that PLV-A and PLV-B had minimum values below the 18-year threshold. PLV-C and PLV-D in females had minimum values of 18.08 and 18.58 years, respectively. In males, the minimum values for PLV-C was 18.10 years and PLV-D was 18.67 years. It was concluded that the presence of PLV-C or PLV-D indicates that a subject is over 18 years with a very high level of probability.
Wang, Chao; Guo, Xiao-Jing; Xu, Jin-Fang; Wu, Cheng; Sun, Ya-Lin; Ye, Xiao-Fei; Qian, Wei; Ma, Xiu-Qiang; Du, Wen-Min; He, Jia
2012-01-01
The detection of signals of adverse drug events (ADEs) has increased because of the use of data mining algorithms in spontaneous reporting systems (SRSs). However, different data mining algorithms have different traits and conditions for application. The objective of our study was to explore the application of association rule (AR) mining in ADE signal detection and to compare its performance with that of other algorithms. Monte Carlo simulation was applied to generate drug-ADE reports randomly according to the characteristics of SRS datasets. One thousand simulated datasets were mined by AR and other algorithms. On average, 108,337 reports were generated by the Monte Carlo simulation. Based on the predefined criterion that 10% of the drug-ADE combinations were true signals, with RR equal to 10, 4.9, 1.5, and 1.2, AR detected, on average, 284 suspected associations with a minimum support of 3 and a minimum lift of 1.2. The area under the receiver operating characteristic (ROC) curve of the AR was 0.788, which was equivalent to that shown for other algorithms. Additionally, AR was applied to reports submitted to the Shanghai SRS in 2009. Five hundred seventy combinations were detected using AR from 24,297 SRS reports, and they were compared with recognized ADEs identified by clinical experts and various other sources. AR appears to be an effective method for ADE signal detection, both in simulated and real SRS datasets. The limitations of this method exposed in our study, i.e., non-uniform threshold settings and redundant rules, require further research.
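The screening rule implied above (support of at least 3 reports and lift of at least 1.2) can be sketched as follows; the report format is a toy example, not the SRS schema:

from collections import Counter

def signal_pairs(reports, min_support=3, min_lift=1.2):
    """reports: list of (drug, ade) tuples, one per spontaneous report."""
    n = len(reports)
    pair_counts = Counter(reports)
    drug_counts = Counter(d for d, _ in reports)
    ade_counts = Counter(a for _, a in reports)
    signals = []
    for (drug, ade), support in pair_counts.items():
        # lift = P(drug, ade) / (P(drug) * P(ade))
        lift = (support / n) / ((drug_counts[drug] / n) * (ade_counts[ade] / n))
        if support >= min_support and lift >= min_lift:
            signals.append((drug, ade, support, round(lift, 2)))
    return signals

reports = [("drugA", "rash")] * 5 + [("drugA", "nausea")] * 2 + \
          [("drugB", "rash")] * 3 + [("drugB", "headache")] * 10
print(signal_pairs(reports))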
Wensing, M; Penninks, A H; Hefle, S L; Akkerdaas, J H; van Ree, R; Koppelman, S J; Bruijnzeel-Koomen, C A F M; Knulst, A C
2002-12-01
The risk for allergic reactions depends on the sensitivity of individuals and the quantities of offending food ingested. The sensitivity varies among allergic individuals, as does the threshold dose of a food allergen capable of inducing an allergic reaction. This study aimed at determining the distribution of minimum provoking doses of hazelnut in a hazelnut-allergic population. Thirty-one patients with a history of hazelnut-related allergic symptoms, a positive skin prick test to hazelnut and/or an elevated specific IgE level, were included. Double-blind, placebo-controlled food challenges (DBPCFC) were performed with seven increasing doses of dried hazelnut (1 mg to 1 g hazelnut protein) randomly interspersed with seven placebo doses. Twenty-nine patients had a positive challenge. Itching of the oral cavity and/or lips was the first symptom in all cases. Additional gastrointestinal symptoms were reported in five patients and difficulty in swallowing in one patient. Lip swelling was observed in two patients, followed by generalized urticaria in one of these. Threshold doses for eliciting subjective reactions varied from a dose of 1 mg up to 100 mg hazelnut protein (equivalent to 6.4-640 mg hazelnut meal). Extrapolation of the dose-response curve showed that 50% of our hazelnut-allergic population will suffer from an allergic reaction after ingestion of 6 mg (95% CI, 2-11 mg) of hazelnut protein. Objective symptoms were observed in two patients after 1 and 1,000 mg, respectively. DBPCFCs demonstrated threshold doses in half of the hazelnut-allergic patients similar to doses previously described to be hidden in consumer products. This stresses the need for careful labelling and strategies to prevent and detect contamination of food products with hazelnut residues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashton, Neil K.; Liss, Stephanie A.; Walker, Ricardo W.
Telemetry studies are often used to investigate sturgeon habitat use and movement patterns; however, existing acoustic transmitters are generally too large to implant into age-0 sturgeon without harming the fish. Recent development of a miniaturized acoustic transmitter (cylindrical, 0.7 g in air, 24.2 mm long, 5.0 mm diameter) with up to 365 d battery life has the potential to advance our understanding of age-0 sturgeon ecology in rivers and lakes. Prior to use in field studies, it is essential to conduct experiments evaluating potential adverse transmitter effects on fish. We tested transmitter retention, fish survival, and growth of a broad size range of age-0 white sturgeon (Acipenser transmontanus; 158–277 mm fork length; 26–126 g; 0.6–2.6% transmitter burden) in an 84 d laboratory study, with an ultimate goal of determining a minimum size threshold of sturgeon that can be implanted with this acoustic transmitter. At 84 d post-implantation, transmitter retention and fish survival were 100%. Specific growth rates were reduced at 7 and 14 d post-implantation, resulting in minimum fork length thresholds of 250 and 171 mm, respectively. Juveniles implanted with transmitters regained their growth potential by 28 d post-implantation and no size differences were detected in comparisons with unmarked control fish. This study demonstrates the ability to implant small age-0 sturgeon with high transmitter retention and fish survival, and only minor growth effects. Use of new miniaturized acoustic transmitters may give researchers a means to address questions about young-of-the-year fish recruitment, ecological patterns, and potentially advance conservation management of sturgeon populations.
NASA Astrophysics Data System (ADS)
Rodrigo, F. S.; Gómez-Navarro, J. J.; Montávez Gómez, J. P.
2011-07-01
In this work, a reconstruction of climatic conditions in Andalusia (southern Iberian Peninsula) during the period 1701-1850, as well as an evaluation of its associated uncertainties, is presented. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), when increasing atmospheric CO2 concentrations were of minor importance. The reconstruction is based on the analysis of a wide variety of documentary data. The reconstruction methodology is based on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption of a normal distribution for the seasonal means of climate variables. This reconstruction methodology is tested within the pseudoreality of a high-resolution paleoclimate simulation performed with the regional climate model MM5 coupled to the global model ECHO-G. Results show that the reconstructions are influenced by the reference period chosen and the threshold values used to define extreme values. This creates uncertainties which are assessed within the context of the climate simulation. An ensemble of reconstructions was obtained using two different reference periods and two pairs of percentiles as threshold values. Results correspond to winter temperature and to winter, spring, and autumn rainfall, and they are compared with simulations of the climate model for the considered period. The comparison of the distribution functions corresponding to the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequency of dry and warm (wet and cold) winters was lower (higher) than during the reference period. In spring and autumn, an increase (decrease) in the frequency of wet (dry) seasons was detected. Future research challenges are outlined.
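A hedged sketch of the reconstruction idea: if the documentary record gives the fraction of seasons falling below a lower threshold and above an upper threshold, and seasonal means are assumed normally distributed, the two quantile equations t = μ + σz can be solved for μ and σ. The threshold values and frequencies below are invented for illustration, not taken from the study:

import numpy as np
from scipy.stats import norm

def reconstruct_mean_sd(t_low, f_low, t_high, f_high):
    """t_low is undershot with frequency f_low (P(X < t_low) = f_low);
    t_high is exceeded with frequency f_high (P(X > t_high) = f_high)."""
    z_low = norm.ppf(f_low)
    z_high = norm.ppf(1.0 - f_high)
    sigma = (t_high - t_low) / (z_high - z_low)
    mu = t_low - sigma * z_low
    return mu, sigma

# e.g. winter temperature: 10% of winters below 6.0 degC, 15% above 11.0 degC
print(reconstruct_mean_sd(6.0, 0.10, 11.0, 0.15))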
NASA Astrophysics Data System (ADS)
Melo, D.; Yelós, L. D.; Garcia, B.; Rovero, A. C.
2017-10-01
Gamma-ray astronomy has opened up the most energetic part of the electromagnetic spectrum using ground-based and orbiting instruments, which provide information for understanding sources of different types. Ground-based telescope arrays use the Cherenkov light produced by the charged particles of extensive air showers generated in the Earth's atmosphere to identify gamma rays. This imposes a minimum energy threshold on the gamma rays to be detected. Towards the high-energy end of the spectrum, however, the amount of Cherenkov radiation produced by a gamma-ray photon guarantees its detectability, the limiting factor being the low flux of the sources. For this reason, the detection strategy consists of using arrays of small telescopes. In this work, we investigate the feasibility of detecting gamma-ray cascades using Cherenkov telescopes, in the range of 100 GeV to 2 TeV, at the CASLEO site, characterizing the response of a system of three Cherenkov telescopes.
Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J
2010-01-01
Auditory Brainstem Responses (ABRs) are used as objective method for diagnostics and quantification of hearing loss. Many methods for automatic recognition of ABRs have been developed, but none of them include the individual measurement setup in the analysis. The purpose of this work was to design a fast recognition scheme for chirp-evoked ABRs that is adjusted to the individual measurement condition using spontaneous electroencephalographic activity (SA). For the classification, the kernel-based novelty detection scheme used features based on the inter-sweep instantaneous phase synchronization as well as energy and entropy relations in the time-frequency domain. This method provided SA discrimination from stimulations above the hearing threshold with a minimum number of sweeps, i.e., 200 individual responses. It is concluded that the proposed paradigm, processing procedures and stimulation techniques improve the detection of ABRs in terms of the degree of objectivity, i.e., automation of procedure, and measurement time.
Thresholds of Transient Cavitation Produced by Pulsed Ultrasound in a Controlled Nuclei Environment.
NASA Astrophysics Data System (ADS)
Holland, Christy Katherine Smith
The possibility of hazardous bioeffects from medical ultrasound examinations and therapy, although not demonstrated in current epidemiologic data, is still of interest to the medical community. In particular, concern persists over the potential of damage at the cellular level due to transient cavitation produced by diagnostic and high intensity therapeutic ultrasound. Transient cavitation is a discrete phenomenon which relies on the existence of stabilized nuclei, or pockets of gas within a host fluid, for its genesis. A convenient descriptor for assessing the likelihood of transient cavitation is the threshold pressure, or the minimum acoustic pressure necessary to initiate bubble growth and subsequent collapse. Experimental measurements of cavitation thresholds are presented here which elucidate the importance of ultrasound host fluid and nuclei parameters in determining these thresholds. These results are interpreted in the context of an approximate theory, included as an appendix, describing the relationship between these parameters and cavitation threshold pressures. An automated experimental apparatus has been developed to determine thresholds for cavitation produced in a fluid by short tone bursts of ultrasound at 0.76, 0.99, and 2.30 MHz. A fluid jet was used to convect potential cavitation nuclei through the focal region of the insonifying transducer. Potential nuclei tested include 1 μm polystyrene spheres, microbubbles in the 1-10 μm range that are stabilized with human serum albumin, and whole blood constituents. Cavitation was detected by a passive acoustical technique which is sensitive to sound scattered from cavitation bubbles. Measurements of the transient cavitation threshold in water, in a fluid of higher viscosity, and in diluted whole blood are presented. Results from these experiments which permit the control of nuclei and host fluid properties are compared to the approximate analytical theory for the prediction of the onset of cavitation.
Rohatensky, Mitchell G; Livingstone, Devon M; Mintchev, Paul; Barnes, Heather K; Nakoneshny, Steven C; Demetrick, Douglas J; Dort, Joseph C; van Marle, Guido
2018-02-08
Oropharyngeal Squamous Cell Carcinoma (OPSCC) is increasing in incidence despite a decline in traditional risk factors. Human Papilloma Virus (HPV), specifically subtypes 16, 18, 31 and 35, has been implicated as the high-risk etiologic agent. HPV positive cancers have a significantly better prognosis than HPV negative cancers of comparable stage, and may benefit from different treatment regimens. Currently, HPV related carcinogenesis is established indirectly through Immunohistochemistry (IHC) staining for p16, a tumour suppressor gene, or polymerase chain reaction (PCR) that directly tests for HPV DNA in biopsied tissue. Loop mediated isothermal amplification (LAMP) is more accurate than IHC, more rapid than PCR and is significantly less costly. In previous work we showed that a subtype specific HPV LAMP assay performed similarly to PCR on purified DNA. In this study we examined the performance of this LAMP assay without DNA purification. We used LAMP assays using established primers for HPV 16 and 18, and new primers for HPV 31 and 35. LAMP reaction conditions were tested on serial dilutions of plasmid HPV DNA to confirm minimum viral copy number detection thresholds. LAMP was then performed directly on different human cell line samples without DNA purification. Our LAMP assays could detect 10⁵, 10³, 10⁴, and 10⁵ copies of plasmid DNA for HPV 16, 18, 31, and 35, respectively. All primer sets were subtype specific, with no cross-amplification. Our LAMP assays also reliably amplified subtype specific HPV DNA from samples without requiring DNA isolation and purification. The high-risk OPSCC HPV subtype-specific LAMP primer sets demonstrated excellent, clinically relevant minimum copy number detection thresholds with an easy readout system. Amplification directly from samples without purification illustrated the robust nature of the assay and the primers used. This lends further support to HPV type-specific LAMP assays, and these specific primer sets and assays can be further developed to test for HPV in OPSCC in resource- and lab-limited settings, or even for bedside testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orton, Elizabeth J., E-mail: eorton@physics.carleton.ca; Kemp, Robert A. de; Glenn Wells, R.
2014-10-15
Purpose: Myocardial perfusion imaging (MPI) is used for diagnosis and prognosis of coronary artery disease. When MPI studies are performed with positron emission tomography (PET) and the radioactive tracer rubidium-82 chloride (⁸²Rb), a small but non-negligible fraction of studies (∼10%) suffer from extracardiac interference: high levels of tracer uptake in structures adjacent to the heart which mask the true cardiac tracer uptake. At present, there are no clinically available options for automated detection or correction of this problem. This work presents an algorithm that detects and classifies the severity of extracardiac interference in ⁸²Rb PET MPI images and reports the accuracy and failure rate of the method. Methods: A set of 200 ⁸²Rb PET MPI images were reviewed by a trained nuclear cardiologist and interference severity reported on a four-class scale, from absent to severe. An automated algorithm was developed that compares uptake at the external border of the myocardium to three thresholds, separating the four interference severity classes. A minimum area of interference was required, and the search region was limited to that facing the stomach wall and spleen. Maximizing concordance (Cohen’s Kappa) and minimizing failure rate for the set of 200 clinician-read images were used to find the optimal population-based constants defining search limit and minimum area parameters and the thresholds for the algorithm. Tenfold stratified cross-validation was used to find optimal thresholds and report accuracy measures (sensitivity, specificity, and Kappa). Results: The algorithm was capable of detecting interference with a mean [95% confidence interval] sensitivity/specificity/Kappa of 0.97 [0.94, 1.00]/0.82 [0.66, 0.98]/0.79 [0.65, 0.92], and a failure rate of 1.0% ± 0.2%. The four-class overall Kappa was 0.72 [0.64, 0.81]. Separation of mild versus moderate-or-greater interference was performed with good accuracy (sensitivity/specificity/Kappa = 0.92 [0.86, 0.99]/0.86 [0.71, 1.00]/0.78 [0.64, 0.92]), while separation of moderate versus severe interference severity classes showed reduced sensitivity/Kappa but little change in specificity (sensitivity/specificity/Kappa = 0.83 [0.77, 0.88]/0.82 [0.77, 0.88]/0.65 [0.60, 0.70]). Specificity was greater than sensitivity for all interference classes. Algorithm execution time was <1 min. Conclusions: The algorithm produced here has a low failure rate and high accuracy for detection of extracardiac interference in ⁸²Rb PET MPI scans. It provides a fast, reliable, automated method for assessing severity of extracardiac interference.
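A schematic Python version of the four-class decision described above; the cut-offs, minimum-area value and search-region handling are placeholders, since the real values were optimised against clinician readings:

import numpy as np

def interference_class(border_uptake, in_search_region,
                       thresholds=(0.3, 0.5, 0.7),   # placeholder cut-offs
                       min_area_pixels=10):          # placeholder minimum area
    """border_uptake: uptake values at the external myocardial border,
    normalised to myocardial uptake; in_search_region: boolean mask limiting
    the search to the stomach-wall/spleen-facing border."""
    u = np.asarray(border_uptake, float)[np.asarray(in_search_region, bool)]
    severity = 0                                     # 0 = absent ... 3 = severe
    for level, t in enumerate(thresholds, start=1):
        if np.count_nonzero(u >= t) >= min_area_pixels:
            severity = level
    return severity

rng = np.random.default_rng(2)
uptake = rng.uniform(0, 0.9, 200)
print(interference_class(uptake, np.ones(200, bool)))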
Nanosecond pulse lasers for retinal applications.
Wood, John P M; Plunkett, Malcolm; Previn, Victor; Chidlow, Glyn; Casson, Robert J
2011-08-01
Thermal lasers are routinely used to treat certain retinal disorders although they cause collateral damage to photoreceptors. The current study evaluated a confined, non-conductive thermal, 3-nanosecond pulse laser in order to determine how to produce the greatest therapeutic range without causing collateral damage. Data were compared with that obtained from a standard thermal laser. Porcine ocular explants were used; apposed neuroretina was also in place for actual laser treatment. After treatment, the retina was removed and a calcein-AM assay was used to assess retinal pigmented epithelium (RPE) cell viability in the explants. Histological methods were also employed to examine lased transverse explant sections. Three-nanosecond pulse lasers with either a speckle- or a gaussian-beam profile were employed in the study. Comparisons were made with a 100 millisecond continuous wave (CW) 532 nm laser. The therapeutic energy range ratio was defined as the minimum visible effect threshold (VET) versus the minimum detectable RPE kill threshold. The 3-nanosecond lasers produced markedly lower minimum RPE kill threshold levels than the CW laser (e.g., 36 mJ/cm² for the speckle-beam and 89 mJ/cm² for the gaussian-beam profile nanosecond lasers vs. 7,958 mJ/cm² for the CW laser). VET values were also correspondingly lower for the nanosecond lasers (130 mJ/cm² for the speckle-beam and 219 mJ/cm² for the gaussian-beam profile 3-nanosecond lasers vs. 10,346 mJ/cm² for the CW laser). Thus, the therapeutic range ratios obtained with the nanosecond lasers were much more favorable than that obtained by the CW laser: 3.6:1 for the speckle-beam and 2.5:1 for the gaussian-beam profile 3-nanosecond lasers versus 1.3:1 for the CW laser. Nanosecond lasers, particularly with a speckle-beam profile, provide a much wider therapeutic range of energies over which RPE treatment can be performed, without damage to the apposed retina, as compared with conventional CW lasers. These results may have important implications for the treatment of retinal disease. Copyright © 2011 Wiley-Liss, Inc.
A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Bilen, Canan; Sterling, Jerry
2003-01-01
Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of the strength requirement and the available strength. This paper develops a procedure for determining such a threshold, with a desired degree of confidence, for structural materials with little or no industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one-dimensional) structural elements. One important aspect of the formulation is that it extrapolates the expected probability distribution for long-length specimens from a hypothesized probability distribution obtained from a shorter-length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.
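One way to realise the length extrapolation step is a weakest-link (chain-of-links) assumption with a Weibull fit to the short-specimen strengths; the sketch below illustrates that common approach, not necessarily the paper's exact formulation, and the parameter values are invented:

import numpy as np

def long_length_threshold(shape, scale, L_ratio, alpha=0.05):
    """shape, scale: Weibull parameters fitted to short-specimen strengths
    (assumed); L_ratio = L / L0. Under the weakest-link model
    F_L(x) = 1 - (1 - F0(x))**L_ratio, so the strength exceeded with
    probability 1 - alpha solves F_L(x) = alpha."""
    return scale * (-np.log(1.0 - alpha) / L_ratio) ** (1.0 / shape)

# e.g. a tendon 50 gauge lengths long, 95% confidence threshold
print(long_length_threshold(shape=10.0, scale=100.0, L_ratio=50.0, alpha=0.05))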
Wang, Xin; Jen, Philip H-S; Wu, Fei-Jian; Chen, Qi-Cai
2007-09-05
In acoustic communication, animals must extract biologically relevant signals that are embedded in a noisy environment. The present study examines how weak noise may affect the auditory sensitivity of neurons in the central nucleus of the mouse inferior colliculus (IC), which receives convergent excitatory and inhibitory inputs from both lower and higher auditory centers. Specifically, we studied the frequency sensitivity and minimum threshold of IC neurons using a pure tone probe and a weak white noise masker under a forward masking paradigm. For most IC neurons, the probe-elicited response was decreased by a weak white noise that was presented at a specific gap (i.e. time window). When presented within this time window, weak noise masking sharpened the frequency tuning curve and increased the minimum threshold of IC neurons. The degree of weak noise masking of these two measurements increased with noise duration. Sharpening of the frequency tuning curve and increasing of the minimum threshold of IC neurons during weak noise masking were mostly mediated through GABAergic inhibition. In addition, sharpening of the frequency tuning curve by the weak noise masker was more effective at the high than at the low frequency limb. These data indicate that in the real world ambient noise may improve the frequency sensitivity of IC neurons through GABAergic inhibition while inevitably decreasing the frequency response range and sensitivity of IC neurons.
Tailoring automatic exposure control toward constant detectability in digital mammography.
Salvagnini, Elena; Bosmans, Hilde; Struelens, Lara; Marshall, Nicholas W
2015-07-01
The automatic exposure control (AEC) modes of most full field digital mammography (FFDM) systems are set up to hold pixel value (PV) constant as breast thickness changes. This paper proposes an alternative AEC mode, set up to maintain some minimum detectability level, with the ultimate goal of improving object detectability at larger breast thicknesses. The default "opdose" AEC mode of a Siemens MAMMOMAT Inspiration FFDM system was assessed using poly(methyl methacrylate) (PMMA) of thickness 20, 30, 40, 50, 60, and 70 mm to find the tube voltage and anode/filter combination programmed for each thickness; these beam quality settings were used for the modified AEC mode. Detectability index (d'), in terms of a non-prewhitened model observer with eye filter, was then calculated as a function of tube current-time product (mAs) for each thickness. A modified AEC could then be designed in which detectability never fell below some minimum setting for any thickness in the operating range. In this study, the value was chosen such that the system met the achievable threshold gold thickness (Tt) in the European guidelines for the 0.1 mm diameter disc (i.e., Tt ≤ 1.10 μm gold). The default and modified AEC modes were compared in terms of contrast-detail performance (Tt), calculated detectability (d'), signal-difference-to-noise ratio (SDNR), and mean glandular dose (MGD). The influence of a structured background on object detectability for both AEC modes was examined using a CIRS BR3D phantom. Computer-based CDMAM reading was used for the homogeneous case, while the images with the BR3D background were scored by human observers. The default opdose AEC mode maintained PV constant as PMMA thickness increased, leading to a reduction in SDNR of 39% and in d' of 37% for the homogeneous background in going from 20 to 70 mm; introduction of the structured BR3D plate changed these figures to 22% (SDNR) and 6% (d'), respectively. Threshold gold thickness (0.1 mm diameter disc) for the default AEC mode in the homogeneous background increased by 62% in going from 20 to 70 mm PMMA thickness; in the structured background, the increase was 39%. Implementation of the modified mode entailed an increase in mAs at PMMA thicknesses >40 mm; the modified AEC held threshold gold thickness constant above 40 mm PMMA with a maximum deviation of 5% in the homogeneous background and 3% in the structured background. SDNR was also held constant with a maximum deviation of 4% and 2% for the homogeneous and the structured background, respectively. These results were obtained with an increase of MGD between 15% and 73% going from 40 to 70 mm PMMA thickness. This work has proposed and implemented a modified AEC mode, tailored toward constant detectability at larger breast thickness, i.e., above 40 mm PMMA equivalent. The desired improvement in object detectability could be obtained while maintaining MGD within the European guidelines achievable dose limit. (A study designed to verify the performance of the modified mode using more clinically realistic data is currently underway.)
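The modified-AEC selection step can be sketched as follows, assuming a measured or modelled d'(mAs) curve is available for each thickness; the mAs grid and d' values below are invented, not the measured Inspiration data:

import numpy as np

def mas_for_minimum_detectability(mas_grid, dprime_values, dprime_min):
    """Return the smallest mAs on the grid whose detectability index reaches
    the chosen floor (the constant-detectability condition)."""
    mas_grid = np.asarray(mas_grid, float)
    dprime_values = np.asarray(dprime_values, float)
    ok = dprime_values >= dprime_min
    if not ok.any():
        raise ValueError("requested detectability not reachable on this grid")
    return mas_grid[ok].min()

mas = np.array([40, 63, 80, 100, 125, 160], float)
dprime = np.array([2.1, 2.6, 3.0, 3.3, 3.7, 4.1])   # assumed curve at one thickness
print(mas_for_minimum_detectability(mas, dprime, dprime_min=3.2))   # -> 100.0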
Changes in heat waves indices in Romania over the period 1961-2015
NASA Astrophysics Data System (ADS)
Croitoru, Adina-Eliza; Piticar, Adrian; Ciupertea, Antoniu-Flavius; Roşca, Cristina Florina
2016-11-01
In the last two decades many climate change studies have focused on extreme temperatures as they have a significant impact on the environment and society. Among the weather events generated by extreme temperatures, heat waves are some of the most harmful. The main objective of this study was to detect and analyze changes in heat waves in Romania based on daily observation data (maximum and minimum temperature) over the extended summer period (May-September) using a set of 10 indices and to explore the spatial patterns of these changes. Heat wave data series were derived from daily maximum and minimum temperature data sets recorded at 29 weather stations across Romania over a 55-year period (1961-2015). In this study, the threshold chosen was the 90th percentile calculated based on a 15-day window centered on each calendar day, and for three baseline periods (1961-1990, 1971-2000, and 1981-2010). Two heat wave definitions were considered: at least three consecutive days when maximum temperature exceeds the 90th percentile, and at least three consecutive days when minimum temperature exceeds the 90th percentile. For each of them, five variables were calculated: amplitude, magnitude, number of events, duration, and frequency. Finally, 10 indices resulted for further analysis. The main results are: most of the indices have statistically significant increasing trends; only one index at one weather station indicated a statistically significant decreasing trend; the changes are more intense for heat waves detected based on maximum temperature compared with those obtained for heat waves identified based on minimum temperature; and the western and central regions of Romania are the most exposed to increasing heat waves.
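A sketch of the first definition above (at least three consecutive days with maximum temperature above the calendar-day 90th percentile from a 15-day window over a baseline period); leap days and multi-station handling are ignored for brevity, and the simulated data are illustrative only:

import numpy as np

def calendar_percentiles(tmax_baseline, q=90, window=15):
    """tmax_baseline: array shaped (years, 365) of daily maxima."""
    half = window // 2
    years, ndays = tmax_baseline.shape
    thr = np.empty(ndays)
    for d in range(ndays):
        idx = np.arange(d - half, d + half + 1) % ndays   # wrap at year ends
        thr[d] = np.percentile(tmax_baseline[:, idx], q)
    return thr

def heat_wave_events(tmax_year, thresholds, min_length=3):
    hot = np.asarray(tmax_year) > np.asarray(thresholds)
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i
        if (not h or i == len(hot) - 1) and start is not None:
            end = i if (h and i == len(hot) - 1) else i - 1
            if end - start + 1 >= min_length:
                events.append((start, end))
            start = None
    return events

rng = np.random.default_rng(3)
baseline = rng.normal(25, 5, size=(30, 365))
thr = calendar_percentiles(baseline)
year = rng.normal(26, 5, 365)
print(heat_wave_events(year, thr))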
Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K
2018-07-01
Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) to predict walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, from 1.13 to 1.26 meters/second for walking speed, and from 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
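The threshold derivation can be sketched as below for a test where lower values indicate better function (e.g. the 5 times sit-to-stand time); the simulated data and the 90% specificity target are illustrative, not OAI data:

import numpy as np

def threshold_at_specificity(test_values, walks_6000, target_spec=0.90):
    """Return the largest cut-off c such that classifying 'test <= c' as
    'walks >= 6000 steps/day' achieves at least the target specificity."""
    v = np.asarray(test_values, float)
    pos = np.asarray(walks_6000, bool)
    best = None
    for c in np.unique(v):                      # candidates in ascending order
        pred = v <= c
        spec = np.mean(~pred[~pos]) if (~pos).any() else 1.0
        if spec >= target_spec:
            best = c
    return best

rng = np.random.default_rng(4)
sit_to_stand = np.concatenate([rng.normal(11, 2, 500), rng.normal(16, 3, 500)])
walker = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
print(threshold_at_specificity(sit_to_stand, walker))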
Abboud, Tammam; Schaper, Miriam; Dührsen, Lasse; Schwarz, Cindy; Schmidt, Nils Ole; Westphal, Manfred; Martens, Tobias
2016-10-01
OBJECTIVE Warning criteria for monitoring of motor evoked potentials (MEP) after direct cortical stimulation during surgery for supratentorial tumors have been well described. However, little is known about the value of MEP after transcranial electrical stimulation (TES) in predicting postoperative motor deficit when monitoring threshold level. The authors aimed to evaluate the feasibility and value of this method in glioma surgery by using a new approach for interpreting changes in threshold level involving contra- and ipsilateral MEP. METHODS Between November 2013 and December 2014, 93 patients underwent TES-MEP monitoring during resection of gliomas located close to central motor pathways but not involving the primary motor cortex. The MEP were elicited by transcranial repetitive anodal train stimulation. Bilateral MEP were continuously evaluated to assess percentage increase of threshold level (minimum voltage needed to evoke a stable motor response from each of the muscles being monitored) from the baseline set before dural opening. An increase in threshold level on the contralateral side (facial, arm, or leg muscles contralateral to the affected hemisphere) of more than 20% beyond the percentage increase on the ipsilateral side (facial, arm, or leg muscles ipsilateral to the affected hemisphere) was considered a significant alteration. Recorded alterations were subsequently correlated with postoperative neurological deterioration and MRI findings. RESULTS TES-MEP could be elicited in all patients, including those with recurrent glioma (31 patients) and preoperative paresis (20 patients). Five of 73 patients without preoperative paresis showed a significant increase in threshold level, and all of them developed new paresis postoperatively (transient in 4 patients and permanent in 1 patient). Eight of 20 patients with preoperative paresis showed a significant increase in threshold level, and all of them developed postoperative neurological deterioration (transient in 4 patients and permanent in 4 patients). In 80 patients no significant change in threshold level was detected, and none of them showed postoperative neurological deterioration. The specificity and sensitivity in this series were estimated at 100%. Postoperative MRI revealed gross-total tumor resection in 56 of 82 patients (68%) in whom complete tumor resection was attainable; territorial ischemia was detected in 4 patients. CONCLUSIONS The novel threshold criterion has made TES-MEP a useful method for predicting postoperative motor deficit in patients who undergo glioma surgery, and has been feasible in patients with preoperative paresis as well as in patients with recurrent glioma. Including contra- and ipsilateral changes in threshold level has led to a high sensitivity and specificity.
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
20 CFR 404.1641 - Standards of performance.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (a) General. The performance standards include both a target level of performance and a threshold level of performance for the State agency. The target level represents a level of performance that we and the States will work to attain in the future. The threshold level is the minimum acceptable level...
20 CFR 416.1041 - Standards of performance.
Code of Federal Regulations, 2010 CFR
2010-04-01
... performance. (a) General. The performance standards include both a target level of performance and a threshold level of performance for the State agency. The target level represents a level of performance that we and the States will work to attain in the future. The threshold level is the minimum acceptable level...
75 FR 66188 - Sentencing Guidelines for United States Courts
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
...) by striking ``five kilograms of marihuana'' and inserting ``2 grams of cocaine base''; by inserting... required to trigger the 5-year mandatory minimum term of imprisonment was increased from 5 grams to 28 grams, and the quantity threshold required to trigger the 10-year mandatory minimum term of imprisonment...
Predicting the performance of local seismic networks using Matlab and Google Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chael, Eric Paul
2009-11-01
We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
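A rough Python analogue of the modelling loop described above; the amplitude relation (peak velocity ~ k * W**a / R**b) and all constants are placeholders standing in for the published refraction-experiment equations:

import numpy as np

def peak_velocity(charge_kg, dist_km, k=1.0e-3, a=0.8, b=1.6):
    # placeholder scaling law, not the published coefficients
    return k * charge_kg**a / np.maximum(dist_km, 0.1)**b

def minimum_detectable_yield(source, stations, noise, snr=2.0, n_required=3,
                             yields=np.logspace(-1, 3, 200)):
    """source: (x, y) in km; stations: array (n, 2) in km; noise: array (n,)
    of ambient velocity noise. Returns the smallest trial yield detected at
    n_required or more stations."""
    d = np.hypot(*(np.asarray(stations) - np.asarray(source)).T)
    for w in yields:
        detected = peak_velocity(w, d) > snr * np.asarray(noise)
        if np.count_nonzero(detected) >= n_required:
            return w
    return np.inf

stations = np.array([[0, 0], [5, 2], [-3, 4], [6, -5]], float)
noise = np.full(4, 1e-6)
print(minimum_detectable_yield((2.0, 1.0), stations, noise))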
A low-noise CMOS pixel direct charge sensor, Topmetal-II-
An, Mangmang; Chen, Chufeng; Gao, Chaosong; ...
2015-12-12
In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved a <15 e⁻ analog noise and a 200 e⁻ minimum threshold for digital readout per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. Lastly, these characteristics enable its use as the charge readout device in future Time Projection Chambers without gaseous gain mechanism, which has unique advantages in low background and low rate-density experiments.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
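The threshold optimisation in both reports can be illustrated numerically: estimate the bit error probability as a function of the count threshold, averaging the "on" Poisson mean over a log-normal scintillation distribution, and pick the minimising threshold. The signal, dark-count and log-amplitude-variance values below are assumptions for the sketch:

import numpy as np
from scipy.stats import poisson

def optimum_threshold(mean_signal, mean_dark, log_amp_var, kmax=100, nmc=5000):
    rng = np.random.default_rng(0)
    # log-amplitude chi ~ N(-sigma^2, sigma^2) keeps the mean intensity fixed
    chi = rng.normal(-log_amp_var, np.sqrt(log_amp_var), nmc)
    signal = mean_signal * np.exp(2.0 * chi)          # intensity = amplitude^2
    ks = np.arange(1, kmax)
    # miss: decide "off" when counts < k although the pulse was sent
    p_miss = poisson.cdf(ks[:, None] - 1, signal[None, :] + mean_dark).mean(axis=1)
    # false alarm: counts >= k from dark counts alone
    p_false = poisson.sf(ks - 1, mean_dark)
    ber = 0.5 * (p_miss + p_false)                    # equiprobable bits assumed
    best = np.argmin(ber)
    return ks[best], ber[best]

print(optimum_threshold(mean_signal=30.0, mean_dark=5.0, log_amp_var=0.2))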
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
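A small software analogue of the optical scheme: threshold-decompose an 8-bit image into binary layers, apply the same linear (box-sum) filter to every layer, threshold the result point by point, and sum the layers back. Choosing the middle rank gives the median, rank 1 the maximum, and rank n×n the minimum; the toy image is illustrative only:

import numpy as np
from scipy.ndimage import uniform_filter

def ranked_order_filter(image, size=3, rank=None):
    img = np.asarray(image, dtype=np.int32)
    n = size * size
    if rank is None:
        rank = (n + 1) // 2                              # median
    out = np.zeros_like(img)
    for t in range(1, img.max() + 1):
        layer = (img >= t).astype(float)                 # threshold decomposition
        local_count = uniform_filter(layer, size=size) * n   # linear filtering step
        out += (local_count >= rank - 0.5).astype(np.int32)  # point threshold step
    return out

img = np.array([[10, 10, 10, 10],
                [10, 99, 10, 10],
                [10, 10, 10, 10],
                [10, 10, 10, 10]])
print(ranked_order_filter(img, size=3))                  # the spike is removed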
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2009-06-02
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2010-08-24
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
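The two-mode behaviour described in the abstract reduces to a simple decision rule; a hedged sketch follows, with threshold values and unit choices that are purely illustrative, since the patent abstract does not disclose specific numbers.

```python
# Illustrative decision rule for the two operating modes (thresholds are hypothetical).
def select_dc_bus_source(turbo_rpm, turbo_output_kw, min_rpm=45000.0, min_output_kw=5.0):
    """Mode 1: rectified turbogenerator output feeds the DC bus when RPM or electrical
    output meets the minimum threshold; Mode 2: otherwise the engine-driven generator
    (or the electric drive motor acting as a generator) supplies the bus."""
    if turbo_rpm >= min_rpm or turbo_output_kw >= min_output_kw:
        return "mode 1: turbogenerator -> rectifier -> DC bus"
    return "mode 2: engine-driven generator / drive motor -> DC bus"

print(select_dc_bus_source(turbo_rpm=30000.0, turbo_output_kw=6.2))   # output meets threshold -> mode 1
print(select_dc_bus_source(turbo_rpm=30000.0, turbo_output_kw=2.0))   # neither threshold met -> mode 2
```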
Threshold Monitoring Maps for Under-Water Explosions
NASA Astrophysics Data System (ADS)
Arora, N. S.
2014-12-01
Hydro-acoustic energy in the 1-100 Hz range from under-water explosions can easily spread for thousands of miles due to the unique properties of the deep sound channel. This channel, also known as the SOFAR channel, exists almost everywhere in the earth's oceans where the water is at least 1500 m deep. Once the energy is trapped in this channel it spreads out cylindrically, and hence experiences very little loss, as long as there is an unblocked path from source to receiver. Other losses such as absorption due to chemicals in the ocean (mainly boric acid and magnesium sulphate) are also quite minimal at these low frequencies. It is not surprising then that the International Monitoring System (IMS) maintains a global network of hydrophone stations listening on this particular frequency range. The overall objective of our work is to build a probabilistic model to detect and locate under-water explosions using the IMS network. A number of critical pieces for this model, such as travel time predictions, are already well known. We are extending the existing knowledge-base by building the remaining pieces, most crucially the models for transmission losses and detection probabilities. With a complete model for detecting under-water explosions we are able to combine it with our existing model for seismic events, NET-VISA. At the conference we will present threshold monitoring maps for explosions in the earth's oceans. Our premise is that explosive sources release an unknown fraction of their total energy into the SOFAR channel, and this trapped energy determines their detection probability at each of the IMS hydrophone stations. Our threshold monitoring maps compute the minimum amount of energy at each location that must be released into the deep sound channel such that there is a ninety percent probability that at least two of the IMS stations detect the event. We will also present results of our effort to detect and locate hydro-acoustic events. In particular, we will show results from a recent under-water volcanic eruption at the Ahyi Seamount (April-May 2014), and compare our work with the current processing, both automated and human, at the IDC.
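A threshold monitoring map of this kind can be sketched numerically. For a single grid point, the toy code below bisects on source level until the probability that at least two stations detect the event reaches 90%, using a Gaussian detection model on received level; the transmission losses, station noise thresholds, and detection-model spread are placeholder assumptions, not the NET-VISA hydro-acoustic model.

```python
# Toy threshold-monitoring computation for one grid point: find the minimum source
# level such that P(at least two stations detect) >= 0.9.
import numpy as np
from scipy.stats import norm

def p_at_least_two(p):
    """Probability that two or more of the independent detections with probabilities p occur."""
    p = np.asarray(p)
    p_none = np.prod(1 - p)
    p_one = sum(p[i] * np.prod(np.delete(1 - p, i)) for i in range(len(p)))
    return 1.0 - p_none - p_one

def min_detectable_level(transmission_loss_db, noise_thresh_db, sigma_db=3.0,
                         target=0.9, lo=150.0, hi=260.0, tol=0.1):
    while hi - lo > tol:
        src = 0.5 * (lo + hi)
        received = src - np.asarray(transmission_loss_db)
        p_det = norm.cdf((received - np.asarray(noise_thresh_db)) / sigma_db)
        if p_at_least_two(p_det) >= target:
            hi = src          # detectable at the 90% level -> try a lower source level
        else:
            lo = src
    return hi

print(min_detectable_level(transmission_loss_db=[60.0, 75.0, 90.0],
                           noise_thresh_db=[120.0, 118.0, 125.0]))
```

Evaluating this at every grid point, with path-dependent transmission losses (including blockage), would produce the kind of map described above.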
Research and development on performance models of thermal imaging systems
NASA Astrophysics Data System (ADS)
Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan
2009-07-01
Traditional ACQUIRE models perform the discrimination tasks (detection, target orientation, recognition and identification) for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers, given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe the overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; the unresolved questions in performance assessment of TIS are indicated; and, lastly, development directions for TIS performance models are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvagnini, Elena, E-mail: elena.salvagnini@uzleuven.be; Bosmans, Hilde; Struelens, Lara
Purpose: The automatic exposure control (AEC) modes of most full field digital mammography (FFDM) systems are set up to hold pixel value (PV) constant as breast thickness changes. This paper proposes an alternative AEC mode, set up to maintain some minimum detectability level, with the ultimate goal of improving object detectability at larger breast thicknesses. Methods: The default “OPDOSE” AEC mode of a Siemens MAMMOMAT Inspiration FFDM system was assessed using poly(methyl methacrylate) (PMMA) of thickness 20, 30, 40, 50, 60, and 70 mm to find the tube voltage and anode/filter combination programmed for each thickness; these beam quality settings were used for the modified AEC mode. Detectability index (d′), in terms of a non-prewhitened model observer with eye filter, was then calculated as a function of tube current-time product (mAs) for each thickness. A modified AEC could then be designed in which detectability never fell below some minimum setting for any thickness in the operating range. In this study, the value was chosen such that the system met the achievable threshold gold thickness (T_t) in the European guidelines for the 0.1 mm diameter disc (i.e., T_t ≤ 1.10 μm gold). The default and modified AEC modes were compared in terms of contrast-detail performance (T_t), calculated detectability (d′), signal-difference-to-noise ratio (SDNR), and mean glandular dose (MGD). The influence of a structured background on object detectability for both AEC modes was examined using a CIRS BR3D phantom. Computer-based CDMAM reading was used for the homogeneous case, while the images with the BR3D background were scored by human observers. Results: The default OPDOSE AEC mode maintained PV constant as PMMA thickness increased, leading to a reduction, in going from 20 to 70 mm, of 39% in SDNR and 37% in d′ for the homogeneous background; introduction of the structured BR3D plate changed these figures to 22% (SDNR) and 6% (d′), respectively. Threshold gold thickness (0.1 mm diameter disc) for the default AEC mode in the homogeneous background increased by 62% in going from 20 to 70 mm PMMA thickness; in the structured background, the increase was 39%. Implementation of the modified mode entailed an increase in mAs at PMMA thicknesses >40 mm; the modified AEC held threshold gold thickness constant above 40 mm PMMA with a maximum deviation of 5% in the homogeneous background and 3% in the structured background. SDNR was also held constant with a maximum deviation of 4% and 2% for the homogeneous and the structured background, respectively. These results were obtained with an increase of MGD between 15% and 73% going from 40 to 70 mm PMMA thickness. Conclusions: This work has proposed and implemented a modified AEC mode, tailored toward constant detectability at larger breast thickness, i.e., above 40 mm PMMA equivalent. The desired improvement in object detectability could be obtained while maintaining MGD within the European guidelines achievable dose limit. (A study designed to verify the performance of the modified mode using more clinically realistic data is currently underway.)
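The core of the modified AEC mode is a simple inversion: given the detectability achieved by the default mAs at a given thickness, raise the mAs until d′ reaches the chosen floor. The sketch below assumes quantum-noise-limited behaviour, so that d′ scales approximately with the square root of mAs; the reference d′ and mAs values are invented for illustration and are not the paper's measurements.

```python
# Sketch of choosing mAs for a minimum detectability floor, assuming d' ~ sqrt(mAs).
def mas_for_min_detectability(d_ref, mas_ref, d_min):
    if d_ref >= d_min:
        return mas_ref                        # default AEC already meets the floor
    return mas_ref * (d_min / d_ref) ** 2     # invert d' proportional to sqrt(mAs)

defaults = {20: (2.0, 40.0), 40: (1.6, 80.0), 60: (1.2, 140.0), 70: (1.0, 180.0)}  # thickness: (d', mAs), hypothetical
d_min = 1.6                                   # hypothetical detectability floor
for thickness_mm, (d_ref, mas_ref) in defaults.items():
    print(f"{thickness_mm} mm PMMA -> {mas_for_min_detectability(d_ref, mas_ref, d_min):.1f} mAs")
```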
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
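A generic version of this kind of fuzzy-entropy threshold search can be sketched as follows. The code iterates over candidate thresholds, assigns each pixel a membership to its class based on distance to the class mean (bounded by the image's own minimum and maximum gray levels, as in the patent), and returns the threshold with minimum fuzzy entropy. The Shannon-type entropy and membership function used here are standard stand-ins, not the patented simplified linear fuzzy entropy factor function.

```python
# Hedged sketch of fuzzy-entropy threshold selection (Huang-Wang style formulation).
import numpy as np

def fuzzy_entropy_threshold(img):
    g = img.ravel().astype(float)
    g_min, g_max = g.min(), g.max()
    c = g_max - g_min                            # image-dependent bounds, as in the patent
    best_t, best_h = None, np.inf
    for t in np.arange(g_min, g_max):            # candidate thresholds
        back, fore = g[g <= t], g[g > t]
        if back.size == 0 or fore.size == 0:
            continue
        mu = np.where(g <= t, back.mean(), fore.mean())
        m = 1.0 / (1.0 + np.abs(g - mu) / c)     # membership in (0.5, 1]
        # Shannon fuzzy entropy (zero where the membership is exactly 1)
        h = -(m * np.log(m) + (1 - m) * np.log(np.clip(1 - m, 1e-12, None))).mean()
        if h < best_h:
            best_t, best_h = t, h
    return best_t

img = np.concatenate([np.random.default_rng(2).normal(60, 10, 500),
                      np.random.default_rng(3).normal(170, 15, 500)]).clip(0, 255)
print("optimal threshold:", fuzzy_entropy_threshold(img))
```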
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate Effectiveness of Proposed Rule... designated threshold.\\5\\ In addition, the Exchange adopted security-type specific parameter values, such as..., Threshold Away Amount, Minimum Duration and N mult , will be made through proposed fee filings pursuant to...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-09
... methods: Electronic Submission: Submit all electronic public comments via the Federal e-Rulemaking Portal http://www.regulations.gov . To submit comments via the e-Rulemaking Portal, first click the ``submit a...-frame. The overfished threshold would also be revised. The overfished threshold or minimum stock size...
Ab initio molecular dynamics simulations of low energy recoil events in MgO
NASA Astrophysics Data System (ADS)
Petersen, B. A.; Liu, B.; Weber, W. J.; Zhang, Y.
2017-04-01
Low-energy recoil events in MgO are studied using ab initio molecular dynamics simulations to reveal the dynamic displacement processes and final defect configurations. Threshold displacement energies, Ed, are obtained for Mg and O along three low-index crystallographic directions, [100], [110], and [111]. The minimum values for Ed are found along the [110] direction consisting of the same element, either Mg or O atoms. Minimum threshold values of 29.5 eV for Mg and 25.5 eV for O, respectively, are suggested from the calculations. For other directions, the threshold energies are considerably higher, 65.5 and 150.0 eV for O along [111] and [100], and 122.5 eV for Mg along both [111] and [100] directions, respectively. These results show that the recoil events in MgO are partial-charge transfer assisted processes where the charge transfer plays an important role. There is a similar trend found in other oxide materials, where the threshold displacement energy correlates linearly with the peak partial-charge transfer, suggesting this behavior might be universal in ceramic oxides.
Mahesh, P A; Jayaraj, B S; Prabhakar, A K; Chaya, S K; Vijaysimha, R
2013-01-01
Exposure to air pollution due to combustion of biomass fuels remains one of the significant risk factors for chronic respiratory diseases such as chronic bronchitis. There is a need to identify the minimum threshold level of biomass exposure index that is significantly associated with chronic bronchitis. This study was undertaken to identify a threshold for biomass exposure index in a rural women population in Mysore district, south India. A cross-sectional survey was conducted in a representative population of Mysore and Nanjangud taluks. Eight villages each from Mysore and Nanjangud were randomly selected based on the list of villages from census 2001. A house-to-house survey was carried out by trained field workers using the Burden of Obstructive Diseases questionnaire, which evaluated biomass smoke exposure and chronic bronchitis. All women aged above 30 yr were included in the study. A total of 2011 women from Mysore and 1942 women from Nanjangud participated in the study. All women were non-smoking and used biomass fuels as the primary fuel for cooking. A threshold of biomass fuel exposure index of 60 was identified on multivariate analysis in Mysore district, after adjusting for age, passive smoking and occupational exposure to dust, as the minimum required for a significant association with chronic bronchitis. One in every 20 women in Mysore district exposed to a biomass fuel exposure index of 110 or more developed chronic bronchitis. A minimum threshold biomass exposure index of 60 is necessary for a significant risk of developing chronic bronchitis in women. The number needed to harm for developing chronic bronchitis decreases with increasing biomass exposure index, and women residing in rural Nanjangud have a higher risk of developing chronic bronchitis as compared to women in Mysore.
Palmer, Shannon B; Musiek, Frank E
2014-01-01
Temporal processing ability has been linked to speech understanding ability, and older adults often complain of difficulty understanding speech in difficult listening situations. Temporal processing can be evaluated using gap detection procedures. There is some research showing that gap detection can be evaluated using an electrophysiological procedure. However, there is currently no research establishing a gap detection threshold using the N1-P2 response. The purposes of the current study were to 1) determine gap detection thresholds in younger and older normal-hearing adults using an electrophysiological measure, 2) compare the electrophysiological gap detection threshold and behavioral gap detection threshold within each group, and 3) investigate the effect of age on each gap detection measure. This study utilized an older adult group and a younger adult group to compare performance on an electrophysiological and a behavioral gap detection procedure. The subjects in this study were 11 younger, normal-hearing adults (mean = 22 yrs) and 11 older, normal-hearing adults (mean = 64.36 yrs). All subjects completed an adaptive behavioral gap detection procedure in order to determine their behavioral gap detection threshold (BGDT). Subjects also completed an electrophysiologic gap detection procedure to determine their electrophysiologic gap detection threshold (EGDT). Older adults demonstrated significantly larger gap detection thresholds than the younger adults. However, EGDT and BGDT were not significantly different in either group. The mean difference between EGDT and BGDT for all subjects was 0.43 msec. Older adults show poorer gap detection ability when compared to younger adults. However, this study shows that gap detection thresholds can be measured using evoked potential recordings and yield results similar to a behavioral measure. American Academy of Audiology.
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
Bhandiwad, Ashwin A.; Zeddies, David G.; Raible, David W.; Rubel, Edwin W.; Sisneros, Joseph A.
2013-01-01
Zebrafish (Danio rerio) have become a valuable model for investigating the molecular genetics and development of the inner ear in vertebrates. In this study, we employed a prepulse inhibition (PPI) paradigm to assess hearing in larval wild-type (AB) zebrafish during early development at 5–6 days post-fertilization (d.p.f.). We measured the PPI of the acoustic startle response in zebrafish using a 1-dimensional shaker that simulated the particle motion component of sound along the fish's dorsoventral axis. The thresholds to startle-inducing stimuli were determined in 5–6 d.p.f. zebrafish, and their hearing sensitivity was then characterized using the thresholds of prepulse tone stimuli (90–1200 Hz) that inhibited the acoustic startle response to a reliable startle stimulus (820 Hz at 20 dB re. 1 m s−2). Hearing thresholds were defined as the minimum prepulse tone level required to significantly reduce the startle response probability compared with the baseline (no-prepulse) condition. Larval zebrafish showed greatest auditory sensitivity from 90 to 310 Hz with corresponding mean thresholds of −19 to −10 dB re. 1 m s−2, respectively. Hearing thresholds of prepulse tones were considerably lower than previously predicted by startle response assays. The PPI assay was also used to investigate the relative contribution of the lateral line to the detection of acoustic stimuli. After aminoglycoside-induced neuromast hair-cell ablation, we found no difference in PPI thresholds between treated and control fish. We propose that this PPI assay can be used to screen for novel zebrafish hearing mutants and to investigate the ontogeny of hearing in zebrafish and other fishes. PMID:23966590
Rainfall control of debris-flow triggering in the Réal Torrent, Southern French Prealps
NASA Astrophysics Data System (ADS)
Bel, Coraline; Liébault, Frédéric; Navratil, Oldrich; Eckert, Nicolas; Bellot, Hervé; Fontaine, Firmin; Laigle, Dominique
2017-08-01
This paper investigates the occurrence of debris flows due to rainfall forcing in the Réal Torrent, a very active debris flow-prone catchment in the Southern French Prealps. The study is supported by a 4-year record of flow responses and rainfall events, from three high-frequency monitoring stations equipped with geophones, flow stage sensors, digital cameras, and rain gauges measuring rainfall at 5-min intervals. The classic method of the rainfall intensity-duration (ID) threshold was used, and a specific emphasis was placed on the objective identification of rainfall events, as well as on the discrimination of flow responses observed above the ID threshold. The results show that parameters used to identify rainfall events significantly affect the ID threshold and are likely to explain part of the threshold variability reported in the literature. This is especially the case regarding the minimum duration of rain interruption (MDRI) between two distinct rainfall events. In the Réal Torrent, a 3-h MDRI appears to be representative of the local rainfall regime. A systematic increase in the ID threshold with drainage area was also observed from the comparison of the three stations, as well as from the compilation of data from experimental debris-flow catchments. A logistic regression used to separate flow responses above the ID threshold revealed that the best predictors are the 5-min maximum rainfall intensity, the 48-h antecedent rainfall, the rainfall amount and the number of days elapsed since the end of winter (used as a proxy of sediment supply). This emphasizes the critical role played by short intense rainfall sequences that are only detectable using high time-resolution rainfall records. It also highlights the significant influence of antecedent conditions and the seasonal fluctuations of sediment supply.
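The event-identification step that the MDRI controls is easy to make concrete. The sketch below splits a 5-min rainfall record into events separated by at least 3 h of dry weather and tests each event's mean intensity against a power-law ID threshold; the threshold coefficients and the synthetic record are placeholders, not the Réal Torrent values.

```python
# Sketch: MDRI-based rainfall-event separation plus an intensity-duration threshold check.
import numpy as np

def split_events(rain_mm, step_min=5, mdri_min=180):
    """Return (start, end) index pairs of rainfall events separated by >= MDRI of dry steps."""
    wet = np.flatnonzero(rain_mm > 0)
    if wet.size == 0:
        return []
    gap_steps = mdri_min // step_min
    breaks = np.flatnonzero(np.diff(wet) > gap_steps)
    starts = np.r_[wet[0], wet[breaks + 1]]
    ends = np.r_[wet[breaks], wet[-1]]
    return list(zip(starts, ends))

def exceeds_id_threshold(rain_mm, start, end, step_min=5, a=5.0, b=-0.6):
    """Mean intensity I (mm/h) vs duration D (h) against I = a * D**b (placeholder coefficients)."""
    duration_h = (end - start + 1) * step_min / 60.0
    intensity = rain_mm[start:end + 1].sum() / duration_h
    return intensity >= a * duration_h ** b

rain = np.zeros(600); rain[10:40] = 0.4; rain[300:320] = 1.2   # two synthetic bursts (5-min steps)
for s, e in split_events(rain):
    print(f"event {s}-{e}: above ID threshold = {exceeds_id_threshold(rain, s, e)}")
```

Shortening or lengthening the MDRI in this sketch merges or splits events and thereby shifts the computed durations and intensities, which is exactly the sensitivity the paper reports.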
76 FR 4820 - New Mailing Standards for Domestic Mailing Services
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... prices for their parcels when they pay postage by any of the following three methods: Permit imprint... permit imprint, meet specific mailing requirements, and whose account volume exceeds a minimum threshold... prices in a calendar year. Pay for postage using a permit imprint. Enter a minimum of 500 pieces of mail...
NASA Astrophysics Data System (ADS)
Engel, Michael; Bertoldi, Giacomo; Notarnicola, Claudia; Comiti, Francesco
2017-04-01
To assess the performance of simulated snow cover in hydrological models, it is common practice to compare simulated data with observations derived from satellite images such as MODIS. However, technical and methodological limitations such as the data availability of MODIS products, their spatial resolution, or difficulties in finding appropriate parameterisations of the model need to be addressed first. Another important assumption usually made is the threshold of minimum simulated snow depth, generally set to 10 mm of snow depth, to respect the MODIS detection thresholds for snow cover. But is such a constant threshold appropriate for complex alpine terrain? How important is the impact of different snow depth thresholds on the spatial and temporal distribution of the pixel-based overall accuracy (OA)? To address this aspect, we compared the snow covered area (SCA) simulated by the GEOtop 2.0 snow model to the daily composite 250 m EURAC MODIS SCA in the upper Saldur basin (61 km2, Eastern Italian Alps) during the period October 2011 - October 2013. Initially, we calibrated the snow model against snow depths and snow water equivalents at point scale, taken from measurements at different meteorological stations. We applied different snow depth thresholds (0 mm, 10 mm, 50 mm, and 100 mm) to obtain the simulated snow cover and assessed the changes in OA both in time (during the entire evaluation period, accumulation and melting season) and space (entire catchment and specific areas of topographic characteristics such as elevation, slope, aspect, landcover, and roughness). Results show remarkable spatial and temporal differences in OA with respect to different snow depth thresholds. Inaccuracies between simulated and observed SCA during the accumulation season September to November 2012 were located in areas with north-west aspect, slopes of 30° or little elevation differences at sub-pixel scale (-0.25 to 0 m). We obtained the best agreement with MODIS SCA for a snow depth threshold of 100 mm, leading to increased OA (> 0.8) in 13‰ of the catchment area. SCA agreement in January 2012 and 2013 was slightly limited by MODIS sensor detection due to shading effects and low illumination in areas exposed north-west to north. By contrast, agreement during the melting season in April 2013 and after the September 2013 snowfall event seemed to depend more on parameterisation than on snow depth thresholds; inaccuracies during the melting season March to June 2013 could hardly be attributed to topographic characteristics or different snow depth thresholds but rather to model parameterisation. We identified specific conditions (e.g. specific snowfall events in autumn 2012 and spring 2013) when either the MODIS data or the hydrological model was less accurate, thus justifying the need for improvements of precision in the snow cover detection algorithms or in the model's process description. In consequence, our study observations could support future snow cover evaluations in mountain areas, where spatially and temporally dynamic snow depth thresholds are transferred from the catchment scale to the regional scale. Keywords: snow cover, snow modelling, MODIS, snow depth sensitivity, alpine catchment
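The threshold-sensitivity test itself is a simple pixel-wise comparison, sketched below: the simulated snow-depth field is binarised with each candidate minimum-depth threshold and compared against the MODIS snow-cover map, and the overall accuracy is the fraction of agreeing pixels. The synthetic fields and threshold values here are illustrative only.

```python
# Sketch: overall accuracy (OA) of simulated snow cover vs. an observed cover map,
# as a function of the minimum simulated snow-depth threshold.
import numpy as np

def overall_accuracy(sim_snow_depth_mm, observed_sca, threshold_mm):
    sim_sca = sim_snow_depth_mm >= threshold_mm
    return np.mean(sim_sca == observed_sca)

rng = np.random.default_rng(4)
depth = rng.gamma(shape=1.5, scale=60.0, size=(100, 100))        # synthetic simulated depth (mm)
observed = depth + rng.normal(0, 40, depth.shape) > 80           # synthetic "MODIS-like" cover map
for thr in (0, 10, 50, 100):
    print(f"threshold {thr:3d} mm -> OA = {overall_accuracy(depth, observed, thr):.3f}")
```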
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Heat waves in Senegal : detection, characterization and associated processes.
NASA Astrophysics Data System (ADS)
Gnacoussa Sambou, Marie Jeanne; Janicot, Serge; Badiane, Daouda; Pohl, Benjamin; Dieng, Abdou L.; Gaye, Amadou T.
2017-04-01
The atmospheric configuration and synoptic evolution of patterns associated with Senegalese heat waves (HW) are examined over the period 1979-2014 using the Global Surface Summary of the Day (GSOD) observational database and ERA-Interim reanalysis. Since there is no objective and uniform definition of HW events, threshold methods based on atmospheric variables such as daily maximum (Tmax) and minimum (Tmin) temperatures and daily mean apparent temperature (AT) are used for HW threshold detection. Each criterion is related to a specific category of HW events: Tmax (warm day events), Tmin (warm night events) and AT (combining temperature and moisture). These definitions are used in order to characterize, as well as possible, the warm events over the Senegalese regions (oceanic versus continental). Statistics on the time evolution and spatial distribution of warm events are carried out over the two seasons of maximum temperature (March-May and October-November). For each season, a composite of HW events, as well as the most extended event over Senegal (as a case study), are analyzed using standard atmospheric fields (sea level pressure, geopotential height, total column water content, wind components, 2 m temperature). This study is part of the project ACASIS (https://acasis.locean-ipsl.upmc.fr/doku.php) on heat wave occurrences over the Sahel and their impact on health. Keywords: heat wave, Senegal, ACASIS.
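A minimal percentile-based detector of the Tmax-type events is sketched below: days above a calendar threshold for a minimum number of consecutive days form a heat-wave event. The 90th-percentile threshold, three-day minimum, and synthetic temperature series are generic illustrative choices, not the ACASIS definitions.

```python
# Sketch of a generic threshold-based heat-wave detector on a daily Tmax series.
import numpy as np

def heat_wave_events(tmax, percentile=90.0, min_days=3):
    threshold = np.percentile(tmax, percentile)
    hot = tmax > threshold
    events, start = [], None
    for i, h in enumerate(np.append(hot, False)):    # trailing False closes an open run
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_days:
                events.append((start, i - 1))
            start = None
    return threshold, events

tmax = 30 + 5 * np.sin(np.arange(120) / 8.0) + np.random.default_rng(5).normal(0, 1.5, 120)
thr, ev = heat_wave_events(tmax)
print(f"threshold = {thr:.1f} C, events (start, end): {ev}")
```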
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
High voltage threshold for stable operation in a dc electron gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Masahiro, E-mail: masahiro@post.kek.jp; Nishimori, Nobuyuki, E-mail: n-nishim@tagen.tohoku.ac.jp
We report clear observation of a high voltage (HV) threshold for stable operation in a dc electron gun. The HV hold-off time without any discharge is longer than many hours for operation below the threshold, while it is roughly 10 min above the threshold. The HV threshold corresponds to the minimum voltage where discharge ceases. The threshold increases with the number of discharges during HV conditioning of the gun. Above the threshold, the amount of gas desorption per discharge increases linearly with the voltage difference from the threshold. The present experimental observations can be explained by an avalanche discharge model based on the interplay between electron stimulated desorption (ESD) from the anode surface and subsequent secondary electron emission from the cathode by the impact of ionic components of the ESD molecules or atoms.
[Detection of intraorbital foreign material using MDCT].
Hoffstetter, P; Friedrich, C; Framme, C; Hoffstetter, M; Zorger, N; Stierstorfer, K; Ross, C; Uller, W; Müller-Wille, R; Rennert, J; Jung, E M; Schreyer, A G
2011-06-01
To assess the detectability of orbital foreign bodies in multidetector CT (MDCT), with a focus on glass slivers. Hounsfield units (HU) of 20 different materials were measured systematically, comprising 16 different types of glass, among them 4 different types of ophthalmic lenses. The measurements were performed using a standardized protocol with an orbital phantom scanned with 16-slice MDCT. Using the resulting density values, the smallest detectable volume was calculated. Using these data we produced slivers of 5 different glass types in the sub-millimeter range and calculated their volume. Those micro-slivers underwent another CT scan using the same protocol as mentioned above to experimentally determine and confirm the detection limit for micro-slivers made of different materials. Glass has comparatively high density values of at least 2000 HU. The density of glasses with strong refraction is significantly higher and reaches up to 12 400 HU. We calculated a minimum detectable volume of 0.07 mm³ for glass with a density of 2000 HU. Only glass slivers with a density higher than 8300 HU were experimentally detectable in the sub-millimeter range, down to a volume as small as 0.01 mm³. Less dense glass slivers could not be seen, even though their volume was above the theoretically calculated threshold for detection. Due to its high density of at least 2000 HU, glass is usually easily recognizable as an orbital foreign body. The detection threshold depends on the object's density and size and can be as low as 0.01 mm³ in the case of glass with strong refraction and thus high density. The detection of glass as an orbital foreign body appears reliable for slivers with a volume of at least 0.2 mm³ for all types of glass. © Georg Thieme Verlag KG Stuttgart · New York.
Bay, Line K.; Doyle, Jason; Logan, Murray; Berkelmans, Ray
2016-01-01
Sensitive molecular analyses show that most corals host a complement of Symbiodinium genotypes that includes thermo-tolerant types in low abundance. While tolerant symbiont types are hypothesized to facilitate tolerance to temperature and recovery from bleaching, empirical data on their distribution and relative abundance in corals under ambient and stress conditions are still rare. We quantified visual bleaching and mortality of coral hosts, along with relative abundance of C- and D-type Symbiodinium cells in 82 Acropora millepora colonies from three locations on the Great Barrier Reef transplanted to a central inshore site over a 13 month period. Our analyses reveal dynamic change in symbiont associations within colonies and among populations over time. Coral bleaching and declines in C- but not D-type symbionts were observed in transplanted corals. Survival and recovery of 25% of corals from one population was associated with either initial D-dominance or an increase in D-type symbionts that could be predicted by a minimum pre-stress D : C ratio of 0.003. One-third of corals from this population became D dominated at the bleached stage despite no initial detection of this symbiont type, but failed to recover and died in mid to late summer. These results provide a predictive threshold minimum density of background D-type symbionts in A. millepora, above which survival following extreme thermal stress is increased. PMID:27429786
The relationship between stereoacuity and stereomotion thresholds.
Cumming, B G
1995-01-01
There are in principle at least two binocular sources of information that could be used to determine the motion of an object towards or away from an observer; such motion produces changes in binocular disparities over time and also generates different image velocities in the two eyes. It has been argued in the past that stereomotion is detected by a mechanism that is independent of that which detects static disparities. More recently this conclusion has been questioned. If stereomotion detection in fact depends upon detecting disparities, there should be a clear correlation between static stereo-detection thresholds and stereomotion thresholds. If the systems are separate, there need be no such correlation. Four types of threshold measurement were performed by means of random-dot stereograms: (1) static stereo detection/discrimination; (2) stereomotion detection in random-dot stereograms (temporally uncorrelated); (3) stereomotion detection in temporally correlated random-dot stereograms; and (4) binocular detection of frontoparallel motion. Three normal subjects and five subjects with unusually high stereoacuities were studied. In addition, two manipulations were performed that altered stereomotion thresholds: changes in mean disparity, and image defocus produced by positive spectacle lenses. Across subjects and conditions, stereomotion thresholds were well correlated with stereo-discrimination thresholds. Stereomotion was poorly correlated with binocular frontoparallel-motion thresholds. These results suggest that stereomotion is detected by means of registering changes in the output of the same disparity detectors that are used to detect static disparities.
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms may segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value, and this threshold is used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with several existing algorithms, and shown to perform better than the existing algorithms.
Quantifying environmental limiting factors on tree cover using geospatial data.
Greenberg, Jonathan A; Santos, Maria J; Dobrowski, Solomon Z; Vanderbilt, Vern C; Ustin, Susan L
2015-01-01
Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: 1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? 2) How are the ELFs distributed spatially? 3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify as they require significant sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of rare data points which define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting on tree cover. However, only 1.2% of the total area appeared to be limited by the four (4) environmental factors, suggesting other unmeasured factors are limiting much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the width of limitations that species experience within their geographic range.
Picotte, Joshua J.; Coan, Michael; Howard, Stephen M.
2014-01-01
The effort to utilize satellite-based MODIS, AVHRR, and GOES fire detections from the Hazard Monitoring System (HMS) to identify undocumented fires in Florida and improve the Monitoring Trends in Burn Severity (MTBS) mapping process has yielded promising results. This method was augmented using regression tree models to identify burned/not-burned pixels (BnB) in every Landsat scene (1984–2012) in Worldwide Referencing System 2 Path/Rows 16/40, 17/39, and 18/39. The burned area delineations were combined with the HMS detections to create burned area polygons attributed with their date of fire detection. Within our study area, we processed 88,000 HMS points (2003–2012) and 1,800 Landsat scenes to identify approximately 300,000 burned area polygons. Six percent of these burned area polygons were larger than the 500-acre MTBS minimum size threshold. From this study, we conclude that the process can significantly improve understanding of fire occurrence and improve the efficiency and timeliness of assessing its impacts upon the landscape.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-15
... standard SPY options, (2) codify the minimum contract threshold requirement for the execution of Jumbo SPY... Fund (``IWM''). QQQQ, SPY and IWM are quoted in $0.01 increments for all options series. This proposed... Exchange believes that by reducing the minimum trading increments for Jumbo SPY Options to $0.01, the...
30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.
Code of Federal Regulations, 2012 CFR
2012-07-01
... threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... measures when a standard threshold shift is detected. The mine operator must, within 30 calendar days of receiving evidence or confirmation of a standard threshold shift, unless a physician or audiologist...
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate, which is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method can filter the noise of a chaotic signal well, and the intrinsic chaotic characteristics of the original signal can be recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
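The hierarchical (per-level) thresholding idea can be illustrated with an ordinary discrete wavelet transform standing in for the SWT. In the sketch below each detail level receives its own noise estimate and soft threshold; the paper's SURE-derived threshold function is replaced by the universal threshold for simplicity, so this is only a stand-in for the proposed method.

```python
# Sketch of level-wise (hierarchical) soft thresholding with a plain DWT (PyWavelets).
import numpy as np
import pywt

def hierarchical_wavelet_denoise(signal, wavelet="db8", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    denoised = [coeffs[0]]                                   # keep the approximation band
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745           # per-level noise estimate (MAD)
        thr = sigma * np.sqrt(2 * np.log(len(detail)))       # universal threshold, per level
        denoised.append(pywt.threshold(detail, thr, mode="soft"))
    return pywt.waverec(denoised, wavelet)

t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 12 * t ** 2)                      # chirp-like test signal
noisy = clean + 0.3 * np.random.default_rng(6).normal(size=t.size)
rec = hierarchical_wavelet_denoise(noisy)[: t.size]
print("input SNR :", round(10 * np.log10(np.var(clean) / np.var(noisy - clean)), 2), "dB")
print("output SNR:", round(10 * np.log10(np.var(clean) / np.var(rec - clean)), 2), "dB")
```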
Detection thresholds for gaps, overlaps, and no-gap-no-overlaps.
Heldner, Mattias
2011-07-01
Detection thresholds for gaps and overlaps, that is acoustic and perceived silences and stretches of overlapping speech in speaker changes, were determined. Subliminal gaps and overlaps were categorized as no-gap-no-overlaps. The established gap and overlap detection thresholds both corresponded to the duration of a long vowel, or about 120 ms. These detection thresholds are valuable for mapping the perceptual speaker change categories gaps, overlaps, and no-gap-no-overlaps into the acoustic domain. Furthermore, the detection thresholds allow generation and understanding of gaps, overlaps, and no-gap-no-overlaps in human-like spoken dialogue systems. © 2011 Acoustical Society of America
Sousa, Ana Constantino; Didoné, Dayane Domeneghini; Sleifer, Pricila
2017-01-01
Introduction Preterm neonates are at risk of changes in their auditory system development, which explains the need for auditory monitoring of this population. The Auditory Steady-State Response (ASSR) is an objective method that allows obtaining electrophysiological thresholds with greater applicability in the neonatal and pediatric population. Objective The purpose of this study is to compare the ASSR thresholds in preterm and term infants evaluated during two stages. Method The study included 63 normal-hearing neonates: 33 preterm and 30 term. They underwent assessment of ASSR in both ears simultaneously through insert phones at frequencies of 500 to 4000 Hz with amplitude modulation from 77 to 103 Hz. Intensity was presented at decreasing levels to detect the minimum response level. At 18 months, 26 of the 33 preterm infants returned for a new ASSR assessment and were compared with 30 full-term infants. Comparisons between groups were made according to gestational age. Results Electrophysiological thresholds were higher in preterm than in full-term neonates (p < 0.05) at the first testing. There were no significant differences between ears and gender. At 18 months, there was no difference between groups (p > 0.05) in all the variables described. Conclusion In the first evaluation, preterm neonates had higher ASSR thresholds. There was no difference at 18 months of age, showing the auditory maturation of preterm infants throughout their development. PMID:28680486
Almási, Róbert; Pethö, Gábor; Bölcskei, Kata; Szolcsányi, János
2003-01-01
An increasing-temperature hot plate (ITHP) was introduced to measure the noxious heat threshold (45.3±0.3°C) of unrestrained rats, which was reproducible upon repeated determinations at intervals of 5 or 30 min or 1 day. Morphine, diclofenac and paracetamol caused an elevation of the noxious heat threshold following i.p. pretreatment, the minimum effective doses being 3, 10 and 200 mg kg−1, respectively. Unilateral intraplantar injection of the VR1 receptor agonist resiniferatoxin (RTX, 0.048 nmol) induced a profound drop of heat threshold to the innocuous range with a maximal effect (8–10°C drop) 5 min after RTX administration. This heat allodynia was inhibited by pretreatment with morphine, diclofenac and paracetamol, the minimum effective doses being 1, 1 and 100 mg kg−1 i.p., respectively. The long-term sensory desensitizing effect of RTX was examined by bilateral intraplantar injection (0.048 nmol per paw) which produced, after an initial threshold drop, an elevation (up to 2.9±0.5°C) of heat threshold lasting for 5 days. The VR1 receptor antagonist iodo-resiniferatoxin (I-RTX, 0.05 nmol intraplantarly) inhibited by 51% the heat threshold-lowering effect of intraplantar RTX but not α,β-methylene-ATP (0.3 μmol per paw). I-RTX (0.1 or 1 nmol per paw) failed to alter the heat threshold either acutely (5–60 min) or on the long-term (5 days). The heat threshold of VR1 receptor knockout mice was not different from that of wild-type animals (45.6±0.5 vs 45.2±0.4°C). In conclusion, the RTX-induced drop of heat threshold measured by the ITHP is a novel heat allodynia model exhibiting a high sensitivity to analgesics. PMID:12746222
Lane change warning threshold based on driver perception characteristics.
Wang, Chang; Sun, Qinyu; Fu, Rui; Li, Zhen; Zhang, Qiong
2018-08-01
The Lane Change Warning (LCW) system is designed to alleviate driver workload and improve the safety of lane changes. Based on a safety threshold, the lane change warning system issues cautions to drivers. Although the system offers substantial benefits, it may disturb the driver's normal operation and affect driver judgment if the warning threshold does not conform to the driver's perception of safety. Therefore, it is essential to establish an appropriate warning threshold to enhance the accuracy and acceptability of the lane change warning system. This research aims to identify the threshold that conforms to the driver's perception of the ability to safely change lanes with a rear vehicle fast approaching. We propose a theoretical warning model of lane change based on a safe minimum distance and the deceleration of the rear vehicle. To capture the different safety levels of lane changes, 30 licensed drivers were recruited, and the extreme moments represented by driver perception characteristics were obtained from a Front Extremity Test and a Rear Extremity Test conducted on the freeway. The required deceleration of the rear vehicle corresponding to the extreme time is calculated according to the proposed model. In light of discrepancies in the deceleration in these extremity experiments, we determine two levels of a hierarchical warning system. The purpose of the primary warning is to remind drivers of the existence of potentially dangerous vehicles, and the second warning is used to warn the driver to stop changing lanes immediately. We use signal detection theory to analyze the data. Ultimately, we confirm that the first deceleration threshold is 1.5 m/s² and the second deceleration threshold is 2.7 m/s². The findings provide the basis for the algorithm design of LCW and enhance the acceptability of the intelligent system. Copyright © 2018 Elsevier Ltd. All rights reserved.
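A hedged sketch of the hierarchical warning logic follows: the deceleration the rear vehicle would need in order to keep at least a safe minimum gap is computed from constant-deceleration kinematics and compared with the two reported thresholds (1.5 and 2.7 m/s²). The safe-gap value and the kinematic simplification are assumptions for illustration, not the paper's full warning model.

```python
# Illustrative two-level lane-change warning based on required rear-vehicle deceleration.
def required_rear_deceleration(gap_m, closing_speed_mps, safe_min_gap_m=2.0):
    """Deceleration needed to absorb the closing speed before the gap shrinks to the minimum."""
    usable_gap = gap_m - safe_min_gap_m
    if closing_speed_mps <= 0:
        return 0.0                                   # rear vehicle not closing in
    if usable_gap <= 0:
        return float("inf")                          # already inside the safe minimum gap
    return closing_speed_mps ** 2 / (2.0 * usable_gap)

def warning_level(a_req, first_thr=1.5, second_thr=2.7):
    if a_req >= second_thr:
        return "second warning: abort the lane change"
    if a_req >= first_thr:
        return "first warning: potentially dangerous rear vehicle"
    return "no warning"

a = required_rear_deceleration(gap_m=25.0, closing_speed_mps=10.0)
print(f"required deceleration = {a:.2f} m/s^2 -> {warning_level(a)}")
```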
Critical oxygen levels and metabolic suppression in oceanic oxygen minimum zones.
Seibel, Brad A
2011-01-15
The survival of oceanic organisms in oxygen minimum zones (OMZs) depends on their total oxygen demand and the capacities for oxygen extraction and transport, anaerobic ATP production and metabolic suppression. Anaerobic metabolism and metabolic suppression are required for daytime forays into the most extreme OMZs. Critical oxygen partial pressures are, within a range, evolved to match the minimum oxygen level to which a species is exposed. This fact demands that low oxygen habitats be defined by the biological response to low oxygen rather than by some arbitrary oxygen concentration. A broad comparative analysis of oxygen tolerance facilitates the identification of two oxygen thresholds that may prove useful for policy makers as OMZs expand due to climate change. Between these thresholds, specific physiological adaptations to low oxygen are required of virtually all species. The lower threshold represents a limit to evolved oxygen extraction capacity. Climate change that pushes oxygen concentrations below the lower threshold (~0.8 kPa) will certainly result in a transition from an ecosystem dominated by a diverse midwater fauna to one dominated by diel migrant biota that must return to surface waters at night. Animal physiology and, in particular, the response of animals to expanding hypoxia, is a critical, but understudied, component of biogeochemical cycles and oceanic ecology. Here, I discuss the definition of hypoxia and critical oxygen levels, review adaptations of animals to OMZs and discuss the capacity for, and prevalence of, metabolic suppression as a response to temporary residence in OMZs and the possible consequences of climate change on OMZ ecology.
Rotstein, Arie; Dotan, Raffy; Zigel, Levana; Greenberg, Tally; Benyamini, Yael; Falk, Bareket
2007-12-01
The purpose of this study was to investigate the effect of pre-test carbohydrate (CHO) ingestion on anaerobic-threshold assessment using the lactate-minimum test (LMT). Fifteen competitive male distance runners capable of running 10 km in 33.5-43 min were used as subjects. LMT was performed following CHO (2 x 300 mL, 7% solution) or comparable placebo (Pl) ingestion, in a double-blind, randomized order. The LMT consisted of two high-intensity 1 min treadmill runs (17-21 km·h−1), followed by an 8 min recovery period. Subsequently, subjects performed 5 min running stages, incremented by 0.6 km·h−1 and separated by 1 min blood-sampling intervals. Tests were terminated after 3 consecutive increases in blood-lactate concentration ([La]) had been observed. Finger-tip capillary blood was sampled for [La] and blood-glucose determination 30 min before the test's onset, during the recovery phase following the 2 high-intensity runs, and following each of the subsequent 5 min stages. Heart rate (HR) and rating of perceived exertion (RPE) were recorded after each stage. The lactate-minimum speed (LMS) was determined from the individual [La]-velocity plots and was considered reflective of the anaerobic threshold. Pre-test CHO ingestion had no effect on LMS (13.19 ± 1.12 km·h−1 vs. 13.17 ± 1.08 km·h−1 in CHO and Pl, respectively), nor on [La] and glucose concentration at that speed, or on HR and RPE responses. Pre-test CHO ingestion therefore does not affect LMS or the LMT-estimated anaerobic threshold.
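One common way to read the lactate-minimum speed off the [La]-velocity plot is to fit a low-order polynomial and take its minimum; the sketch below does exactly that with invented data. The quadratic fit is a generic choice and may differ from the curve-fitting procedure actually used in the study.

```python
# Sketch: lactate-minimum speed as the vertex of a quadratic fit to [La] vs. velocity.
import numpy as np

def lactate_minimum_speed(speeds_kmh, lactate_mmol):
    a, b, c = np.polyfit(speeds_kmh, lactate_mmol, 2)   # [La] = a*v^2 + b*v + c
    return -b / (2.0 * a)                               # vertex of the parabola

speeds = np.array([11.8, 12.4, 13.0, 13.6, 14.2, 14.8])     # incremental-stage speeds (invented)
lactate = np.array([4.1, 3.4, 3.1, 3.2, 3.8, 4.9])          # U-shaped [La] response (invented)
print(f"lactate-minimum speed ~ {lactate_minimum_speed(speeds, lactate):.2f} km/h")
```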
Uribe-Leitz, Tarsicio; Esquivel, Micaela M; Molina, George; Lipsitz, Stuart R; Verguet, Stéphane; Rose, John; Bickler, Stephen W; Gawande, Atul A; Haynes, Alex B; Weiser, Thomas G
2015-09-01
We previously identified a range of 4344-5028 annual operations per 100,000 people to be related to desirable health outcomes. From this and other evidence, the Lancet Commission on Global Surgery recommends a minimum rate of 5000 operations per 100,000 people. We evaluate rates of growth and estimate the time it will take to reach this minimum surgical rate threshold. We aggregated country-level surgical rate estimates from 2004 to 2012 into the twenty-one Global Burden of Disease (GBD) regions. We calculated mean rates of surgery proportional to population size for each year and assessed the rate of growth over time. We then extrapolated the time it will take each region to reach a surgical rate of 5000 operations per 100,000 population based on linear rates of change. All but two regions experienced growth in their surgical rates during the past 8 years. Fourteen regions did not meet the recommended threshold in 2012. If surgical capacity continues to grow at current rates, seven regions will not meet the threshold by 2035. Eastern Sub-Saharan Africa will not reach the recommended threshold until 2124. The rates of growth in surgical service delivery are exceedingly variable. At current rates of surgical and population growth, 6.2 billion people (73% of the world's population) will be living in countries below the minimum recommended rate of surgical care in 2035. A strategy for strengthening surgical capacity is essential if these targets are to be met in a timely fashion as part of the integrated health system development.
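The extrapolation is straightforward arithmetic: under a linear growth assumption, the year a region crosses the 5000-per-100,000 threshold follows from its current rate and annual increase. In the sketch below the numbers are placeholders (region B's are chosen so that the threshold is reached around 2124, mirroring the figure quoted for Eastern Sub-Saharan Africa), not GBD estimates.

```python
# Sketch: year the minimum surgical-rate threshold is reached under linear growth.
def year_threshold_reached(rate_now, annual_increase, base_year=2012, threshold=5000.0):
    if rate_now >= threshold:
        return base_year
    if annual_increase <= 0:
        return None                                   # never reached at current trends
    return base_year + (threshold - rate_now) / annual_increase

for region, (rate, growth) in {"region A": (3500.0, 120.0),
                               "region B": (800.0, 37.5)}.items():
    year = year_threshold_reached(rate, growth)
    print(region, "->", "never" if year is None else round(year, 1))
```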
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, tailored to the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
Ab initio molecular dynamics simulations of low energy recoil events in MgO
Petersen, B. A.; Liu, B.; Weber, W. J.; ...
2017-01-11
In this paper, low-energy recoil events in MgO are studied using ab initio molecular dynamics simulations to reveal the dynamic displacement processes and final defect configurations. Threshold displacement energies, Ed, are obtained for Mg and O along three low-index crystallographic directions, [100], [110], and [111]. The minimum values for Ed are found along the [110] direction consisting of the same element, either Mg or O atoms. Minimum threshold values of 29.5 eV for Mg and 25.5 eV for O, respectively, are suggested from the calculations. For other directions, the threshold energies are considerably higher, 65.5 and 150.0 eV for O along [111] and [100], and 122.5 eV for Mg along both [111] and [100] directions, respectively. These results show that the recoil events in MgO are partial-charge transfer assisted processes where the charge transfer plays an important role. Finally, there is a similar trend found in other oxide materials, where the threshold displacement energy correlates linearly with the peak partial-charge transfer, suggesting this behavior might be universal in ceramic oxides.
Elevation of pain threshold by vaginal stimulation in women.
Whipple, B; Komisaruk, B R
1985-04-01
In 2 studies with 10 women each, vaginal self-stimulation significantly increased the threshold to detect and tolerate painful finger compression, but did not significantly affect the threshold to detect innocuous tactile stimulation. The vaginal self-stimulation was applied with a specially designed pressure transducer assembly to produce a report of pressure or pleasure. In the first study, 6 of the women perceived the vaginal stimulation as producing pleasure. During that condition, the pain tolerance threshold increased significantly by 36.8% and the pain detection threshold increased significantly by 53%. A second study utilized other types of stimuli. Vaginal self-stimulation perceived as pressure significantly increased the pain tolerance threshold by 40.3% and the pain detection threshold by 47.4%. In the second study, when the vaginal stimulation was self-applied in a manner that produced orgasm, the pain tolerance threshold and pain detection threshold increased significantly by 74.6% and 106.7% respectively, while the tactile threshold remained unaffected. A variety of control conditions, including various types of distraction, did not significantly elevate pain or tactile thresholds. We conclude that in women, vaginal self-stimulation decreases pain sensitivity, but does not affect tactile sensitivity. This effect is apparently not due to painful or non-painful distraction.
NASA Astrophysics Data System (ADS)
Lattante, Sandro; De Giorgi, Maria Luisa; Pasini, Mariacecilia; Anni, Marco
2017-10-01
Amongst the different optoelectronic applications of conjugated polymers, the development of new active materials for optically pumped organic lasers is still an open question, particularly in the blue-near-UV spectral range. We investigate the emission properties of poly[(9,9-dioctylfluorene-2,7-diyl)-alt-p-phenylene] (PFP) neat films under nanosecond pumping. We demonstrate that, thanks to the introduction of a phenylene moiety between two fluorene units, it is possible to obtain Amplified Spontaneous Emission (ASE) with a lower threshold and a blue-shifted wavelength with respect to poly(9,9-dioctylfluorene) (PFO). We demonstrate efficient ASE with a minimum threshold as low as 23 μJ cm−2 and a minimum ASE wavelength of 436 nm. A maximum net optical gain of about 26 cm−1 is measured at an excitation density of 0.23 mJ cm−2. These results make PFP a good active material for optically pumped deep-blue organic lasers.
Quantifying ecological thresholds from response surfaces
Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh
2011-01-01
Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...
[Microbial air monitoring in operating theatre: active and passive samplings].
Pasquarella, C; Masia, M D; Nnanga, Nga; Sansebastiano, G E; Savino, A; Signorelli, C; Veronesi, L
2004-01-01
Microbial air contamination was evaluated in 11 operating theatres using active and passive samplings. SAS (Surface Air System) air sampling was used to evaluate cfu/m3 and settle plates were used to measure the index of microbial air contamination (IMA). Active and passive samplings were performed simultaneously on three different days, at three different times (before, during and after the surgical activity). Two points were monitored (patient area and perimeter of the operating theatre). Moreover, the cfu/m3 were evaluated at the air inlet of the conditioner system. 74.7% of samplings performed at the air inlet and 66.7% of the samplings performed at the patient area before the beginning of the surgical activity (at rest) exceeded the 35 cfu/m3 used as threshold value. 100% of IMA values exceeded the threshold value of 5. Using both active and passive sampling, the microbial contamination was shown to increase significantly during activity. The cfu values were higher at the patient area than at the perimeter of the operating theatre. Mean values of the cfu/m3 during activity at the patient area ranged from a minimum of 61+/-41 cfu/m3 to a maximum of 242+/-136 cfu/m3; IMA values ranged from a minimum of 19+/-10 to a maximum of 129+/-60. 15.2% of samplings performed at the patient area using SAS and 75.8% of samplings performed using settle plates exceeded the threshold values of 180 cfu/m3 and 25 respectively, with a significant difference between the two percentages. The highest values were found in the operating theatre with inadequate structural and managerial conditions. These findings confirm that the microbiological quality of air may be considered a mirror of the hygienic conditions of the operating theatre. Settle plates proved to be more sensitive in detecting the increase of microbial air contamination related to conditions that could compromise the quality of the air in operating theatres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liss, Stephanie A.; Brown, Richard S.; Deters, Katherine A.
A cylindrical acoustic transmitter (AT; 0.2 g) has been developed for injection into the peritoneum of fish. Laboratory studies can provide tagging guidelines to minimize the effect of implantation techniques and transmitter burden (relative weight of the transmitter to the weight of the fish) in fish before a transmitter is used in field studies. The goal of this study was to examine response variables (mortality, transmitter expulsion, growth, wound area) of juvenile Chinook Salmon (Oncorhynchus tshawytscha; 65–104 mm fork length [FL]) injected with an AT across a wide range of sizes that could lead to a guideline for minimizing tagging effects. The overarching goal was to determine a minimum size threshold for fish that can be injected, while minimizing adverse transmitter effects. Juveniles (n = 700) were separated into four treatments: (1) acoustic transmitter injection (AT), (2) AT and a passive integrated transponder tag injection (AT+PIT), (3) visual implant elastomer injection (Marked control), and (4) unmarked (Unmarked control). Fish were evaluated weekly for four weeks, and again at the end of the study (60 d post-tagging). Fish injected with an AT or an AT+PIT experienced greater mortality than Marked controls. By 60 d post-tagging, transmitter expulsion was 44% for AT fish and 20% for AT+PIT fish. Fish injected with an AT or an AT+PIT grew (FL and weight gain) significantly less than Marked controls, and no minimum size thresholds were detected. Finally, initial size (FL) significantly affected wound area in AT and AT+PIT fish. A size threshold was only identified on Day 7 (85.1 mm) for AT+PIT fish, indicating that wound areas in fish < 85.1 mm were larger than wound areas of fish > 85.1 mm. This research suggests that injecting juveniles with an AT or an AT+PIT had a greater effect on smaller fish than larger fish.
49 CFR 38.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2011 CFR
2011-10-01
...)(1) Doors shall have a minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions...
49 CFR 38.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2014 CFR
2014-10-01
...)(1) Doors shall have a minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions...
36 CFR § 1192.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2013 CFR
2013-07-01
... have a minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions of hinges or...
36 CFR 1192.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2012 CFR
2012-07-01
... minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions of hinges or other operating...
49 CFR 38.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2013 CFR
2013-10-01
...)(1) Doors shall have a minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions...
36 CFR 1192.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2014 CFR
2014-07-01
... minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions of hinges or other operating...
36 CFR 1192.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions of hinges or other operating...
49 CFR 38.153 - Doors, steps and thresholds.
Code of Federal Regulations, 2012 CFR
2012-10-01
...)(1) Doors shall have a minimum clear width when open of 30 inches (760 mm), measured from the lowest step to a height of at least 48 inches (1220 mm), from which point they may taper to a minimum width of 18 inches (457 mm). The clear width may be reduced by a maximum of 4 inches (100 mm) by protrusions...
National Earthquake Information Center Seismic Event Detections on Multiple Scales
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P-and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett. , 83, 531-540, doi: 10.1785/gssrl.83.3.531.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
NASA Astrophysics Data System (ADS)
Virro, A. L.; Eliseev, P. G.; Lyuk, P. A.; Fridental, Ya K.; Khaller, Yu E.
1988-11-01
An experimental dependence of the threshold current density jth on the thickness of the active region was used to find the reduced threshold current density for AlGaAsSb (λ = 1.59 μm, T = 295 K) lasers: this density was 8 kA·cm-2·μm-1. The minimum threshold current density was jth = 1.8 kA/cm2. Wide-contact lasers exhibited cw operation down to 175 K.
Improvements of low-level radioxenon detection sensitivity by a state-of-the art coincidence setup.
Cagniant, A; Le Petit, G; Gross, P; Douysset, G; Richard-Bressand, H; Fontaine, J-P
2014-05-01
The ability to quantify isotopic ratios of (135)Xe, (133m)Xe, (133)Xe and (131m)Xe is essential for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In order to improve detection limits, CEA has developed a new on-site setup using photon/electron coincidence (Le Petit et al., 2013. J. Radioanal. Nucl. Chem., DOI: 10.1007/s10697-013-2525-8). Alternatively, the electron detection cell equipped with large silicon chips (PIPS) can be used with an HPGe detector for laboratory analysis purposes. This setup allows the measurement of β/γ coincidences for the detection of (133)Xe and (135)Xe, and of K-shell Conversion Electron (K-CE)/X-ray coincidences for the detection of (131m)Xe, (133m)Xe and (133)Xe as well. A good energy resolution of 11 keV at 130 keV and a low energy threshold of 29 keV for the electron detection were obtained. This provides direct discrimination between K-CE from (133)Xe, (133m)Xe and (131m)Xe. The estimated Minimum Detectable Activity (MDA) for (131m)Xe is on the order of 1 mBq over a 4-day measurement. An analysis of an environmental radioxenon sample using this method is shown.
The perception of complex tones by a false killer whale (Pseudorca crassidens).
Yuen, Michelle M L; Nachtigall, Paul E; Breese, Marlee; Vlachos, Stephanie A
2007-03-01
Complex tonal whistles are frequently produced by some odontocete species. However, no experimental evidence exists regarding the detection of complex tones or the discrimination of harmonic frequencies by a marine mammal. The objectives of this investigation were to examine the ability of a false killer whale to discriminate pure tones from complex tones and to determine the minimum intensity level of a harmonic tone required for the whale to make the discrimination. The study was conducted with a go/no-go modified staircase procedure. The different stimuli were complex tones with a fundamental frequency of 5 kHz with one to five harmonic frequencies. The results from this complex tone discrimination task demonstrated: (1) that the false killer whale was able to discriminate a 5 kHz pure tone from a complex tone with up to five harmonics, and (2) that discrimination thresholds or minimum intensity levels exist for each harmonic combination measured. These results indicate that both frequency level and harmonic content may have contributed to the false killer whale's discrimination of complex tones.
Precipitation thresholds for triggering floods in Corgo hydrographic basin (Northern Portugal)
NASA Astrophysics Data System (ADS)
Santos, Monica; Fragoso, Marcelo
2016-04-01
Precipitation is a major cause of natural hazards and is therefore closely related to flood events (Borga et al., 2011; Gaál et al., 2014; Wilhelmi & Morss, 2013). The severity of a precipitation event and its potential damage depend not only on the total amount of rain but also on the intensity and duration of the event (Gaál et al., 2014). In this work, thresholds were established based on critical amount/duration combinations of precipitation for flood events, using daily rainfall data for the Corgo hydrographic basin in northern Portugal. In the Corgo basin, 31 flood events were recorded between 1865 and 2011 (Santos et al., 2015; Zêzere et al., 2014). We determined the minimum, maximum and pre-warning thresholds that define the boundaries above which an event may occur. Additionally, we applied these thresholds to flood events that occurred in the past in the study basin. The results show that the ratio between flood events and precipitation events above the minimum threshold indicates a relatively low probability of a flood occurring. This may be related to the reduced number of flood events considered (only those reported by the media as having produced some type of damage). The maximum threshold is not useful for flood forecasting, since the majority of true positives fall below this limit. The retrospective analysis of the defined thresholds suggests that the minimum and pre-warning thresholds are well adjusted. The application of rainfall thresholds contributes to minimizing possible situations of pre-crisis or immediate crisis, reducing the consequences and the resources involved in the emergency response to flood events. References Borga, M., Anagnostou, E. N., Blöschl, G., & Creutin, J. D. (2011). Flash flood forecasting, warning and risk management: the HYDRATE project. Environmental Science & Policy, 14(7), 834-844. doi: 10.1016/j.envsci.2011.05.017 Gaál, L., Molnar, P., & Szolgay, J. (2014). Selection of intense rainfall events based on intensity thresholds and lightning data in Switzerland. Hydrol. Earth Syst. Sci., 18(5), 1561-1573. doi: 10.5194/hess-18-1561-2014 Santos, M., Santos, J. A., & Fragoso, M. (2015). Historical damaging flood records for 1871-2011 in Northern Portugal and underlying atmospheric forcings. Journal of Hydrology, 530, 591-603. doi: 10.1016/j.jhydrol.2015.10.011 Wilhelmi, O. V., & Morss, R. E. (2013). Integrated analysis of societal vulnerability in an extreme precipitation event: A Fort Collins case study. Environmental Science & Policy, 26, 49-62. doi: 10.1016/j.envsci.2012.07.005 Zêzere, J. L., Pereira, S., Tavares, A. O., Bateira, C., Trigo, R. M., Quaresma, I., Santos, P. P., Santos, M., & Verde, J. (2014). DISASTER: a GIS database on hydro-geomorphologic disasters in Portugal. Nat. Hazards, 1-30. doi: 10.1007/s11069-013-1018-y
Noise frame duration, masking potency and whiteness of temporal noise.
Kukkonen, Heljä; Rovamo, Jyrki; Donner, Kristian; Tammikallio, Marja; Raninen, Antti
2002-09-01
Because of the limited contrast range, increasing the duration of the noise frame is often the only option for increasing the masking potency of external, white temporal noise. This, however, reduces the high-frequency cutoff beyond which noise is no longer white. This study was conducted to determine the longest noise frame duration that produces the strongest masking effect and still mimics white noise on the detection of sinusoidal flicker. Contrast energy thresholds (E(th)) were measured for flicker at 1.25 to 20 Hz in strong, purely temporal (spatially uniform), additive, external noise. The masking power of white external noise, characterized by its spectral density at zero frequency N0, increases with the duration of the noise frame. For short noise frame durations, E(th) increased in direct proportion to N0, keeping the nominal signal-to-noise ratio [SNR = (E(th)/N0)(0.5)] constant at threshold. The masking effect thus increased with the duration of the noise frame and the noise mimicked white noise. When noise frame duration and N0 increased further, the nominal SNR at threshold started to decrease, indicating that noise no longer mimicked white noise. The minimum number of noise frames per flicker cycle needed to mimic white noise decreased with increasing flicker frequency from 8.3 at 1.25 Hz to 1.6 at 20 Hz. The critical high-frequency cutoff of detection-limiting temporal noise in terms of noise frames per signal cycle depends on the temporal frequency of the signal. This is opposite to the situation in the spatial domain and must be taken into consideration when temporal signals are masked with temporal noise.
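A minimal sketch, assuming the relation quoted in the abstract above: the nominal signal-to-noise ratio at threshold is SNR = (E_th/N0)^0.5, so white-noise-like masking corresponds to SNR staying constant as N0 grows with noise frame duration. The numerical values below are illustrative only.

    import numpy as np

    def nominal_snr(e_th, n0):
        """Nominal SNR at threshold from contrast energy threshold E_th and
        noise spectral density N0, as defined in the abstract above."""
        return np.sqrt(e_th / n0)

    # If E_th rises in direct proportion to N0, the SNR is unchanged,
    # i.e. the external noise still mimics white noise:
    n0 = np.array([1.0, 2.0, 4.0])        # arbitrary units (illustrative)
    e_th_white = 10.0 * n0                # proportional growth (illustrative)
    print(nominal_snr(e_th_white, n0))    # constant -> white-noise behaviour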
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set, spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of these observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series for each time interval over which the threshold is exceeded. The distribution of burst sizes is two component with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find[1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has a functional form which is exponential. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables are changing between and across different solar cycles. This methodology[2] estimates how different quantiles of the distribution, or equivalently, how the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett.,42 doi:10.1002/2015GL066277 [2] Chapman, S. C., D. A. Stainforth, N. W. Watkins, (2013) On estimating long term local climate trends , Phil. Trans. Royal Soc., A,371 20120287 DOI:10.1098/rsta.2012.0287
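A minimal sketch of the burst construction described above: threshold the AE time series and, for each contiguous interval of exceedance, sum the excess over the threshold. The function and variable names are assumptions for illustration.

    import numpy as np

    def burst_sizes(ae, threshold):
        """Sum the excess of a time series over `threshold` within each contiguous
        interval of exceedance, following the burst definition quoted above."""
        above = ae > threshold
        edges = np.diff(above.astype(int))          # +1 at run starts, -1 at run ends
        starts = np.flatnonzero(edges == 1) + 1
        ends = np.flatnonzero(edges == -1) + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, above.size]
        return np.array([np.sum(ae[s:e] - threshold) for s, e in zip(starts, ends)])

    # e.g. burst_sizes(ae_index_nT, 1000.0) for the ~1000 nT crossover discussed above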
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
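A minimal sketch of the second-derivative criterion that the abstract reports as most accurate, assuming a 1-D intensity profile taken across a bright fluorescent object on a dark background; the helper name and usage are illustrative assumptions.

    import numpy as np

    def second_derivative_threshold(profile):
        """Return the intensity at the minimum of the second derivative of a 1-D
        edge profile (bright object falling to dark background); a sketch of the
        second-derivative threshold selection described above."""
        d1 = np.gradient(profile.astype(float))
        d2 = np.gradient(d1)
        edge_index = np.argmin(d2)     # most concave-down point of the falling edge
        return profile[edge_index]

    # Usage sketch: profile = image[row, :] across a microsphere; pixels with
    # intensity >= the returned threshold are counted as object when estimating area.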
Linking the micro and macro: L-H transition dynamics and threshold physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malkov, M. A., E-mail: mmalkov@ucsd.edu; Diamond, P. H.; Miki, K.
2015-03-15
The links between the microscopic dynamics and macroscopic threshold physics of the L→H transition are elucidated. Emphasis is placed on understanding the physics of power threshold scalings, and especially on understanding the minimum in the power threshold as a function of density, P_thr(n). By extending a numerical 1D model to evolve both electron and ion temperatures, including collisional coupling, we find that the decrease in P_thr(n) along the low-density branch is due to the combination of an increase in collisional electron-to-ion energy transfer and an increase in the heating fraction coupled to the ions. Both processes strengthen the edge diamagnetic electric field needed to lock in the mean electric field shear for the L→H transition. The increase in P_thr(n) along the high-density branch is due to the increase with ion collisionality of damping of turbulence-driven shear flows. Turbulence-driven shear flows are needed to trigger the transition by extracting energy from the turbulence. Thus, we identify the critical transition physics components of the separatrix ion heat flux and the zonal flow excitation. The model reveals a power threshold minimum in density scans as a crossover between the threshold decrease supported by an increase in heat fraction received by ions (directly or indirectly, from electrons) and a threshold increase, supported by the rise in shear flow damping. The electron/ion heating mix emerges as important to the transition, in that it, together with electron-ion coupling, regulates the edge diamagnetic electric field shear. The importance of possible collisionless electron-ion heat transfer processes is explained.
The globular cluster-dark matter halo connection
NASA Astrophysics Data System (ADS)
Boylan-Kolchin, Michael
2017-12-01
I present a simple phenomenological model for the observed linear scaling of the stellar mass in old globular clusters (GCs) with z = 0 halo mass in which the stellar mass in GCs scales linearly with progenitor halo mass at z = 6 above a minimum halo mass for GC formation. This model reproduces the observed M_GCs-M_halo relation at z = 0 and results in a prediction for the minimum halo mass at z = 6 required for hosting one GC: M_min(z = 6) = 1.07 × 10^9 M⊙. Translated to z = 0, the mean threshold mass is M_halo(z = 0) ≈ 2 × 10^10 M⊙. I explore the observability of GCs in the reionization era and their contribution to cosmic reionization, both of which depend sensitively on the (unknown) ratio of GC birth mass to present-day stellar mass, ξ. Based on current detections of z ≳ 6 objects with M_1500 < -17, values of ξ > 10 are strongly disfavoured; this, in turn, has potentially important implications for GC formation scenarios. Even for low values of ξ, some observed high-z galaxies may actually be GCs, complicating estimates of reionization-era galaxy ultraviolet luminosity functions and constraints on dark matter models. GCs are likely important reionization sources if 5 ≲ ξ ≲ 10. I also explore predictions for the fraction of accreted versus in situ GCs in the local Universe and for descendants of systems at the halo mass threshold of GC formation (dwarf galaxies). An appealing feature of the model presented here is the ability to make predictions for GC properties based solely on dark matter halo merger trees.
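A minimal sketch of the scaling described above: GC stellar mass grows linearly with z = 6 progenitor halo mass above the quoted minimum mass. The minimum mass is taken from the abstract, but the normalization constant and the 2 × 10^5 M⊙ single-cluster mass used to set it are illustrative assumptions, not values from the paper.

    M_MIN_Z6 = 1.07e9      # minimum z = 6 halo mass for hosting one GC (from the abstract)
    ETA = 2e5 / M_MIN_Z6   # assumed normalization: a minimum-mass halo hosts roughly one
                           # 2e5-solar-mass cluster (illustrative only)

    def gc_stellar_mass(m_halo_z6):
        """Stellar mass in old GCs (solar masses) for a z = 6 progenitor halo mass,
        zero below the minimum-mass threshold; a sketch of the linear model above."""
        return ETA * m_halo_z6 if m_halo_z6 >= M_MIN_Z6 else 0.0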
A Short-term ESPERTA-based Forecast Tool for Moderate-to-extreme Solar Proton Events
NASA Astrophysics Data System (ADS)
Laurenza, M.; Alberti, T.; Cliver, E. W.
2018-04-01
The ESPERTA (Empirical model for Solar Proton Event Real Time Alert) forecast tool has a Probability of Detection (POD) of 63% for all >10 MeV events with proton peak intensity ≥10 pfu (i.e., ≥S1 events, S1 referring to minor storms on the NOAA Solar Radiation Storms scale), from 1995 to 2014 with a false alarm rate (FAR) of 38% and a median (minimum) warning time (WT) of ∼4.8 (0.4) hr. The NOAA space weather scale includes four additional categories: moderate (S2), strong (S3), severe (S4), and extreme (S5). As S1 events have only minor impacts on HF radio propagation in the polar regions, the effective threshold for significant space radiation effects appears to be the S2 level (100 pfu), above which both biological and space operation impacts are observed along with increased effects on HF propagation in the polar regions. We modified the ESPERTA model to predict ≥S2 events and obtained a POD of 75% (41/55) and an FAR of 24% (13/54) for the 1995–2014 interval with a median (minimum) WT of ∼1.7 (0.2) hr based on predictions made at the time of the S1 threshold crossing. The improved performance of ESPERTA for ≥S2 events is a reflection of the big flare syndrome, which postulates that the measures of the various manifestations of eruptive solar flares increase as one considers increasingly larger events.
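The quoted skill scores follow from the usual contingency-table definitions, consistent with the hit and false-alarm counts given in the abstract; a quick check of the arithmetic (the function names are ours).

    def pod(hits, misses):
        """Probability of detection: fraction of observed events that were predicted."""
        return hits / (hits + misses)

    def far(false_alarms, hits):
        """False alarm ratio: fraction of predictions that did not verify."""
        return false_alarms / (false_alarms + hits)

    # Counts quoted above for >=S2 events, 1995-2014:
    print(round(pod(41, 55 - 41), 2))   # 0.75 -> 75% probability of detection (41/55)
    print(round(far(13, 41), 2))        # 0.24 -> 24% false alarm ratio (13/54)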
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. All unique floor types of the indoor environment are considered and a single image is stored for each unique floor type; these floor images serve as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. In order to check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, the floor has changed; otherwise, there is an obstacle. The proposed algorithm works in real time, and 96% accuracy has been achieved.
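A minimal sketch of the decision rule just described, assuming grayscale frames as numpy arrays; the function names and the single shared threshold α are illustrative assumptions.

    import numpy as np

    def mse(a, b):
        """Mean square error between two same-sized grayscale frames."""
        return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

    def classify_frame(current, previous, floor_refs, alpha):
        """Return 'clear', 'floor_changed' or 'obstacle' following the rule above."""
        if mse(current, previous) <= alpha:
            return "clear"                 # next frame similar to the previous one
        # large change: either the floor type changed or an obstacle appeared
        if min(mse(current, ref) for ref in floor_refs) <= alpha:
            return "floor_changed"         # frame matches one of the stored floor images
        return "obstacle"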
Karczmarski, Leszek; Huang, Shiang-Lin; Chan, Stephen C Y
2017-02-23
Defining demographic and ecological thresholds of population persistence can assist in informing conservation management. We undertook such analyses for the Indo-Pacific humpback dolphin (Sousa chinensis) in the Pearl River Delta (PRD) region, southeast China. We use adult survival estimates for assessments of population status and annual rate of change. Our estimates indicate that, given a stationary population structure and a minimal-risk scenario, ~2000 individuals (minimum viable population at carrying capacity, MVP_k) can maintain population persistence across 40 generations. However, under the current population trend (~2.5% decline/annum), the population is fast approaching its viability threshold and may soon face effects of demographic stochasticity. The population demographic trajectory and the minimum area of critical habitat (MACH) that could prevent stochastic extinction are both highly sensitive to fluctuations in adult survival. For a hypothetical stationary population, MACH should approximate 3000 km2. However, this estimate increases four-fold with a 5% increase of adult mortality and exceeds the size of PRD when calculated for the current population status. On the other hand, cumulatively all current MPAs within PRD fail to secure the minimum habitat requirement to accommodate a sufficiently viable population size. Our findings indicate that the PRD population is doomed to extinction unless effective conservation measures can rapidly reverse the current population trend.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
NASA Astrophysics Data System (ADS)
Elangovan, Premkumar; Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Wells, Kevin
2018-05-01
This work investigates the detection performance of specialist and non-specialist observers for different targets in 2D-mammography and digital breast tomosynthesis (DBT) using the OPTIMAM virtual clinical trials (VCT) Toolbox and a 4-alternative forced choice (4AFC) assessment paradigm. Using 2D-mammography and DBT images of virtual breast phantoms, we compare the detection limits of simple uniform spherical targets and irregular solid masses. Target diameters of 4 mm and 6 mm have been chosen to represent target sizes close to the minimum detectable size found in breast screening, across a range of controlled contrast levels. The images were viewed by a set of specialist observers (five medical physicists and six experienced clinical readers) and five non-specialists. Combined results from both observer groups indicate that DBT has a significantly lower detectable threshold contrast than 2D-mammography for small masses (4 mm: 2.1% [DBT] versus 6.9% [2D]; 6 mm: 0.7% [DBT] versus 3.9% [2D]) and spheres (4 mm: 2.9% [DBT] versus 5.3% [2D]; 6 mm: 0.3% [DBT] versus 2.2% [2D]) (p < 0.0001). Both observer groups found spheres significantly easier to detect than irregular solid masses for both sizes and modalities (p < 0.0001) (except 4 mm DBT). The detection performances of specialist and non-specialist observers were generally found to be comparable, where each group marginally outperformed the other in particular detection tasks. Within the specialist group, the clinical readers performed better than the medical physicists with irregular masses (p < 0.0001). The results indicate that using spherical targets in such studies may produce over-optimistic detection thresholds compared to more complex masses, and that the superiority of DBT for detecting masses over 2D-mammography has been quantified. The results also suggest specialist observers may be supplemented by non-specialist observers (with training) in some types of 4AFC studies.
Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.
de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique
2012-07-01
Experiencing pain at newborn age may have consequences on one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds: 30.3 (1.4) versus 28.4 (1.7) (Cohen's d = 1.2, P = 0.001) and warm detection thresholds: 33.9 (1.9) versus 35.6 (2.1) (Cohen's d = 0.8, P = 0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r = 0.64, warm: r = -0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ.
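The quoted effect sizes are consistent with the pooled-standard-deviation form of Cohen's d; a quick check under that assumption (the choice of formula is ours, not stated in the abstract).

    def cohens_d(m1, sd1, m2, sd2):
        """Cohen's d with a simple two-group pooled SD (equal weights assumed)."""
        pooled = ((sd1 ** 2 + sd2 ** 2) / 2) ** 0.5
        return (m1 - m2) / pooled

    print(round(cohens_d(30.3, 1.4, 28.4, 1.7), 1))   # cold detection: ~1.2
    print(round(cohens_d(35.6, 2.1, 33.9, 1.9), 1))   # warm detection: ~0.8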
Gyo, K; Yanagihara, N
1986-01-01
Ossicular mobility was assessed by direct coupling of a piezoelectric ceramic vibrator to the ossicles during middle ear surgery. The sites excited were body of the incus, head of the stapes, and footplate of the stapes through a hydroxyapatite ceramic strut. The threshold of the vibratory hearing was determined by the patient's response as a minimum audition, and the vibration threshold was obtained by subtracting the preoperative bone conduction threshold from the vibratory hearing threshold. The results were analyzed by the state of hearing after the operation, which revealed that a patient with a good vibration threshold during the operation had a tendency to get good postoperative hearing. This may mean that postoperative hearing can be predicted to some extent during the operation by the measurement of ossicular mobility.
Optimal glottal configuration for ease of phonation.
Lucero, J C
1998-06-01
Recent experimental studies have shown the existence of optimal values of the glottal width and convergence angle, at which the phonation threshold pressure is minimum. These results indicate the existence of an optimal glottal configuration for ease of phonation, not predicted by the previous theory. In this paper, the origin of the optimal configuration is investigated using a low dimensional mathematical model of the vocal fold. Two phenomena of glottal aerodynamics are examined: pressure losses due to air viscosity, and air flow separation from a divergent glottis. The optimal glottal configuration seems to be a consequence of the combined effect of both factors. The results agree with the experimental data, showing that the phonation threshold pressure is minimum when the vocal folds are slightly separated in a near rectangular glottis.
Bachman, Daniel; Chen, Zhijiang; Fedosejevs, Robert; Tsui, Ying Y; Van, Vien
2013-05-06
We demonstrate the fine-tuning capability of femtosecond laser surface modification as a permanent trimming mechanism for silicon photonic components. Silicon microring resonators with a 15 µm radius were irradiated with single 400 nm wavelength laser pulses at varying fluences. Below the laser ablation threshold, surface amorphization of the crystalline silicon waveguides yielded a tuning rate of 20 ± 2 nm/(J·cm-2) with a minimum resonance wavelength shift of 0.10 nm. Above that threshold, ablation yielded a minimum resonance shift of -1.7 nm. There was some increase in waveguide loss for both trimming mechanisms. We also demonstrated the application of the method by using it to permanently correct the resonance mismatch of a second-order microring filter.
Sensing of Substrate Vibrations in the Adult Cicada Okanagana rimosa (Hemiptera: Cicadidae).
Alt, Joscha A; Lakes-Harlan, Reinhard
2018-05-01
Detection of substrate vibrations is an evolutionarily old sensory modality and is important for predator detection as well as for intraspecific communication. In insects, substrate vibrations are detected mainly by scolopidial (chordotonal) sense organs found at different sites in the legs. Among these sense organs, the tibial subgenual organ (SGO) is one of the most sensitive sensors. The neuroanatomy and physiology of vibratory sense organs of cicadas is not well known. Here, we investigated the leg nerve by neuronal tracing and summed nerve recordings. Tracing with Neurobiotin revealed that the cicada Okanagana rimosa (Say) (Hemiptera: Cicadidae) has a femoral chordotonal organ with about 20 sensory cells and a tibial SGO with two sensory cells. Recordings from the leg nerve show that the vibrational response is broadly tuned with a threshold of about 1 m/s2 and a minimum latency of about 6 ms. The vibratory sense of cicadas might be used in predator avoidance and intraspecific communication, although no tuning to the peak frequency of the calling song (9 kHz) could be found.
Effect of ionizing radiation on the quantitative detection of Salmonella using real-time PCR
NASA Astrophysics Data System (ADS)
Lim, Sangyong; Jung, Jinwoo; Kim, Minjeong; Ryu, Sangryeol; Kim, Dongho
2008-09-01
Food irradiation is an economically viable technology for inactivating foodborne pathogens, but irradiation can mask pathogens in unhygienically prepared food. The aim of this study was to investigate the effect of irradiation treatment on the detection of Salmonella using real-time PCR. Three commercially available kits were tested, of which the InstaGene Matrix procedure was most effective in preparing template DNA from Salmonella exposed to radiation in broth culture. The minimum level of detection by real-time PCR combined with InstaGene Matrix was 3 log units of Salmonella per milliliter. However, when pure cultures of Salmonella were irradiated at 3 and 5 kGy, the cycle threshold (CT) increased 1-1.5-fold compared to irradiation at 0 and 1 kGy. This indicated that irradiation treatment may result in an underestimation of bacterial counts due to radiation-induced DNA lesions. We also compared CT values in inoculated chicken homogenates before and after irradiation, which in this model caused a 1.3-3.3-fold underestimation of bacterial counts with respect to irradiation dose.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor fault or component fault, but not both, this method considers sensor fault, actuator fault, and component fault under one systemic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
NASA Astrophysics Data System (ADS)
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, restricted by structure characteristics of the Shack Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, thus, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we firstly validate the feasibility of liquid crystal spatial light modulator (LC SLM) using in an AO system. Then, wavefront reconstruction method based on wavelet fractal interpolation is proposed after self-similarity analysis of wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is operated to multiresolution analyze the wavefront phase spectrum, during which soft threshold denoising is carried out. The resolution of estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is taken to recover wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with minimum variance estimation (MVE) method based on interpolation techniques, the proposed method could obtain superior homodyne detection efficiency with lower operation complexity. Our research findings have theoretical significance in the design of coherent FSO communication system.
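A minimal sketch of the wavelet decomposition and soft-threshold denoising stage described above, using PyWavelets for a 1-D phase profile. The wavelet family, decomposition level, and threshold rule are assumptions, and the fractal-interpolation upsampling stage of the method is omitted.

    import numpy as np
    import pywt

    def wavelet_denoise(phase, wavelet="db4", level=3):
        """Soft-threshold the detail coefficients of a 1-D wavefront phase profile
        and reconstruct it; only the decomposition/denoising stage is sketched."""
        coeffs = pywt.wavedec(phase, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate (assumed rule)
        thr = sigma * np.sqrt(2 * np.log(len(phase)))          # universal threshold (assumed)
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)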
van den Beld, Maaike J C; Friedrich, Alexander W; van Zanten, Evert; Reubsaet, Frans A G; Kooistra-Smid, Mirjam A M D; Rossen, John W A
2016-12-01
An inter-laboratory collaborative trial for the evaluation of diagnostics for detection and identification of Shigella species and Entero-invasive Escherichia coli (EIEC) was performed. Sixteen Medical Microbiological Laboratories (MMLs) participated. MMLs were interviewed about their diagnostic methods and a sample panel, consisting of DNA extracts and spiked stool samples with different concentrations of Shigella flexneri, was provided to each MML. The results of the trial showed an enormous variety in culture-dependent and molecular diagnostic techniques currently used among MMLs. Despite the various molecular procedures, 15 out of 16 MMLs were able to detect Shigella species or EIEC in all the samples provided, showing that the diversity of methods has no effect on the qualitative detection of Shigella flexneri. In contrast, in the semi-quantitative analysis the minimum and maximum values per sample differed by approximately five threshold cycles (Ct-values) between the MMLs included in the study. This indicates that defining a uniform Ct-value cut-off for notification to health authorities is not advisable.
The evolution of altruism in spatial threshold public goods games via an insurance mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Zhang, Chunyan
2015-05-01
The persistence of cooperation in public goods situations has become an important puzzle for researchers. This paper considers the threshold public goods games where the option of insurance is provided for players from the standpoint of diversification of risk, envisaging the possibility of multiple strategies in such scenarios. In this setting, the provision point is defined in terms of the minimum number of contributors in one threshold public goods game, below which the game fails. In the presence of risk and insurance, more contributions are motivated if (1) only cooperators can opt to be insured and thus their contribution loss in the aborted games can be (partly or full) covered by the insurance; (2) insured cooperators obtain larger compensation, at lower values of the threshold point (the required minimum number of contributors). Moreover, results suggest the dominance of insured defectors who get a better promotion by more profitable benefits from insurance. We provide results of extensive computer simulations in the realm of spatial games (random regular networks and scale-free networks here), and support this study with analytical results for well-mixed populations. Our study is expected to establish a causal link between the widespread altruistic behaviors and the existing insurance system.
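A heavily assumption-based sketch of a payoff rule with the features described above: the game succeeds only if the number of contributors reaches the provision point, and if it fails, insured cooperators recover part of their contribution at the cost of a premium. All parameter names and the specific payoff form are illustrative assumptions, not the paper's exact model.

    def payoffs(n_cooperators, group_size, provision_point, c=1.0, r=3.0,
                insured=False, premium=0.1, coverage=0.8):
        """Return (cooperator payoff, defector payoff) for one threshold public goods
        game; an illustrative payoff form, not the exact model of the paper."""
        if n_cooperators >= provision_point:            # provision point reached
            share = r * c * n_cooperators / group_size  # public good shared by all
            coop = share - c - (premium if insured else 0.0)
            defect = share
        else:                                           # game fails (aborted)
            coop = -c + (coverage * c - premium if insured else 0.0)
            defect = 0.0
        return coop, defect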
A comparison of five serological tests for bovine brucellosis.
Dohoo, I R; Wright, P F; Ruckerbauer, G M; Samagh, B S; Robertson, F J; Forbes, L B
1986-01-01
Five serological assays: the buffered plate antigen test, the standard tube agglutination test, the complement fixation test, the hemolysis-in-gel test and the indirect enzyme immunoassay were diagnostically evaluated. Test data consisted of results from 1208 cattle in brucellosis-free herds, 1578 cattle in reactor herds of unknown infection status and 174 cattle from which Brucella abortus had been cultured. The complement fixation test had the highest specificity in both nonvaccinated and vaccinated cattle. The indirect enzyme immunoassay, if interpreted at a high threshold, also exhibited a high specificity in both groups of cattle. The hemolysis-in-gel test had a very high specificity when used in nonvaccinated cattle but quite a low specificity among vaccinates. With the exception of the complement fixation test, all tests had high sensitivities if interpreted at the minimum threshold. However, the sensitivities of the standard tube agglutination test and indirect enzyme immunoassay, when interpreted at high thresholds were comparable to that of the complement fixation test. A kappa statistic was used to measure the agreement between the various tests. In general the kappa statistics were quite low, suggesting that the various tests may detect different antibody isotypes. There was however, good agreement between the buffered plate antigen test and standard tube agglutination test (the two agglutination tests evaluated) and between the complement fixation test and the indirect enzyme immunoassay when interpreted at a high threshold. With the exception of the buffered plate antigen test, all tests were evaluated as confirmatory tests by estimating their specificity and sensitivity on screening-test positive samples.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:3539295
32 CFR 32.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... acceptable characteristics or minimum acceptable standards. (iv) The specific features of “brand name or... expected to exceed the simplified acquisition threshold, specifies a “brand name” product. (4) The proposed...
Changes in tropical precipitation cluster size distributions under global warming
NASA Astrophysics Data System (ADS)
Neelin, J. D.; Quinn, K. M.
2016-12-01
The total amount of precipitation integrated across a tropical storm or other precipitation feature (contiguous clusters of precipitation exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance. To establish baseline behavior in current climate, the probability distribution of cluster sizes from multiple satellite retrievals and National Center for Environmental Prediction (NCEP) reanalysis is compared to those from Coupled Model Intercomparison Project (CMIP5) models and the Geophysical Fluid Dynamics Laboratory high-resolution atmospheric model (HIRAM-360 and -180). With the caveat that a minimum rain rate threshold is important in the models (which tend to overproduce low rain rates), the models agree well with observations in leading properties. In particular, scale-free power law ranges in which the probability drops slowly with increasing cluster size are well modeled, followed by a rapid drop in probability of the largest clusters above a cutoff scale. Under the RCP 8.5 global warming scenario, the models indicate substantial increases in probability (up to an order of magnitude) of the largest clusters by the end of century. For models with continuous time series of high resolution output, there is substantial spread on when these probability increases for the largest precipitation clusters should be detectable, ranging from detectable within the observational period to statistically significant trends emerging only in the second half of the century. Examination of NCEP reanalysis and SSMI/SSMIS series of satellite retrievals from 1979 to present does not yield reliable evidence of trends at this time. The results suggest improvements in inter-satellite calibration of the SSMI/SSMIS retrievals could aid future detection.
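A minimal sketch, assuming a 2-D rain-rate array and SciPy, of how contiguous precipitation clusters above a minimum rain-rate threshold can be identified and their integrated precipitation computed, the cluster-size measure described above. The threshold value and function name are assumptions.

    import numpy as np
    from scipy import ndimage

    def cluster_sizes(rain_rate, min_rate=0.5):
        """Label contiguous grid cells with rain_rate >= min_rate (mm/hr, assumed
        threshold) and return the rain rate summed over each cluster as a proxy
        for cluster-integrated precipitation."""
        mask = rain_rate >= min_rate
        labels, n = ndimage.label(mask)     # nearest-neighbour (cross) connectivity by default
        if n == 0:
            return np.array([])
        return ndimage.sum(rain_rate, labels, index=np.arange(1, n + 1))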
Infrared imaging based hyperventilation monitoring through respiration rate estimation
NASA Astrophysics Data System (ADS)
Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna
2016-07-01
A change in skin temperature is used as an indicator of physical illness and can be detected through infrared thermography. Thermograms, or thermal images, can be used as an effective diagnostic tool for monitoring and diagnosing various diseases. This paper describes an infrared-thermography-based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest in thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used. The thermal signature around the extracted region is detected and subsequently filtered through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. The results obtained are compared with standard contact-based methods and show significant correlations. It is envisaged that the thermal-image-based approach will not only help in detecting hyperventilation but can also assist in regular stress monitoring, as it is a non-invasive method.
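A minimal sketch, in the spirit of the pipeline just described, of estimating a respiration rate by band-pass filtering the region-of-interest thermal signal around typical breathing frequencies and taking the dominant spectral peak; the filter order and band edges are assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def respiration_rate_bpm(signal, fs, low_hz=0.1, high_hz=0.85):
        """Band-pass the region-of-interest thermal signal sampled at fs (Hz) and
        return the dominant frequency in breaths per minute (band edges assumed)."""
        b, a = butter(2, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
        return float(freqs[np.argmax(spectrum)]) * 60.0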
Stormwater plume detection by MODIS imagery in the southern California coastal ocean
Nezlin, N.P.; DiGiacomo, P.M.; Diehl, D.W.; Jones, B.H.; Johnson, S.C.; Mengel, M.J.; Reifel, K.M.; Warrick, J.A.; Wang, M.
2008-01-01
Stormwater plumes in the southern California coastal ocean were detected by MODIS-Aqua satellite imagery and compared to ship-based data on surface salinity and fecal indicator bacterial (FIB) counts collected during the Bight'03 Regional Water Quality Program surveys in February-March of 2004 and 2005. MODIS imagery was processed using a combined near-infrared/shortwave-infrared (NIR-SWIR) atmospheric correction method, which substantially improved normalized water-leaving radiation (nLw) optical spectra in coastal waters with high turbidity. Plumes were detected using a minimum-distance supervised classification method based on nLw spectra averaged within the training areas, defined as circular zones of 1.5-5.0-km radii around field stations with a surface salinity of S < 32.0 ('plume') and S > 33.0 ('ocean'). The plume optical signatures (i.e., the nLw differences between 'plume' and 'ocean') were most evident during the first 2 days after the rainstorms. To assess the accuracy of plume detection, stations were classified into 'plume' and 'ocean' using two criteria: (1) 'plume' included the stations with salinity below a certain threshold estimated from the maximum accuracy of plume detection; and (2) FIB counts in 'plume' exceeded the California State Water Board standards. The salinity threshold between 'plume' and 'ocean' was estimated as 32.2. The total accuracy of plume detection in terms of surface salinity was not high (68% on average), seemingly because of imperfect correlation between plume salinity and ocean color. The accuracy of plume detection in terms of FIB exceedances was even lower (64% on average), resulting from low correlation between ocean color and bacterial contamination. Nevertheless, satellite imagery was shown to be a useful tool for the estimation of the extent of potentially polluted plumes, which was hardly achievable by direct sampling methods (in particular, because the grids of ship-based stations covered only small parts of the plumes detected via synoptic MODIS imagery). In most southern California coastal areas, the zones of bacterial contamination were much smaller than the areas of turbid plumes; an exception was the plume of the Tijuana River, where the zone of bacterial contamination was comparable with the zone of plume detected by ocean color. © 2008 Elsevier Ltd.
Stormwater plume detection by MODIS imagery in the southern California coastal ocean
NASA Astrophysics Data System (ADS)
Nezlin, Nikolay P.; DiGiacomo, Paul M.; Diehl, Dario W.; Jones, Burton H.; Johnson, Scott C.; Mengel, Michael J.; Reifel, Kristen M.; Warrick, Jonathan A.; Wang, Menghua
2008-10-01
Stormwater plumes in the southern California coastal ocean were detected by MODIS-Aqua satellite imagery and compared to ship-based data on surface salinity and fecal indicator bacterial (FIB) counts collected during the Bight'03 Regional Water Quality Program surveys in February-March of 2004 and 2005. MODIS imagery was processed using a combined near-infrared/shortwave-infrared (NIR-SWIR) atmospheric correction method, which substantially improved normalized water-leaving radiation (nLw) optical spectra in coastal waters with high turbidity. Plumes were detected using a minimum-distance supervised classification method based on nLw spectra averaged within the training areas, defined as circular zones of 1.5-5.0-km radii around field stations with a surface salinity of S < 32.0 ("plume") and S > 33.0 ("ocean"). The plume optical signatures (i.e., the nLw differences between "plume" and "ocean") were most evident during the first 2 days after the rainstorms. To assess the accuracy of plume detection, stations were classified into "plume" and "ocean" using two criteria: (1) "plume" included the stations with salinity below a certain threshold estimated from the maximum accuracy of plume detection; and (2) FIB counts in "plume" exceeded the California State Water Board standards. The salinity threshold between "plume" and "ocean" was estimated as 32.2. The total accuracy of plume detection in terms of surface salinity was not high (68% on average), seemingly because of imperfect correlation between plume salinity and ocean color. The accuracy of plume detection in terms of FIB exceedances was even lower (64% on average), resulting from low correlation between ocean color and bacterial contamination. Nevertheless, satellite imagery was shown to be a useful tool for the estimation of the extent of potentially polluted plumes, which was hardly achievable by direct sampling methods (in particular, because the grids of ship-based stations covered only small parts of the plumes detected via synoptic MODIS imagery). In most southern California coastal areas, the zones of bacterial contamination were much smaller than the areas of turbid plumes; an exception was the plume of the Tijuana River, where the zone of bacterial contamination was comparable with the zone of plume detected by ocean color.
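The minimum-distance supervised classification used in both versions of this study can be sketched as follows; the five-band spectra and class means are illustrative placeholders, not the study's nLw data.

```python
# Minimal sketch of a minimum-distance classifier for nLw spectra.
import numpy as np

# Mean nLw spectra over the "plume" and "ocean" training areas (assumed values).
class_means = {
    "plume": np.array([1.8, 1.6, 1.2, 0.9, 0.5]),
    "ocean": np.array([1.1, 1.0, 0.7, 0.4, 0.1]),
}

def classify(nlw_spectrum):
    """Assign a pixel to the class whose mean spectrum is nearest (Euclidean)."""
    return min(class_means, key=lambda c: np.linalg.norm(nlw_spectrum - class_means[c]))

pixel = np.array([1.6, 1.5, 1.1, 0.8, 0.45])
print(classify(pixel))  # -> 'plume'
```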
NASA Astrophysics Data System (ADS)
Yao, Yuangen; Ma, Chengzhang; Wang, Canjun; Yi, Ming; Gui, Rong
2018-02-01
We study the effects of multiplicative and additive cross-correlated sine-Wiener (CCSW) noises on the performance of sub-threshold periodic signal detection in the FitzHugh-Nagumo (FHN) neuron by calculating the Fourier coefficient Q, which measures synchronization between the sub-threshold input signal and the response of the system. Transitions of electrical activity induced by the CCSW noises can be observed in the FHN neuron model. Moreover, the best performance of sub-threshold periodic signal detection is achieved at moderate noise strength, cross-correlation time and cross-correlation strength of the CCSW noises, which indicates the occurrence of CCSW noise-induced stochastic resonance. Furthermore, the performance of sub-threshold signal detection is strongly sensitive to the cross-correlation time of the CCSW noises. Therefore, the performance can be effectively controlled by regulating the cross-correlation time of the CCSW noises. These results provide a possible mechanism for amplifying or detecting sub-threshold signals in the nervous system.
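A minimal sketch of the Fourier-coefficient measure Q is given below; the synthetic response trace and the exact normalization are assumptions, as the paper's precise definition may differ.

```python
# Minimal sketch of the Fourier-coefficient measure Q used to quantify
# synchronization between a periodic input and a system's response.
import numpy as np

def fourier_q(x, t, f_signal):
    """Q = sqrt(Qsin^2 + Qcos^2), with Qsin, Qcos the first Fourier coefficients
    of the response x(t) at the input frequency f_signal."""
    w = 2 * np.pi * f_signal
    q_sin = 2 * np.trapz(x * np.sin(w * t), t) / (t[-1] - t[0])
    q_cos = 2 * np.trapz(x * np.cos(w * t), t) / (t[-1] - t[0])
    return np.hypot(q_sin, q_cos)

t = np.linspace(0, 100, 20000)
response = 0.8 * np.sin(2 * np.pi * 0.1 * t + 0.3) + 0.2 * np.random.randn(t.size)
print(fourier_q(response, t, f_signal=0.1))  # larger Q = stronger synchronization
```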
Effects of visual erotic stimulation on vibrotactile detection thresholds in men.
Jiao, Chuanshu; Knight, Peter K; Weerakoon, Patricia; Turman, A Bulent
2007-12-01
This study examined the effects of sexual arousal on vibration detection thresholds in the right index finger of 30 healthy, heterosexual males who reported no sexual dysfunction. Vibrotactile detection thresholds at frequencies of 30, 60, and 100 Hz were assessed, using a forced-choice staircase method, before and after watching erotic and control videos. A mechanical stimulator was used to produce the vibratory stimulus. Results were analyzed using repeated measures analysis of variance. After watching the erotic video, the vibrotactile detection thresholds at 30, 60, and 100 Hz were significantly reduced (p < .01). No changes in thresholds were detected at any frequency following exposure to the non-erotic stimulus. The results show that sexual arousal resulted in an increase in vibrotactile sensitivity to low frequency stimuli in the index finger of sexually functional men.
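The forced-choice staircase idea can be sketched with a simulated observer; the 2-down/1-up rule, step size and psychometric function below are assumptions and do not reproduce the study's exact procedure.

```python
# Minimal sketch of a 2-down/1-up adaptive staircase for estimating a detection threshold.
import numpy as np

rng = np.random.default_rng(1)
true_threshold = 0.5            # stimulus level giving ~50% detection (assumed observer)
level, step = 2.0, 0.2          # starting level and fixed step size
reversals, correct_streak, direction = [], 0, 0

def observer_detects(level):
    """Simulated observer with a logistic psychometric function."""
    p = 1.0 / (1.0 + np.exp(-(level - true_threshold) / 0.1))
    return rng.random() < p

while len(reversals) < 12:
    if observer_detects(level):
        correct_streak += 1
        if correct_streak == 2:          # two correct in a row -> make it harder
            correct_streak = 0
            if direction == +1:
                reversals.append(level)
            direction = -1
            level -= step
    else:
        correct_streak = 0               # one miss -> make it easier
        if direction == -1:
            reversals.append(level)
        direction = +1
        level += step

print(round(np.mean(reversals[-8:]), 2))  # estimate near the 70.7%-correct point
```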
Biver, Marc; Filella, Montserrat
2016-05-03
The toxicity of Cd being well established and that of Te suspected, the bulk, surface-normalized steady-state dissolution rates of two industrially important binary tellurides, polycrystalline cadmium telluride and bismuth telluride, were studied over the pH range 3-11, at various temperatures (25-70 °C) and dissolved oxygen concentrations (0-100% O2 in the gas phase). The behaviors of the two tellurides are strikingly different. The dissolution rates of CdTe monotonically decreased with increasing pH, the trend becoming more pronounced with increasing temperature. Activation energies were of the order of magnitude associated with surface-controlled processes; they decreased with decreasing acidity. At pH 7, the CdTe dissolution rate increased linearly with dissolved oxygen. In anoxic solution, CdTe dissolved at a finite rate. In contrast, the dissolution rate of Bi2Te3 passed through a minimum at pH 5.3. The activation energy had a maximum at the pH 5.3 rate minimum and fell below the threshold for diffusion control at pH 11. No oxygen dependence was detected. Bi2Te3 dissolves much more slowly than CdTe, by one to more than 3.5 orders of magnitude at the Bi2Te3 rate minimum. Both will readily dissolve under long-term landfill deposition conditions, but comparatively slowly.
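The activation energies discussed above come from Arrhenius analysis of rates measured at several temperatures; the sketch below shows such a fit with placeholder rate values, not the paper's data.

```python
# Minimal sketch of extracting an apparent activation energy from dissolution
# rates at several temperatures via an Arrhenius fit.
import numpy as np

R = 8.314  # J mol^-1 K^-1
T = np.array([298.15, 313.15, 328.15, 343.15])          # 25-70 degC
rates = np.array([2.0e-11, 5.5e-11, 1.4e-10, 3.2e-10])  # mol m^-2 s^-1 (assumed)

# ln(rate) = ln(A) - Ea / (R*T): the slope of ln(rate) vs 1/T gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(rates), 1)
Ea_kJ_per_mol = -slope * R / 1000.0
print(round(Ea_kJ_per_mol, 1))
```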
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
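For readers unfamiliar with the LP relaxation of minimum vertex cover studied in these two entries, the sketch below solves it on a small example graph with scipy; the graph is an assumption and the cavity-method analysis of the paper is not reproduced.

```python
# Minimal sketch of the LP relaxation of minimum vertex cover.
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # example graph (assumption)
n = 4

c = np.ones(n)                       # minimize sum_i x_i
A_ub = np.zeros((len(edges), n))     # x_u + x_v >= 1  ->  -x_u - x_v <= -1
for row, (u, v) in enumerate(edges):
    A_ub[row, u] = A_ub[row, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
print(res.fun, res.x)  # half-integral solutions (0, 1/2, 1) are typical for VC-LP
```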
Detection of hail signatures from single-polarization C-band radar reflectivity
NASA Astrophysics Data System (ADS)
Kunz, Michael; Kugel, Petra I. S.
2015-02-01
Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of the highest Heidke Skill Score (HSS) or Critical Success Index (CSI) are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
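The categorical scores used to rank the hail criteria can be computed from a 2x2 contingency table as sketched below; the counts are invented for illustration.

```python
# Minimal sketch of the categorical verification scores HSS and CSI.
def csi(hits, false_alarms, misses):
    """Critical Success Index = hits / (hits + false alarms + misses)."""
    return hits / (hits + false_alarms + misses)

def hss(hits, false_alarms, misses, correct_negatives):
    """Heidke Skill Score: skill relative to random chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

# Example: radar criterion vs. insurance loss days (hypothetical counts).
print(csi(hits=42, false_alarms=75, misses=8))
print(hss(hits=42, false_alarms=75, misses=8, correct_negatives=4800))
```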
A differentially amplified motion in the ear for near-threshold sound detection
Chen, Fangyi; Zha, Dingjun; Fridberger, Anders; Zheng, Jiefu; Choudhury, Niloy; Jacques, Steven L.; Wang, Ruikang K.; Shi, Xiaorui; Nuttall, Alfred L.
2011-01-01
The ear is a remarkably sensitive pressure fluctuation detector. In guinea pigs, behavioral measurements indicate a minimum detectable sound pressure of ~20 μPa at 16 kHz. Such faint sounds produce 0.1 nm basilar membrane displacements, a distance smaller than conformational transitions in ion channels. It seems that noise within the auditory system would swamp such tiny motions, making weak sounds imperceptible. Here, a new mechanism contributing to a resolution of this problem is proposed and validated through direct measurement. We hypothesize that vibration at the apical end of hair cells is enhanced compared to the commonly measured basilar membrane side. Using in vivo optical coherence tomography, we demonstrated that apical-side vibrations peak at a higher frequency, had different timing, and were enhanced compared to the basilar membrane. These effects depend nonlinearly on the stimulus level. The timing difference and enhancement are important for explaining how the noise problem is circumvented. PMID:21602821
NASA Technical Reports Server (NTRS)
Siegmund, Oswald H. W.; Everman, E.; Vallerga, J. V.; Sokolowski, J.; Lampton, M.
1987-01-01
The quantum detection efficiency (QDE) of potassium bromide as a photocathode applied directly to the surface of a microchannel plate over the 250-1600 A wavelength range has been measured. The contributions of the photocathode material in the channels and on the interchannel web to the QDE have been determined. Two broad peaks in the QDE centered at about 450 and about 1050 A are apparent, the former with about 50 percent peak QDE and the latter with about 40 percent peak QDE. The photoelectric threshold is observed at about 1600 A, and there is a narrow QDE minimum at about 750 A which correlates with twice the band gap energy for KBr. The angular variation of the QDE from 0 to 40 deg to the channel axis has also been examined. The stability of KBr with time is shown to be good, with no significant degradation of QDE at wavelengths below 1216 A over a 15-day period in air.
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
An Evaluation of Performance Thresholds in Nursing Home Pay-for-Performance.
Werner, Rachel M; Skira, Meghan; Konetzka, R Tamara
2016-12-01
Performance thresholds are commonly used in pay-for-performance (P4P) incentives, where providers receive a bonus payment for achieving a prespecified target, but they may produce discontinuous incentives, with providers just below the threshold having the strongest incentive to improve and providers either far below or far above the threshold having little incentive. We investigate the effect of performance thresholds on provider response in the setting of nursing home P4P. The Minimum Data Set (MDS) and Online Survey, Certification, and Reporting (OSCAR) datasets. Difference-in-differences design to test for changes in nursing home performance in three states that implemented threshold-based P4P (Colorado, Georgia, and Oklahoma) versus three comparator states (Arizona, Tennessee, and Arkansas) between 2006 and 2009. We find that those farthest below the threshold (i.e., the worst-performing nursing homes) had the largest improvements under threshold-based P4P, while those farthest above the threshold worsened. This effect did not vary with the percentage of Medicaid residents in a nursing home. Threshold-based P4P may provide perverse incentives for nursing homes above the performance threshold, but we do not find evidence to support concerns about the effects of performance thresholds on low-performing nursing homes. © Health Research and Educational Trust.
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
40 CFR 30.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... characteristics or minimum acceptable standards. (iv) The specific features of “brand name or equal” descriptions..., specifies a “brand name” product. (4) The proposed award over the small purchase threshold is to be awarded...
Nonlinear distortion analysis for single heterojunction GaAs HEMT with frequency and temperature
NASA Astrophysics Data System (ADS)
Alim, Mohammad A.; Ali, Mayahsa M.; Rezazadeh, Ali A.
2018-07-01
Nonlinearity analysis using the two-tone intermodulation distortion (IMD) technique for a 0.5 μm gate-length AlGaAs/GaAs based high electron mobility transistor has been carried out as a function of biasing conditions, input power, frequency and temperature. The outcomes indicate a significant modification of the output IMD power as well as of the minimum distortion level. The input IMD power affects the output current, and subsequently the threshold voltage reduces, resulting in an increase in the output IMD power. Both frequency and temperature reduce the magnitude of the output IMDs. In addition, the threshold voltage response with temperature shifts the notch point of the nonlinear output IMDs accordingly. The aforementioned investigation will help circuit designers to evaluate the best biasing option in terms of minimum distortion and maximum gain for future design optimizations.
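A two-tone IMD measurement of the kind described above can be emulated numerically; the memoryless cubic nonlinearity and tone frequencies below are assumptions, not a model of the HEMT.

```python
# Minimal sketch of a two-tone intermodulation test on a weakly nonlinear device model.
import numpy as np

fs, n = 1.0e6, 2 ** 16
t = np.arange(n) / fs
f1, f2 = 100e3, 110e3                       # two input tones
x = 0.1 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

y = x + 0.5 * x ** 2 - 2.0 * x ** 3         # memoryless third-order transfer (assumed)

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / fs)

def level_db(f):
    """Spectral magnitude (dB) at the bin nearest frequency f."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))])

# Third-order products appear at 2*f1 - f2 and 2*f2 - f1.
print(level_db(f1), level_db(2 * f1 - f2), level_db(2 * f2 - f1))
```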
Generalized minimum dominating set and application in automatic text summarization
NASA Astrophysics Data System (ADS)
Xu, Yi-Zhi; Zhou, Hai-Jun
2016-03-01
For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method to this automatic text summarization problem.
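A simple greedy baseline for the generalized MDS problem defined above is sketched below; it is not the belief-propagation method of the paper, and the example graph and threshold are assumptions.

```python
# Minimal greedy sketch: pick vertices until every outside vertex's summed edge
# weight into the chosen set reaches the threshold.
def greedy_generalized_mds(weights, threshold):
    """weights[u][v] = edge weight between u and v (symmetric dict of dicts)."""
    vertices = set(weights)
    chosen, covered_weight = set(), {v: 0.0 for v in vertices}

    def uncovered():
        return [v for v in vertices - chosen if covered_weight[v] < threshold]

    while uncovered():
        # Pick the vertex adding the most still-needed coverage to outside vertices.
        def gain(u):
            return sum(min(weights[u].get(v, 0.0), threshold - covered_weight[v])
                       for v in uncovered() if v != u)
        best = max(vertices - chosen, key=gain)
        chosen.add(best)
        for v, w in weights[best].items():
            covered_weight[v] += w
    return chosen

g = {0: {1: 0.6, 2: 0.5}, 1: {0: 0.6, 2: 0.7}, 2: {0: 0.5, 1: 0.7, 3: 1.0}, 3: {2: 1.0}}
print(greedy_generalized_mds(g, threshold=1.0))
```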
Beam loss detection system in the arcs of the LHC
NASA Astrophysics Data System (ADS)
Arauzo, A.; Bovet, C.
2000-11-01
Over the whole circumference of the LHC, Beam Loss Monitors (BLM) will be needed for a continuous surveillance of fast and slow beam losses. In this paper, the location of the BLMs set outside the magnet cryostats in the arcs is proposed. In order to know the number of protons lost on the beam screen, the sensitivity of each BLM has been computed using the program GEANT 3.21, which generates the shower inside the cryostat. The material and the magnetic fields have been described thoroughly in 3-D and the simulation results show the best locations for 6 BLMs needed around each quadrupole. The number of minimum ionizing particles received for each lost proton serves to define local thresholds to dump the beam when the losses are menacing to quench a magnet.
Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.
Zhang, Jianming; Sclaroff, Stan
2016-05-01
We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, which are generated by randomly thresholding the image's feature maps in a whitened feature space. Based on a Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.
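The core Boolean-map step can be sketched as follows; whitening, multiple feature channels and BMS's post-processing are omitted, and the toy image is an assumption.

```python
# Minimal sketch: threshold a feature map at random levels and mark regions not
# connected to the image border as "surrounded" (attention).
import numpy as np
from scipy import ndimage

def boolean_map_attention(feature_map, n_thresholds=16, rng=None):
    rng = rng or np.random.default_rng(0)
    attention = np.zeros_like(feature_map, dtype=float)
    lo, hi = feature_map.min(), feature_map.max()
    for thr in rng.uniform(lo, hi, size=n_thresholds):
        for bmap in (feature_map > thr, feature_map <= thr):   # map and its complement
            labels, n = ndimage.label(bmap)
            border_labels = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                                      labels[:, 0], labels[:, -1]]))
            surrounded = bmap & ~np.isin(labels, border_labels)
            attention += surrounded
    return attention / attention.max() if attention.max() > 0 else attention

img = np.zeros((64, 64)); img[20:40, 25:45] = 1.0   # toy "object" on a background
print(boolean_map_attention(img).max())
```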
Ito, Takao; Suzaki, Koichi
2017-01-01
Phytoplasmas and Xylella spp. are bacteria that cause many economically important plant diseases worldwide. TaqMan probe-based quantitative real-time polymerase chain reaction (qPCR) assays have been utilized to universally detect phytoplasmas or Xylella fastidiosa. To develop a superior universal qPCR method, we used a dual priming oligonucleotide (DPO) with two annealing sites as a reverse primer to target the well-conserved bacterial 16S rDNA. The new qPCR assays universally detected various species of phytoplasmas and subspecies of X. fastidiosa as well as Xylella taiwanensis, and generally showed superior threshold cycle values when amplifying specific or non-specific products compared to current universal qPCR assays. The proposed qPCR assays were integrated to develop a multiplex qPCR assay that simultaneously detected phytoplasmas, Xylella spp., and an internal plant DNA positive control within 1 hour. This assay could detect a minimum of ten bacterial cells and was compatible with crude extractions used in the rapid screening of various plants. The amplicons were of sufficient lengths to be directly sequenced for preliminary identification, and the primers could be used in universal conventional PCR assays. Additionally, reverse DPO primers can be utilized to improve other probe-based qPCR assays.
Suzaki, Koichi
2017-01-01
Phytoplasmas and Xylella spp. are bacteria that cause many economically important plant diseases worldwide. TaqMan probe-based quantitative real-time polymerase chain reaction (qPCR) assays have been utilized to universally detect phytoplasmas or Xylella fastidiosa. To develop a superior universal qPCR method, we used a dual priming oligonucleotide (DPO) with two annealing sites as a reverse primer to target the well-conserved bacterial 16S rDNA. The new qPCR assays universally detected various species of phytoplasmas and subspecies of X. fastidiosa as well as Xylella taiwanensis, and generally showed superior threshold cycle values when amplifying specific or non-specific products compared to current universal qPCR assays. The proposed qPCR assays were integrated to develop a multiplex qPCR assay that simultaneously detected phytoplasmas, Xylella spp., and an internal plant DNA positive control within 1 hour. This assay could detect a minimum of ten bacterial cells and was compatible with crude extractions used in the rapid screening of various plants. The amplicons were of sufficient lengths to be directly sequenced for preliminary identification, and the primers could be used in universal conventional PCR assays. Additionally, reverse DPO primers can be utilized to improve other probe-based qPCR assays. PMID:28957362
Rainfall thresholds as a landslide indicator for engineered slopes on the Irish Rail network
NASA Astrophysics Data System (ADS)
Martinović, Karlo; Gavin, Kenneth; Reale, Cormac; Mangan, Cathal
2018-04-01
Rainfall thresholds express the minimum levels of rainfall that need to be reached or exceeded in order for landslides to occur in a particular area. They are a common tool for expressing the temporal component of landslide hazard analysis. Numerous rainfall thresholds have been developed for different areas worldwide; however, none of these are focused on landslides occurring on the engineered slopes of transport infrastructure networks. This paper uses an empirical method to develop rainfall thresholds for landslides on the Irish Rail network earthworks. For comparison, rainfall thresholds are also developed for natural terrain in Ireland. The results show that thresholds involving relatively low rainfall intensities are applicable for Ireland, owing to its specific climate. Furthermore, the comparison shows that rainfall thresholds for engineered slopes are lower than those for landslides occurring on natural terrain. This has severe implications, as it indicates that there is significant risk involved in using generic weather alerts (developed largely for natural terrain) for infrastructure management, and it showcases the need for developing railway- and road-specific rainfall thresholds for landslides.
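An empirical intensity-duration threshold of the general form I = a * D^b can be fitted as sketched below; the event data, the power-law form and the 5th-percentile envelope are assumptions, since the paper's exact procedure is not reproduced here.

```python
# Minimal sketch of fitting an empirical intensity-duration rainfall threshold.
import numpy as np

# (duration [h], mean intensity [mm/h]) of rainfall events that triggered failures (assumed)
events = np.array([[6, 8.0], [12, 5.5], [24, 3.2], [48, 2.1], [72, 1.4], [96, 1.1]])
D, I = events[:, 0], events[:, 1]

# Fit log10(I) = log10(a) + b*log10(D), then shift the line down to the 5th
# percentile of residuals so ~95% of triggering events lie above the threshold.
b, log_a = np.polyfit(np.log10(D), np.log10(I), 1)
residuals = np.log10(I) - (log_a + b * np.log10(D))
log_a_thr = log_a + np.percentile(residuals, 5)

def threshold_intensity(duration_h):
    return 10 ** (log_a_thr + b * np.log10(duration_h))

print(round(threshold_intensity(24), 2))  # mm/h over 24 h needed to raise an alert
```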
Salicylate-Induced Hearing Loss and Gap Detection Deficits in Rats
Radziwon, Kelly E.; Stolzberg, Daniel J.; Urban, Maxwell E.; Bowler, Rachael A.; Salvi, Richard J.
2015-01-01
To test the “tinnitus gap-filling” hypothesis in an animal psychoacoustic paradigm, rats were tested using a go/no-go operant gap detection task in which silent intervals of various durations were embedded within a continuous noise. Gap detection thresholds were measured before and after treatment with a dose of sodium salicylate (200 mg/kg) that reliably induces tinnitus in rats. Noise-burst detection thresholds were also measured to document the amount of hearing loss and aid in interpreting the gap detection results. As in the previous human psychophysical experiments, salicylate had little or no effect on gap thresholds measured in broadband noise presented at high-stimulus levels (30–60 dB SPL); gap detection thresholds were always 10 ms or less. Salicylate also did not affect gap thresholds presented in narrowband noise at 60 dB SPL. Therefore, rats treated with a dose of salicylate that reliably induces tinnitus have no difficulty detecting silent gaps as long as the noise in which they are embedded is clearly audible. PMID:25750635
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Ertl, Peter; Kruse, Annika; Tilp, Markus
2016-10-01
The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts within the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality, 21 were of good quality, and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases in EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle-ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one-minute protocols with increments between 10 and 25 W appear most appropriate to detect the muscular threshold, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMGTs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in the health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
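Because both the objective and the constraint described above are monotone in the threshold, the constrained threshold choice reduces to a one-dimensional sweep, as in the sketch below; the synthetic risk scores and the 20% cap are assumptions, and the Super Learner ensemble is not reproduced.

```python
# Minimal sketch: choose a classification threshold maximizing sensitivity
# subject to a cap on the rate of positive predictions.
import numpy as np

def best_threshold(scores, labels, max_positive_rate):
    """Sweep candidate thresholds; keep the feasible one with highest sensitivity."""
    best_t, best_sens = None, -1.0
    for t in np.unique(scores):
        preds = scores >= t
        if preds.mean() > max_positive_rate:
            continue                      # constraint violated: too many offered PrEP
        sens = preds[labels == 1].mean() if (labels == 1).any() else 0.0
        if sens > best_sens:
            best_t, best_sens = t, sens
    return best_t, best_sens

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=1000)
scores = 0.6 * labels + rng.normal(0, 0.4, size=1000)   # synthetic risk scores
print(best_threshold(scores, labels, max_positive_rate=0.2))
```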
Duncan, John M A; Dash, Jadunandan; Atkinson, Peter M
2015-04-01
Remote sensing-derived wheat crop yield-climate models were developed to highlight the impact of temperature variation during thermo-sensitive periods (anthesis and grain-filling; TSP) of wheat crop development. Specific questions addressed are: can the impact of temperature variation occurring during the TSP on wheat crop yield be detected using remote sensing data, and what is the impact? Do crop critical temperature thresholds during the TSP exist in real-world cropping landscapes? These questions are tested in one of the world's major wheat breadbaskets, Punjab and Haryana in north-west India. Warming average minimum temperatures during the TSP had a greater negative impact on wheat crop yield than warming maximum temperatures. Warming minimum and maximum temperatures during the TSP explain a greater amount of variation in wheat crop yield than average growing season temperature. In complex real-world cereal croplands there was a variable yield response to critical temperature threshold exceedance, specifically a more pronounced negative impact on wheat yield with increased warming events above 35 °C. The negative impact of warming increases with a later start-of-season, suggesting that earlier sowing can reduce wheat crop exposure to harmful temperatures. However, even earlier-sown wheat experienced temperature-induced yield losses, which, when viewed in the context of projected warming up to 2100, indicates that adaptive responses should focus on increasing wheat tolerance to heat. This study shows it is possible to capture the impacts of temperature variation during the TSP on wheat crop yield in real-world cropping landscapes using remote sensing data; this has important implications for monitoring the impact of climate change, variation and heat extremes on wheat croplands. © 2014 John Wiley & Sons Ltd.
Neurometric amplitude-modulation detection threshold in the guinea-pig ventral cochlear nucleus
Sayles, Mark; Füllgrabe, Christian; Winter, Ian M
2013-01-01
Amplitude modulation (AM) is a pervasive feature of natural sounds. Neural detection and processing of modulation cues is behaviourally important across species. Although most ecologically relevant sounds are not fully modulated, physiological studies have usually concentrated on fully modulated (100% modulation depth) signals. Psychoacoustic experiments mainly operate at low modulation depths, around detection threshold (∼5% AM). We presented sinusoidal amplitude-modulated tones, systematically varying modulation depth between zero and 100%, at a range of modulation frequencies, to anaesthetised guinea-pigs while recording spikes from neurons in the ventral cochlear nucleus (VCN). The cochlear nucleus is the site of the first synapse in the central auditory system. At this locus significant signal processing occurs with respect to representation of AM signals. Spike trains were analysed in terms of the vector strength of spike synchrony to the amplitude envelope. Neurons showed either low-pass or band-pass temporal modulation transfer functions, with the proportion of band-pass responses increasing with increasing sound level. The proportion of units showing a band-pass response varies with unit type: sustained chopper (CS) > transient chopper (CT) > primary-like (PL). Spike synchrony increased with increasing modulation depth. At the lowest modulation depth (6%), significant spike synchrony was only observed near to the unit's best modulation frequency for all unit types tested. Modulation tuning therefore became sharper with decreasing modulation depth. AM detection threshold was calculated for each individual unit as a function of modulation frequency. Chopper units have significantly better AM detection thresholds than do primary-like units. AM detection threshold is significantly worse at 40 dB vs. 10 dB above pure-tone spike rate threshold. Mean modulation detection thresholds for sounds 10 dB above pure-tone spike rate threshold at best modulation frequency are (95% CI) 11.6% (10.0–13.1) for PL units, 9.8% (8.2–11.5) for CT units, and 10.8% (8.4–13.2) for CS units. The most sensitive guinea-pig VCN single unit AM detection thresholds are similar to human psychophysical performance (∼3% AM), while the mean neurometric thresholds approach whole animal behavioural performance (∼10% AM). PMID:23629508
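The vector-strength measure of spike synchrony used above is sketched below with synthetic spike times; it is not the recorded VCN data.

```python
# Minimal sketch of vector strength: the magnitude of the mean unit vector at
# each spike's modulation phase (1 = perfect phase locking, 0 = none).
import numpy as np

def vector_strength(spike_times, mod_freq):
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(3)
mod_freq = 150.0                                   # modulation frequency [Hz] (assumed)
# Spikes clustered near a preferred envelope phase plus timing jitter:
spikes = (np.arange(200) / mod_freq) + rng.normal(0, 0.4e-3, size=200)
print(round(vector_strength(spikes, mod_freq), 2))
```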
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems because of their several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct-calculation and equivalent-calculation methods for the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, which indicates that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models can effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
Effect of strong fragrance on olfactory detection threshold.
Fasunla, Ayotunde James; Douglas, David Dayo; Adeosun, Aderemi Adeleke; Steinbach, Silke; Nwaorgu, Onyekwere George Benjamin
2014-09-01
To assess the olfactory threshold of healthy volunteers at the University College Hospital, Ibadan, and to investigate the effect of perfume on their olfactory detection thresholds. A quasi-experimental study of olfactory detection thresholds in healthy volunteers from September 2013 to November 2013. Tertiary health institution. A structured questionnaire was administered to the participants in order to obtain information on sociodemographics, occupation, ability to perceive smell, use of perfume, effects of perfume on appetite and self-confidence, history of allergy, and previous nasal surgery. Participants subjectively rated their olfactory performance. Subsequently, they had olfactory detection threshold testing done at baseline and after exposure to perfume, with varied concentrations of n-butanol in a forced triple response and staircase fashion. Healthy volunteers, 37 males and 63 females, were evaluated. Their ages ranged from 19 to 59 years with a mean of 31 ± 8 years. Subjectively, 94% of the participants had excellent olfactory function. In the pre-exposure forced triple response, 88% were able to detect the odor at ≤.25 mmol/l concentration, while in the post-exposure forced triple response, only 66% were able to detect the odor at ≤.25 mmol/l concentration. There was also a statistically significant difference in the olfactory detection threshold score between the pre-exposure and post-exposure periods in the participants (P < .05). Use of strong fragrances affects the olfactory detection threshold. Therefore, patients and clinicians should be aware of this and its effects on the outcome of tests of olfaction. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
P.S. Homann; B.T. Bormann; J.R. Boyle; R.L. Darbyshire; R. Bigley
2008-01-01
Detecting changes in forest soil C and N is vital to the study of global budgets and long-term ecosystem productivity. Identifying differences among land-use practices may guide future management. Our objective was to determine the relation of minimum detectable changes (MDCs) and minimum detectable differences between treatments (MDDs) to soil C and N variability at...
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
Oby, Emily R; Perel, Sagi; Sadtler, Patrick T; Ruff, Douglas A; Mischel, Jessica L; Montez, David F; Cohen, Marlene R; Batista, Aaron P; Chase, Steven M
2018-01-01
Objective A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain–computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue. PMID:27097901
Oby, Emily R; Perel, Sagi; Sadtler, Patrick T; Ruff, Douglas A; Mischel, Jessica L; Montez, David F; Cohen, Marlene R; Batista, Aaron P; Chase, Steven M
2016-06-01
A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
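The threshold-sweep procedure described in these entries can be sketched on synthetic data as below; the voltage model, the RMS-multiple thresholds and the plug-in mutual-information estimator are assumptions, not the study's pipeline.

```python
# Minimal sketch: sweep a voltage detection threshold and quantify the information
# that threshold-crossing counts carry about a stimulus label.
import numpy as np

def mutual_information(counts, labels, n_bins=8):
    """Plug-in MI (bits) between binned crossing counts and stimulus labels."""
    c_bins = np.digitize(counts, np.histogram_bin_edges(counts, bins=n_bins))
    joint = np.zeros((n_bins + 2, int(labels.max()) + 1))
    for c, s in zip(c_bins, labels):
        joint[c, s] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
labels = rng.integers(0, 4, size=400)                       # 4 stimulus conditions
# Trials with higher labels contain more sparse negative "spike" deflections:
voltages = rng.normal(0, 1, size=(400, 3000)) - 0.8 * labels[:, None] * (rng.random((400, 3000)) < 0.01)

for k in (2.5, 3.5, 4.5):                                   # threshold = -k * RMS
    thresh = -k * voltages.std()
    crossings = (voltages < thresh).sum(axis=1)             # crude crossing count per trial
    print(k, round(mutual_information(crossings, labels), 3))
```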
Navy Pier roundabout, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 2,300 cpm to a maximum of 4,300 cpm unshielded.
305 E. Erie Street, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,900 cpm to a maximum of 3,900 cpm unshielded.
215 E. Grand Ave, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,300 cpm to a maximum of 3,500 cpm shielded.
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and filtering based on the Euclidean distance method are adopted to pre-process the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain a set of threshold values, the average of all calculated thresholds is taken, half of this average is used as the high threshold, and half of the high threshold as the low threshold. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
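The threshold-derivation scheme described above can be sketched as follows; the Sobel gradient (in place of Frei-Chen), the 64x64 blocks and the use of plain hysteresis thresholding without non-maximum suppression are assumptions made for brevity.

```python
# Minimal sketch: median-filter the image, compute a gradient magnitude, run Otsu
# on local blocks of the gradient, derive high/low thresholds from the averaged
# block thresholds, and apply hysteresis thresholding.
import numpy as np
from scipy import ndimage
from skimage import data, filters

img = data.camera().astype(float)
smoothed = ndimage.median_filter(img, size=3)

gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
grad = np.hypot(gx, gy)

# Otsu threshold on each 64x64 block of the gradient image, then average.
block = 64
thresholds = [filters.threshold_otsu(grad[i:i + block, j:j + block])
              for i in range(0, grad.shape[0], block)
              for j in range(0, grad.shape[1], block)]
high = 0.5 * np.mean(thresholds)
low = 0.5 * high

edges = filters.apply_hysteresis_threshold(grad, low, high)
print(int(edges.sum()), "edge pixels; low/high =", round(low, 1), round(high, 1))
```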
A novel gene network inference algorithm using predictive minimum description length approach.
Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang
2010-05-28
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporated mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and the developed algorithm improves the precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value has been identified. Finally, it was observed that the performance of the algorithms saturates at a certain threshold of data size.
NASA Astrophysics Data System (ADS)
Bellingeri, Michele; Agliari, Elena; Cassi, Davide
2015-10-01
The best strategy to immunize a complex network is usually evaluated in terms of the percolation threshold, i.e. the number of vaccine doses which make the largest connected cluster (LCC) vanish. The strategy inducing the minimum percolation threshold represents the optimal way to immunize the network. Here we show that the efficacy of the immunization strategies can change during the immunization process. This means that, if the number of doses is limited, the best strategy is not necessarily the one leading to the smallest percolation threshold. This outcome should warn about the adoption of global measures in order to evaluate the best immunization strategy.
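The percolation-threshold bookkeeping described above can be sketched with networkx; the Barabasi-Albert graph, the static degree-based strategy and the 1% LCC criterion are assumptions.

```python
# Minimal sketch: track the largest connected cluster (LCC) while immunizing
# (removing) nodes in decreasing-degree order, and read off the dose count at
# which the LCC falls below a small fraction of the network.
import networkx as nx

g = nx.barabasi_albert_graph(2000, 3, seed=0)
order = sorted(g.nodes, key=g.degree, reverse=True)     # degree-based strategy

doses = 0
for node in order:
    if len(max(nx.connected_components(g), default=set(), key=len)) <= 0.01 * 2000:
        break
    g.remove_node(node)
    doses += 1

print("doses needed to dismantle the LCC:", doses)
```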
The Threshold Shortest Path Interdiction Problem for Critical Infrastructure Resilience Analysis
2017-09-01
being pushed over the minimum designated threshold. 1.4 Motivation A simple setting to motivate this research is the “30 minutes or it’s free” guarantee... parallel network structure in Fig. 4.4 is simple in design, yet shows a relatively high resilience when compared to the other networks in general. The high... United States Naval Academy, 2002 Submitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN OPERATIONS RESEARCH
Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging
NASA Technical Reports Server (NTRS)
Olczak, Eugene G (Inventor)
2011-01-01
An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the usual weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength within a 3*3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module. The fusion quality is closely related to the threshold setting of the decision-making module. Instead of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
Simple algorithms for digital pulse-shape discrimination with liquid scintillation detectors
NASA Astrophysics Data System (ADS)
Alharbi, T.
2015-01-01
The development of compact, battery-powered digital liquid scintillation neutron detection systems for field applications requires digital pulse processing (DPP) algorithms with minimum computational overhead. To meet this demand, two DPP algorithms for the discrimination of neutron and γ-rays with liquid scintillation detectors were developed and examined by using a NE213 liquid scintillation detector in a mixed radiation field. The first algorithm is based on the relation between the amplitude of a current pulse at the output of a photomultiplier tube and the amount of charge contained in the pulse. A figure-of-merit (FOM) value of 0.98 with 450 keVee (electron equivalent energy) energy threshold was achieved with this method when pulses were sampled at 250 MSample/s and with 8-bit resolution. Compared to the similar method of charge-comparison this method requires only a single integration window, thereby reducing the amount of computations by approximately 40%. The second approach is a digital version of the trailing-edge constant-fraction discrimination method. A FOM value of 0.84 with an energy threshold of 450 keVee was achieved with this method. In comparison with the similar method of rise-time discrimination this method requires a single time pick-off, thereby reducing the amount of computations by approximately 50%. The algorithms described in this work are useful for developing portable detection systems for applications such as homeland security, radiation dosimetry and environmental monitoring.
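The amplitude-to-charge discrimination idea of the first algorithm can be sketched on synthetic pulses; the decay constants, sampling rate and noise level below are assumptions, not NE213 data.

```python
# Minimal sketch: the ratio of peak amplitude to integrated charge is larger for
# gamma-like pulses (little slow scintillation light) than for neutron-like pulses.
import numpy as np

def synth_pulse(slow_fraction, n=256, fs=250e6, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(n) / fs
    fast = np.exp(-t / 7e-9)       # fast decay component (assumed time constant)
    slow = np.exp(-t / 120e-9)     # slow decay component (assumed time constant)
    pulse = (1 - slow_fraction) * fast + slow_fraction * slow
    return pulse + rng.normal(0, 0.01, n)

def psd_parameter(pulse):
    """Peak amplitude divided by total charge (sum of samples)."""
    return pulse.max() / pulse.sum()

gamma = synth_pulse(slow_fraction=0.05)   # gamma-like pulse
neutron = synth_pulse(slow_fraction=0.35) # neutron-like pulse
print(round(psd_parameter(gamma), 4), round(psd_parameter(neutron), 4))
```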
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
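The power calculation described above can be approximated with a generic parametric bootstrap such as the Python sketch below; the gamma parameterization from a coefficient of variation and the one-sided t-test are stand-ins for the study's empirical mean-variance relationship and decision rule.

```python
import numpy as np
from scipy import stats

def bootstrap_power(threshold, true_mean, n_fish, cv=0.3,
                    alpha=0.05, n_sim=5000, rng=None):
    """Probability of detecting a true mean above `threshold` with a
    one-sided one-sample t-test, for gamma-distributed Se concentrations."""
    rng = np.random.default_rng(rng)
    sd = cv * true_mean
    shape = (true_mean / sd) ** 2          # gamma shape from mean and sd
    scale = sd ** 2 / true_mean            # gamma scale
    hits = 0
    for _ in range(n_sim):
        sample = rng.gamma(shape, scale, size=n_fish)
        t, p = stats.ttest_1samp(sample, popmean=threshold)
        if t > 0 and p / 2 < alpha:        # one-sided: mean > threshold
            hits += 1
    return hits / n_sim

# e.g. bootstrap_power(threshold=4.0, true_mean=5.0, n_fish=8)
```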
Liu, Fangming; Zhang, Honglian; Wu, Zhenhua; Dong, Haidao; Zhou, Lin; Yang, Dawei; Ge, Yuqing; Jia, Chunping; Liu, Huiying; Jin, Qinghui; Zhao, Jianlong; Zhang, Qiqing; Mao, Hongju
2016-12-01
Carcinoembryonic antigen (CEA) is an important biomarker in cancer diagnosis. Here, we present an efficient, selective lateral-flow immunoassay (LFIA) based on magnetic nanoparticles (MNPs) for in situ, sensitive, and accurate point-of-care detection of CEA. The signal amplification mechanism involved linking detection MNPs with signal MNPs through biotin-modified single-stranded DNA (ssDNA) and streptavidin. To verify the effectiveness of this modified LFIA system, its sensitivity and specificity were evaluated. Sensitivity evaluation showed a broad detection range of 0.25-1000 ng/ml for CEA protein, and the limit of detection (LOD) of the modified LFIA was 0.25 ng/ml, a significant improvement in the detection limit compared with the traditional LFIA. The modified LFIA could selectively recognize CEA in the presence of several interfering proteins. In addition, this newly developed assay was applied to the quantitative detection of CEA in human serum specimens collected from 10 randomly selected patients. The modified LFIA system detected CEA concentrations in serum samples down to 0.27 ng/ml. The results were consistent with the clinical data obtained using a commercial electrochemiluminescence immunoassay (ECLIA) (p<0.01). In conclusion, the MNP-based LFIA system not only demonstrated an enhanced signal-to-noise ratio but also detected CEA with higher sensitivity and selectivity, and thus has great potential for commercial application as a sensitive tumor-marker screening system. Copyright © 2016 Elsevier B.V. All rights reserved.
Hammond, Edward R.; Crum, Rosa M.; Treisman, Glenn J.; Mehta, Shruti H.; Clifford, David B.; Ellis, Ronald J.; Gelman, Benjamin B.; Grant, Igor; Letendre, Scott L.; Marra, Christina M; Morgello, Susan; Simpson, David M.; McArthur, Justin C.
2016-01-01
Major depressive disorder is the most common neuropsychiatric complication of human immunodeficiency virus (HIV) infection and is associated with worse clinical outcomes. We determined whether detectable cerebrospinal fluid (CSF) HIV ribonucleic acid (RNA) at a threshold of ≥50 copies/ml is associated with increased risk of depression. The CNS HIV Anti-Retroviral Therapy Effects Research (CHARTER) cohort is a six-center US-based prospective cohort with bi-annual follow-up of 674 participants. We fit linear mixed models (N=233) and discrete-time survival models (N=154; 832 observations) to evaluate trajectories of Beck Depression Inventory (BDI) II scores and the incidence of new-onset moderate-to-severe depressive symptoms (BDI≥17) among participants on combination antiretroviral therapy (cART) who were free of depression at study entry and received a minimum of three CSF examinations over 2,496 person-months of follow-up. Detectable CSF HIV RNA (threshold ≥50 copies/ml) at any visit was associated with a 4.7-fold increase in new-onset depression at subsequent visits, adjusted for plasma HIV RNA and treatment adherence; hazard ratio (HR)=4.76 (95% CI: 1.58–14.3); P=0.006. Depression (BDI) scores were 2.53 points higher (95% CI: 0.47–4.60; P=0.02) over 6 months if CSF HIV RNA was detectable at a prior study visit in fully adjusted models including age, sex, race, education, plasma HIV RNA, duration of and adherence to cART, and lifetime depression diagnosis by DSM-IV criteria. Persistent CSF, but not plasma, HIV RNA is associated with an increased risk of new-onset depression. Further research evaluating the role of immune activation and inflammatory markers may improve our understanding of this association. PMID:26727907
Almoqbel, Fahad M; Irving, Elizabeth L; Leat, Susan J
2017-08-01
The purpose of this study was to investigate the development of visual acuity (VA) and contrast sensitivity in children as measured with objective (sweep visually evoked potential) and subjective, psychophysical techniques, including signal detection theory (SDT), which attempts to control for differences in criterion or behavior between adults and children. Furthermore, this study examines the possibility of applying SDT methods with children. Visual acuity and contrast thresholds were measured in 12 children 6 to 7 years old, 10 children 8 to 9 years old, 10 children 10 to 12 years old, and 16 adults. For sweep visually evoked potential measurements, spatial frequency was swept from 1 to 40 cpd to measure VA, and contrast of sine-wave gratings (1 or 8 cpd) was swept from 0.33 to 30% to measure contrast thresholds. For psychophysical measurements, VA and contrast thresholds (1 or 8 cpd) were measured using a temporal two-alternative forced-choice staircase procedure and also with a yes-no SDT procedure. Optotype (logMAR [log of the minimum angle of resolution]) VA was also measured. The results of the various procedures were in agreement showing that there are age-related changes in threshold values and logMAR VA after the age of 6 years and that these visual functions do not become adult-like until the age of 8 to 9 years at the earliest. It was also found that children can participate in SDT procedures and do show differences in criterion compared with adults in psychophysical testing. These findings confirm a slightly later development of VA and contrast sensitivity (8 years or older) and indicate the importance of using SDT or forced-choice procedures in any developmental study to attempt to overcome the effect of criterion in children.
301-15 East Illinois St, December 2017, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 800 cpm to a maximum of 2,800 cpm shielded.
335 E. Erie, January 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,900 cpm to a maximum of 10,200 cpm unshielded.
215 E. Grand Ave, December 2015, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,800 cpm to a maximum of 7,200 cpm unshielded.
243 E. Ontario Street - Water, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,400 cpm to a maximum of 2,600 cpm shielded.
224 E Ontario, February 2015, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,200 cpm to a maximum of 7,500 cpm unshielded.
54-63 East Ohio St., Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 2,000 cpm to a maximum of 2,300 cpm shielded.
225 E. Grand Ave, September 2015, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,300 cpm to a maximum of 9,700 cpm unshielded.
243 E. Ontario Street - Sewer, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,100 cpm to a maximum of 3,600 cpm shielded.
226-237 E. Ontario, April 2016, Lindsay Light Radiological Survey
Field gamma measurements did not exceed the field instrument threshold equivalent to the USEPA removal action level and ranged from a minimum of 6,000 cpm to a maximum of approximately 9,000 cpm unshielded.
228 E. Ontario, February 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,400 cpm to a maximum of 10,000 cpm unshielded.
330-334 E. Ontario, April 2016, Lindsay Light Radiological Survey
Field gamma measurements did not exceed the field instrument threshold equivalent to the USEPA removal action level and ranged from a minimum of 5,700 cpm to a maximum of approximately 13,100 cpm unshielded.
Partial photoionization cross sections of NH4 and H3O Rydberg radicals
NASA Astrophysics Data System (ADS)
Velasco, A. M.; Lavín, C.; Martín, I.; Melin, J.; Ortiz, J. V.
2009-07-01
Photoionization cross sections for various Rydberg series that correspond to ionization channels of ammonium and oxonium Rydberg radicals from the outermost, occupied orbitals of their respective ground states are reported. These properties are known to be relevant in photoelectron dynamics studies. For the present calculations, the molecular-adapted quantum defect orbital method has been employed. A Cooper minimum has been found in the 3sa1-kpt2 Rydberg channel of NH4 beyond the ionization threshold, which provides the main contribution to the photoionization of this radical. However, no net minimum is found in the partial cross section of H3O despite the presence of minima in the 3sa1-kpe and 3sa1-kpa1 Rydberg channels. The complete oscillator strength distributions spanning the discrete and continuous regions of both radicals exhibit the expected continuity across the ionization threshold.
Metastable Features of Economic Networks and Responses to Exogenous Shocks
Hosseiny, Ali; Bahrami, Mohammad; Palestrini, Antonio; Gallegati, Mauro
2016-01-01
It is well known that a network structure plays an important role in addressing a collective behavior. In this paper we study a network of firms and corporations for addressing metastable features in an Ising based model. In our model we observe that if in a recession the government imposes a demand shock to stimulate the network, metastable features shape its response. Actually we find that there exists a minimum bound where any demand shock with a size below it is unable to trigger the market out of recession. We then investigate the impact of network characteristics on this minimum bound. We surprisingly observe that in a Watts-Strogatz network, although the minimum bound depends on the average of the degrees, when translated into the language of economics, such a bound is independent of the average degrees. This bound is about 0.44ΔGDP, where ΔGDP is the gap of GDP between recession and expansion. We examine our suggestions for the cases of the United States and the European Union in the recent recession, and compare them with the imposed stimulations. While the stimulation in the US has been above our threshold, in the EU it has been far below our threshold. Beside providing a minimum bound for a successful stimulation, our study on the metastable features suggests that in the time of crisis there is a “golden time passage” in which the minimum bound for successful stimulation can be much lower. Hence, our study strongly suggests stimulations to arise within this time passage. PMID:27706166
Estimation of Rain Intensity Spectra over the Continental US Using Ground Radar-Gauge Measurements
NASA Technical Reports Server (NTRS)
Lin, Xin; Hou, Arthur Y.
2013-01-01
A high-resolution surface rainfall product is used to estimate rain characteristics over the continental US as a function of rain intensity. By treating each datum at 4-km horizontal and 1-h temporal resolution as an individual precipitating/nonprecipitating sample, statistics of rain occurrence and rain volume, including their geographical and seasonal variations, are documented. Quantitative estimates are also made of the impact of light rain events missed because of satellite sensors' detection capabilities. It is found that rain characteristics have large seasonal and geographical variations across the continental US. Although heavy rain events (>10 mm/hr) occupy only 2.6% of total rain occurrence, they may contribute 27% of total rain volume. Light rain events (<1.0 mm/hr), occurring much more frequently (65%) than heavy rain events, can also make important contributions (15%) to the total rain volume. For minimum detectable rain rates set at 0.5 and 0.2 mm/hr, which are close to the sensitivities of current and future space-borne precipitation radars, about 43% and 11% of total rain occurrence falls below these thresholds, representing 7% and 0.8% of total rain volume, respectively. For passive microwave sensors, with rain pixel sizes ranging from 14 to 16 km and minimum detectable rain rates around 1 mm/hr, the missed light rain events may account for 70% of rain occurrence and 16% of rain volume. Statistics of rain characteristics are also examined on domains with different temporal and spatial resolutions. Current issues in estimating rain characteristics from satellite measurements and model outputs are discussed.
Wahl, Patrick; Zwingmann, Lukas; Manunzio, Christian; Wolf, Jacob; Bloch, Wilhelm
2018-05-18
This study evaluated the accuracy of the lactate minimum test, in comparison to a graded-exercise test and established threshold concepts (OBLA and mDmax), to determine running speed at maximal lactate steady state. Eighteen subjects performed a lactate minimum test, a graded-exercise test (2.4 m·s⁻¹ start, +0.4 m·s⁻¹ every 5 min) and 2 or more constant-speed tests of 30 min to determine running speed at maximal lactate steady state. The lactate minimum test consisted of an initial lactate priming segment, followed by a short recovery phase. Afterwards, the initial load of the subsequent incremental segment was individually determined and was increased by 0.1 m·s⁻¹ every 120 s. The lactate minimum was determined by the lowest measured value (LMabs) and by a third-order polynomial (LMpol). The mean difference to maximal lactate steady state was +0.01±0.14 m·s⁻¹ (LMabs), 0.04±0.15 m·s⁻¹ (LMpol), -0.06±0.31 m·s⁻¹ (OBLA) and -0.08±0.21 m·s⁻¹ (mDmax). The intraclass correlation coefficient (ICC) between running velocity at maximal lactate steady state and LMabs was highest (ICC=0.964), followed by LMpol (ICC=0.956), mDmax (ICC=0.916) and OBLA (ICC=0.885). Due to the higher accuracy of the lactate minimum test in determining maximal lactate steady state compared to OBLA and mDmax, we suggest the lactate minimum test as a valid and meaningful concept to estimate running velocity at maximal lactate steady state in a single session for moderately to well-trained athletes. © Georg Thieme Verlag KG Stuttgart · New York.
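A minimal sketch of the LMpol determination, assuming arrays of stage speeds and blood lactate values from the incremental segment: fit a third-order polynomial to lactate versus speed and return the speed at its minimum within the tested range.

```python
import numpy as np

def lactate_minimum_speed(speeds, lactates):
    """Fit a third-order polynomial to lactate vs. running speed and return
    the speed at its minimum within the tested range (LMpol)."""
    poly = np.poly1d(np.polyfit(speeds, lactates, 3))
    crit = poly.deriv().r                      # stationary points
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit >= min(speeds)) & (crit <= max(speeds))]
    candidates = np.concatenate([crit, [min(speeds), max(speeds)]])
    return float(candidates[np.argmin(poly(candidates))])

# e.g. speeds in m/s increasing by 0.1 m/s per stage, lactates in mmol/L
```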
Kuhn, Pierre; Zores, Claire; Pebayle, Thierry; Hoeft, Alain; Langlet, Claire; Escande, Benoît; Astruc, Dominique; Dufour, André
2012-04-01
Very early preterm infants (VPIs) are exposed to unpredictable noise in neonatal intensive care units. Their ability to perceive moderate acoustic environmental changes has not been fully investigated. Physiological values of the 598 isolated sound peaks (SPs) that were 5-10 and 10-15 dB slow-response A (dBA) above background noise levels and that occurred during infants' sleep varied significantly, indicating that VPIs detect them. Exposure to 10-15 dBA SPs during active sleep significantly increased mean heart rate and decreased mean respiratory rate and mean systemic and cerebral oxygen saturations relative to baseline. VPIs are sensitive to changes in their nosocomial acoustic environment, with a minimal signal-to-noise ratio (SNR) threshold of 5-10 dBA. These acoustic changes can alter their well-being. In this observational study, we evaluated their differential auditory sensitivity to sound-pressure level (SPL) increments below 70-75 dBA equivalent continuous level in their incubators. Environmental (SPL and audio recording), physiological, cerebral, and behavioral data were prospectively collected over 10 h in 26 VPIs (GA 28 (26-31) wk). SPs emerging from background noise levels were identified and newborns' arousal states at the time of SPs were determined. Changes in parameters were compared over 5-s periods between baseline and the 40 s following the SPs depending on their SNR thresholds above background noise.
Influenza surveillance in Europe: establishing epidemic thresholds by the Moving Epidemic Method
Vega, Tomás; Lozano, Jose Eugenio; Meerhoff, Tamara; Snacken, René; Mott, Joshua; Ortiz de Lejarazu, Raul; Nunes, Baltazar
2012-01-01
Background Timely influenza surveillance is important to monitor influenza epidemics. Objectives (i) To calculate the epidemic threshold for influenza‐like illness (ILI) and acute respiratory infections (ARI) in 19 countries, as well as the thresholds for different levels of intensity. (ii) To evaluate the performance of these thresholds. Methods The moving epidemic method (MEM) has been developed to determine the baseline influenza activity and an epidemic threshold. False alerts, detection lags and timeliness of the detection of epidemics were calculated. The performance was evaluated using a cross‐validation procedure. Results The overall sensitivity of the MEM threshold was 71·8% and the specificity was 95·5%. The median of the timeliness was 1 week (range: 0–4·5). Conclusions The method produced a robust and specific signal to detect influenza epidemics. The good balance between the sensitivity and specificity of the epidemic threshold to detect seasonal epidemics and avoid false alerts has advantages for public health purposes. This method may serve as standard to define the start of the annual influenza epidemic in countries in Europe. PMID:22897919
Scaling Laws for NanoFET Sensors
NASA Astrophysics Data System (ADS)
Wei, Qi-Huo; Zhou, Fu-Shan
2008-03-01
In this paper, we report our numerical studies of the scaling laws for nanoplate field-effect transistor (FET) sensors by simplifying the nanoplates as random resistor networks. Nanowire/tube FETs are included as the limiting cases where the device width goes small. Computer simulations show that the field effect strength exerted by the binding molecules has significant impact on the scaling behaviors. When the field effect strength is small, nanoFETs have little size and shape dependence. In contrast, when the field-effect strength becomes stronger, there exists a lower detection threshold for charge accumulation FETs and an upper detection threshold for charge depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and large sensitivities. These thresholds may set the detection limits of nanoFET sensors. We propose to eliminate these detection thresholds by employing devices with very short source-drain distance and large width.
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Adams, John B.; Smith, Milton O.
1992-01-01
The conditions that affect the spectral detection of target materials at the subpixel scale are examined. Two levels of spectral mixture analysis for determining threshold detection limits of target materials in a spectral mixture are presented, the cases where the target is detected as: (1) a component of a spectral mixture (continuum threshold analysis) and (2) residuals (residual threshold analysis). The results of these two analyses are compared under various measurement conditions. The examples illustrate the general approach that can be used for evaluating the spectral detectability of terrestrial and planetary targets at the subpixel scale.
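The two detection cases can be illustrated with a hedged linear-unmixing sketch: in continuum threshold analysis the target is included as an endmember and its estimated fraction is thresholded, while in residual threshold analysis the target is omitted and the unmixing residual is thresholded. The threshold values and function names below are illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Nonnegative least-squares fractions and RMS residual for the linear
    mixture pixel ~ endmembers @ fractions; endmembers is (n_bands, n_em)."""
    fractions, _ = nnls(endmembers, pixel)
    rms = np.sqrt(np.mean((pixel - endmembers @ fractions) ** 2))
    return fractions, rms

def continuum_detect(pixel, endmembers_with_target, target_idx, f_min=0.05):
    """Continuum analysis: target modeled as an endmember; detect when its
    estimated fraction exceeds a threshold."""
    fractions, _ = unmix(pixel, endmembers_with_target)
    return fractions[target_idx] > f_min

def residual_detect(pixel, background_endmembers, rms_max=0.01):
    """Residual analysis: target omitted from the model; detect when the
    unmixing residual exceeds a threshold."""
    _, rms = unmix(pixel, background_endmembers)
    return rms > rms_max
```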
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
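The underlying SDT quantities are standard: in a Yes-No task, d' = z(hit rate) − z(false-alarm rate), and the sensitivity threshold is the stimulus intensity at which d' reaches 1. The sketch below shows only this computation; it is not the qYN/qFC adaptive procedure itself, which additionally models and updates decision criteria trial by trial.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """Yes-No sensitivity: d' = z(H) - z(F), with rates clipped away from 0/1."""
    h = np.clip(hit_rate, eps, 1 - eps)
    f = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

def sensitivity_threshold(intensities, d_primes, target=1.0):
    """Interpolate the intensity (e.g., in log units) at which the measured
    d' function crosses the target level; assumes d_primes is increasing."""
    return float(np.interp(target, d_primes, intensities))
```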
NASA Astrophysics Data System (ADS)
Kim, Bong Kyu; Chung, Hwan Seok; Chang, Sun Hyok; Park, Sangjo
We propose and demonstrate a scheme enhancing the performance of optical access networks with Manchester coded downstream and re-modulated NRZ coded upstream. It is achieved by threshold level control of a limiting amplifier at a receiver, and the minimum sensitivity of upstream is significantly improved for the re-modulation scheme with 5 Gb/s Manchester coded downstream and 2.488 Gb/s NRZ upstream data rates.
Butterworth, Alice S; Robertson, Alan J; Ho, Mei-Fong; Gatton, Michelle L; McCarthy, James S; Trenholme, Katharine R
2011-04-18
Obtaining single parasite clones is required for many techniques in malaria research. Cloning by limiting dilution using microscopy-based assessment for parasite growth is an arduous and labor-intensive process. An alternative method for the detection of parasite growth in limiting dilution assays is using a commercial ELISA histidine-rich protein II (HRP2) detection kit. Detection of parasite growth was undertaken using HRP2 ELISA and compared to thick film microscopy. An HRP2 protein standard was used to determine the detection threshold of the HRP2 ELISA assay, and a HRP2 release model was used to extrapolate the amount of parasite growth required for a positive result. The HRP2 ELISA was more sensitive than microscopy for detecting parasite growth. The minimum level of HRP2 protein detection of the ELISA was 0.11 ng/ml. Modeling of HRP2 release determined that 2,116 parasites are required to complete a full erythrocytic cycle to produce sufficient HRP2 to be detected by the ELISA. Under standard culture conditions this number of parasites is likely to be reached between 8 to 14 days of culture. This method provides an accurate and simple way for the detection of parasite growth in limiting dilution assays, reducing time and resources required in traditional methods. Furthermore the method uses spent culture media instead of the parasite-infected red blood cells, enabling culture to continue. © 2011 Butterworth et al; licensee BioMed Central Ltd.
Image quality of a pixellated GaAs X-ray detector
NASA Astrophysics Data System (ADS)
Sun, G. C.; Makham, S.; Bourgoin, J. C.; Mauger, A.
2007-02-01
X-ray detection requires materials with large atomic numbers Z in order to absorb the radiation efficiently. In the case of X-ray imaging, fluorescence is a limiting factor for the spatial resolution and contrast at energies above the Kα threshold. Since both the energy and yield of the fluorescence of a given material increase with the atomic number, there is an optimum value of Z. GaAs, which can now be epitaxially grown as self-supported thick layers to fulfil the requirements for imaging (good homogeneity of the electronic properties), corresponds to this optimum. Image performances obtained with this material are evaluated in terms of line spread function and modulation transfer function, and a comparison with CsI is made. We evaluate the image contrast obtained for a given object contrast with GaAs and CsI detectors, in the photon energy range of medical applications. Finally, we discuss the minimum object size that can be detected by these detectors under mammography conditions. This demonstrates that an object of a given size can be detected using a GaAs detector with a dose at least 100 times lower than using a CsI detector.
545 N McClurg Ct, February 2015, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,700 cpm to a maximum of 13,500 cpm unshielded.
356 E Grand, July 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,000 cpm to a maximum of 9,600 cpm unshielded.
237 E Ontario, July 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 8,600 cpm to a maximum of 9,500 cpm unshielded.
226-228 E Ontario St, December 2014, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the unshielded instrument threshold previously stated and ranged from a minimum of 6,200 cpm to a maximum of 8,800 cpm unshielded.
441 E Ontario, August 2010, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 5,200 cpm to a maximum of 9,200 cpm.
380 E. North Water St, April 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,800 cpm to a maximum of 12,100 cpm unshielded.
482-490 N. McClurg, March 2017, Lindsay Light Radiological Survey
Field gamma measurements during the excavation process did not exceed the field instrument threshold for the removal action level and ranged from a minimum of 4,000 cpm to a maximum of 12,600 cpm unshielded.
A threshold-based fixed predictor for JPEG-LS image compression
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as an integration of those schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detector in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
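The baseline MED predictor that the proposed scheme extends is the standard JPEG-LS fixed predictor, sketched below. The threshold-based diagonal test in the second function is a hypothetical illustration only, since the abstract does not give the paper's exact thresholds or prediction rule.

```python
def med_predict(a, b, c):
    """Standard JPEG-LS fixed predictor.
    a = left neighbor, b = above neighbor, c = above-left neighbor."""
    if c >= max(a, b):
        return min(a, b)        # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth region: planar prediction

def thresholded_predict(a, b, c, t_diag=8):
    """Hypothetical threshold-based variant: if |a - b| is large while c lies
    between a and b, treat the neighborhood as a diagonal edge and predict
    along the diagonal instead of using the planar term."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    if abs(a - b) > t_diag:
        return (a + b) // 2     # crude diagonal-edge prediction
    return a + b - c
```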
Microdisk Injection Lasers for the 1.27-μm Spectral Range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kryzhanovskaya, N. V.; Maximov, M. V.; Blokhin, S. A.
2016-03-15
Microdisk injection lasers on GaAs substrates, with a minimum diameter of 15 μm and an active region based on InAs/InGaAs quantum dots, are fabricated. The lasers operate in the continuous-wave mode at room temperature without external cooling. The lasing wavelength is around 1.27 μm at a minimum threshold current of 1.6 mA. The specific thermal resistance is estimated to be 5 × 10⁻³ °C·cm²/W.
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode worn around the chest for real-time ECG measurement is produced in order to minimize the inconvenience of wearing. The ECG signal is detected using a biopotential instrumentation system and transmitted to a personal computer via an ultra-low-power wireless data communication unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and R-peak detection is especially important. R-peak detection generally uses a fixed threshold value, which produces detection errors when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage consisting of differentiation and the Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than the fixed-threshold method. R-peak detection using the MIT-BIH databases and long-term real-time ECG recordings is performed in this research in order to evaluate the performance of the method.
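A hedged sketch of the described pipeline (differentiation, Hilbert-transform envelope, and a threshold that tracks the local signal level) is given below; the scaling factor, window length, and refractory period are placeholder values, not those of the paper.

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, k=0.6, win_s=2.0, refractory_s=0.25):
    """R-peak detection with a variable threshold: the threshold is a
    fraction k of the moving maximum of the Hilbert envelope of the
    differentiated ECG, so it adapts to baseline and amplitude changes."""
    d = np.diff(np.asarray(ecg, dtype=float), prepend=ecg[0])  # differentiation
    env = np.abs(hilbert(d))                                   # envelope
    win = max(1, int(win_s * fs))
    thr = np.array([k * env[max(0, i - win):i + 1].max()       # moving max
                    for i in range(len(env))])                 # (simple, O(n*win))
    peaks, last = [], -np.inf
    for i in range(1, len(env) - 1):
        if env[i] > thr[i] and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if i - last > refractory_s * fs:                   # refractory period
                peaks.append(i)
                last = i
    return np.array(peaks)
```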
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
How Many Is Enough?—Statistical Principles for Lexicostatistics
Zhang, Menghan; Gong, Tao
2016-01-01
Lexicostatistics has been applied in linguistics to inform phylogenetic relations among languages. There are two important yet not well-studied parameters in this approach: the conventional size of vocabulary list to collect potentially true cognates and the minimum matching instances required to confirm a recurrent sound correspondence. Here, we derive two statistical principles from stochastic theorems to quantify these parameters. These principles validate the practice of using the Swadesh 100- and 200-word lists to indicate degree of relatedness between languages, and enable a frequency-based, dynamic threshold to detect recurrent sound correspondences. Using statistical tests, we further evaluate the generality of the Swadesh 100-word list compared to the Swadesh 200-word list and other 100-word lists sampled randomly from the Swadesh 200-word list. All these provide mathematical support for applying lexicostatistics in historical and comparative linguistics. PMID:28018261
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akparov, V V; Dmitriev, Valentin G; Duraev, V P
A semiconductor ring laser (SRL) with a radiation wavelength of 1540 nm and a fibre ring cavity is developed and studied in several main lasing regimes. An SRL design based on a semiconductor optical travelling-wave amplifier and a ring cavity, composed of a single-mode polarisation-maintaining fibre, is considered. The SRL is studied in the regime of a rotation speed sensor, in which the frequency shift of counterpropagating waves in the SRL is proportional to its rotation speed. The minimum rotation speed that can be detected using the SRL under consideration depends on the cavity length; in our experiment it turned out to be 1° s⁻¹. The changes in the threshold current, emission spectrum, and fundamental radiation wavelength upon closing and opening the SRL ring cavity and with a change in its radius are also investigated.
NASA Technical Reports Server (NTRS)
Musick, H. Brad; Truman, C. Randall; Trujillo, Steven M.
1992-01-01
Wind erosion in semi-arid regions is a significant problem for which the sheltering effect of rangeland vegetation is poorly understood. Individual plants may be considered as porous roughness elements which absorb or redistribute the wind's momentum. The saltation threshold is the minimum wind velocity at which soil movement begins. The dependence of the saltation threshold on geometrical parameters of a uniform roughness array was studied in a wind tunnel. Both solid and porous elements were used to determine relationships between canopy structure and the threshold velocity for soil transport. The development of a predictive relation for the influence of vegetation canopy structure on wind erosion of soil is discussed.
USDA-ARS?s Scientific Manuscript database
Thresholding is an important step in the segmentation of image features, and the existing methods are not all effective when the image histogram exhibits a unimodal pattern, which is common in defect detection of fruit. This study was aimed at developing a general automatic thresholding methodology ...
Adaptive local thresholding for robust nucleus segmentation utilizing shape priors
NASA Astrophysics Data System (ADS)
Wang, Xiuzhong; Srinivas, Chukka
2016-03-01
This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
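The block-wise thresholding and interpolation steps can be sketched as below. As a simplification, per-block Otsu thresholds stand in for the saliency-weighted foreground/background histograms (which require the tensor-voting stage), and a bright foreground is assumed; block size and names are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.filters import threshold_otsu

def local_threshold_map(image, block=64):
    """One threshold per block (Otsu as a stand-in for the histogram-based
    classification-error criterion), interpolated to a per-pixel map."""
    h, w = image.shape
    nby, nbx = int(np.ceil(h / block)), int(np.ceil(w / block))
    t = np.empty((nby, nbx))
    for by in range(nby):
        for bx in range(nbx):
            tile = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
            t[by, bx] = threshold_otsu(tile) if tile.max() > tile.min() else tile.mean()
    # bilinear interpolation of block thresholds up to the image size
    return zoom(t, (h / nby, w / nbx), order=1)[:h, :w]

def segment_foreground(image, block=64):
    return image > local_threshold_map(image, block)
```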
NASA Astrophysics Data System (ADS)
Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina
2014-05-01
Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When such a station is not present on the glacier surface, assessing actual melting conditions and evaluating melt amounts is difficult, and degree-day (also named T-index) models are applied. These approaches require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface, progressively reducing snow thickness. In this study, to detect the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy) and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt; Senese et al., 2012) and the snow water equivalent values during this time frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from the degree-day method (MT-INDEX, in this latter case applying the mean tropospheric lapse rate to the temperature data acquired at Bormio and varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions, and that the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold. To assess the most suitable threshold, we first analyzed hourly MEB values to detect whether ablation occurs and for how long (number of hours per day). The largest part of the melting (97.7%) occurred on days featuring at least 6 melting hours, suggesting that their minimum average daily temperature be considered a suitable threshold (268.1 K). We then ran a simple T-index model applying different threshold values. The threshold that best reproduces snow melt is 268.1 K. In summary, using a threshold 5.0 K lower than the widely applied 273.15 K permits the best reconstruction of glacier melt, in agreement with the findings of van den Broeke et al. (2010) for the Greenland ice sheet. The choice of a 268 K threshold for computing degree-day amounts could therefore probably be generalized and applied not only to Greenland glaciers but also to mid-latitude and Alpine ones. This work was carried out under the umbrella of the SHARE Stelvio Project funded by the Lombardy Region and managed by FLA and the EvK2-CNR Committee.
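As a rough illustration of the T-index approach discussed above (not the authors' code), the Python sketch below computes daily melt as a degree-day factor times the positive difference between air temperature and the chosen threshold, with the valley-station record shifted to the AWS elevation by a mean lapse rate; the degree-day factor and lapse-rate values are placeholder assumptions.

```python
import numpy as np

def t_index_melt(t_daily_k, ddf=4.5, t_threshold=268.1):
    """Daily snow melt (mm w.e.) from a degree-day model:
    melt = DDF * max(T - T_threshold, 0); DDF in mm w.e. K^-1 day^-1."""
    t = np.asarray(t_daily_k, dtype=float)
    return ddf * np.clip(t - t_threshold, 0.0, None)

def extrapolate_to_glacier(t_station_k, dz_m, lapse_rate=-0.0065):
    """Shift station temperatures to the AWS elevation using a mean
    tropospheric lapse rate (K per m); dz_m = z_glacier - z_station."""
    return np.asarray(t_station_k, dtype=float) + lapse_rate * dz_m
```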
Corneal Mechanical Thresholds Negatively Associate With Dry Eye and Ocular Pain Symptoms.
Spierer, Oriel; Felix, Elizabeth R; McClellan, Allison L; Parel, Jean Marie; Gonzalez, Alex; Feuer, William J; Sarantopoulos, Constantine D; Levitt, Roy C; Ehrmann, Klaus; Galor, Anat
2016-02-01
To examine associations between corneal mechanical thresholds and metrics of dry eye. This was a cross-sectional study of individuals seen in the Miami Veterans Affairs eye clinic. The evaluation consisted of questionnaires regarding dry eye symptoms and ocular pain, corneal mechanical detection and pain thresholds, and a comprehensive ocular surface examination. The main outcome measures were correlations between corneal thresholds and signs and symptoms of dry eye and ocular pain. A total of 129 subjects participated in the study (mean age 64 ± 10 years). Mechanical detection and pain thresholds on the cornea correlated with age (Spearman's ρ = 0.26, 0.23, respectively; both P < 0.05), implying decreased corneal sensitivity with age. Dry eye symptom severity scores and Neuropathic Pain Symptom Inventory (modified for the eye) scores negatively correlated with corneal detection and pain thresholds (range, r = -0.13 to -0.27, P < 0.05 for values between -0.18 and -0.27), suggesting increased corneal sensitivity in those with more severe ocular complaints. Ocular signs, on the other hand, correlated poorly and nonsignificantly with mechanical detection and pain thresholds on the cornea. A multivariable linear regression model found that both posttraumatic stress disorder (PTSD) score (β = 0.21, SE = 0.03) and corneal pain threshold (β = -0.03, SE = 0.01) were significantly associated with self-reported evoked eye pain (pain to wind, light, temperature) and explained approximately 32% of measurement variability (R = 0.57). Mechanical detection and pain thresholds measured on the cornea are correlated with dry eye symptoms and ocular pain. This suggests hypersensitivity within the corneal somatosensory pathways in patients with greater dry eye and ocular pain complaints.
A novel method for surface defect inspection of optic cable with short-wave infrared illuminance
NASA Astrophysics Data System (ADS)
Chen, Xiaohong; Liu, Ning; You, Bo; Xiao, Bin
2016-07-01
Intelligent on-line detection of cable quality is a crucial issue in optic cable factories, and defects on the surface of an optic cable can dramatically lower its grade. Manual inspection of optic cable quality cannot keep up with the development of the optic cable industry because of its low detection efficiency and large labor cost. Real-time automated inspection is therefore highly demanded by industry to replace the subjective and repetitive process of manual inspection, and automatic cable defect inspection has become a trend. In this paper, a novel method for surface defect inspection of optic cable under short-wave infrared illumination is presented. Short-wave infrared not only provides illumination compensation in weak-illumination environments, but also avoids the overexposure problems of visible-light illumination, which degrade the accuracy of the inspection algorithm. A series of image processing algorithms is set up to analyze the cable images and verify the speed and accuracy of the detection method. Unlike some existing detection algorithms, which actively search for the characteristics of defects, the proposed method passively removes the non-defective areas of the image during processing, which greatly reduces computation. The OTSU algorithm is used to convert the gray image to a binary image. Furthermore, a threshold window is designed to eliminate false defects, with the threshold representing the minimum considered defect size ε. In addition, a new regional suppression method is proposed to deal with the edge burrs of the cable, which outperforms the morphological open-close operation in boundary processing. Experimental results on 10,000 samples show that the miss-detection and false-detection rates are 2.35% and 0.78%, respectively, when ε equals 0.5 mm, and the average processing time per frame is 2.39 ms. These improvements demonstrate the suitability of the inspection method for optic cable.
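The thresholding and size-filtering steps can be illustrated with the short Python sketch below; it is a simplified stand-in (Otsu binarization plus a connected-component filter at the minimum defect size ε), not the authors' pipeline, and the bright-defect assumption and mm-per-pixel parameter are hypothetical.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def defect_candidates(gray, mm_per_pixel, eps_mm=0.5):
    """Binarize with Otsu and keep only connected components whose
    equivalent diameter is at least eps_mm (the minimum defect size)."""
    binary = gray > threshold_otsu(gray)          # assumes bright defects
    labels = label(binary)
    min_px = eps_mm / mm_per_pixel
    keep = [r for r in regionprops(labels) if r.equivalent_diameter >= min_px]
    mask = np.isin(labels, [r.label for r in keep])
    return mask, keep
```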
Mori, Shuji; Oyama, Kazuki; Kikuchi, Yousuke; Mitsudo, Takako; Hirose, Nobuyuki
2015-01-01
The objective of this study was to examine the hypothesis that between-channel gap detection, which includes between-frequency and between-ear gap detection, and perception of stop consonants, which is mediated by the length of voice-onset time (VOT), share common mechanisms, namely relative-timing operation in monitoring separate perceptual channels. The authors measured gap detection thresholds and identification functions of /ba/ and /pa/ along VOT in 49 native young adult Japanese listeners. There were three gap detection tasks. In the between-frequency task, the leading and trailing markers differed in terms of center frequency (Fc). The leading marker was a broadband noise of 10 to 20,000 Hz. The trailing marker was a 0.5-octave band-passed noise of 1000-, 2000-, 4000-, or 8000-Hz Fc. In the between-ear task, the two markers were spectrally identical but presented to separate ears. In the within-frequency task, the two spectrally identical markers were presented to the same ear. The /ba/-/pa/ identification functions were obtained in a task in which the listeners were presented synthesized speech stimuli of varying VOTs from 10 to 46 msec and asked to identify them as /ba/ or /pa/. The between-ear gap thresholds were significantly positively correlated with the between-frequency gap thresholds (except those obtained with the trailing marker of 4000-Hz Fc). The between-ear gap thresholds were not significantly correlated with the within-frequency gap thresholds, which were significantly correlated with all the between-frequency gap thresholds. The VOT boundaries and slopes of /ba/-/pa/ identification functions were not significantly correlated with any of these gap thresholds. There was a close relation between the between-ear and between-frequency gap detection, supporting the view that these two types of gap detection share common mechanisms of between-channel gap detection. However, there was no evidence for a relation between the perception of stop consonants and the between-frequency/ear gap detection in native Japanese speakers.
2013-01-01
Background Intraoperative detection of 18F-FDG-avid tissue sites during 18F-FDG-directed surgery can be very challenging when utilizing gamma detection probes that rely on a fixed target-to-background (T/B) ratio (ratiometric threshold) for determination of probe positivity. The purpose of our study was to evaluate the counting efficiency and the success rate of in situ intraoperative detection of 18F-FDG-avid tissue sites (using the three-sigma statistical threshold criteria method and the ratiometric threshold criteria method) for three different gamma detection probe systems. Methods Of 58 patients undergoing 18F-FDG-directed surgery for known or suspected malignancy using gamma detection probes, we identified nine 18F-FDG-avid tissue sites (from amongst seven patients) that were seen on same-day preoperative diagnostic PET/CT imaging, and for which each 18F-FDG-avid tissue site underwent attempted in situ intraoperative detection concurrently using three gamma detection probe systems (the K-alpha probe and two commercially available PET-probe systems) and was then surgically excised. Results The mean relative probe counting efficiency ratio was 6.9 (± 4.4, range 2.2–15.4) for the K-alpha probe, as compared to 1.5 (± 0.3, range 1.0–2.1) and 1.0 (± 0, range 1.0–1.0), respectively, for the two commercially available PET-probe systems (P < 0.001). Successful in situ intraoperative detection of 18F-FDG-avid tissue sites was more frequently accomplished with each of the three gamma detection probes tested by using the three-sigma statistical threshold criteria method than by using the ratiometric threshold criteria method, with the three-sigma statistical threshold criteria method being significantly better than the ratiometric threshold criteria method for determining probe positivity for the K-alpha probe (P = 0.05). Conclusions Our results suggest that the improved probe counting efficiency of the K-alpha probe design used in conjunction with the three-sigma statistical threshold criteria method can allow for improved detection of 18F-FDG-avid tissue sites when a low in situ T/B ratio is encountered. PMID:23496877
Odor Detection Thresholds in a Population of Older Adults
Schubert, Carla R.; Fischer, Mary E.; Pinto, A. Alex; Klein, Barbara E.K.; Klein, Ronald; Cruickshanks, Karen J.
2016-01-01
OBJECTIVE To measure odor detection thresholds and associated nasal and behavioral factors in an older adult population. STUDY DESIGN Cross-sectional cohort study METHODS Odor detection thresholds were obtained using an automated olfactometer on 832 participants, aged 68–99 (mean age 77) years in the 21-year (2013–2016) follow-up visit of the Epidemiology of Hearing Loss Study. RESULTS The mean odor detection threshold (ODT) score was 8.2 (range: 1–13; standard deviation = 2.54), corresponding to a n-butanol concentration of slightly less than 0.03%. Older participants were significantly more likely to have lower (worse) ODT scores than younger participants (p<0.001). There were no significant differences in mean ODT scores between men and women. Older age was significantly associated with worse performance in multivariable regression models and exercising at least once a week was associated with a reduced odds of having a low (≤5) ODT score. Cognitive impairment was also associated with poor performance while a history of allergies or a deviated septum were associated with better performance. CONCLUSION Odor detection threshold scores were worse in older age groups but similar between men and women in this large population of older adults. Regular exercise was associated with better odor detection thresholds adding to the evidence that decline in olfactory function with age may be partly preventable. PMID:28000220
Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.
2013-01-01
The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former the thresholds varied as a function of criteria: When investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, an IQ of around 85 IQ points was found to form the threshold for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained evidence that once the intelligence threshold is met, personality factors become more predictive for creativity. On the contrary, no threshold was found for creative achievement, i.e. creative achievement benefits from higher intelligence even at fairly high levels of intellectual ability. PMID:23825884
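For readers unfamiliar with segmented regression, the hedged Python sketch below illustrates the basic breakpoint-detection idea: a grid search over candidate breakpoints for a continuous two-segment linear model, keeping the breakpoint with the lowest residual sum of squares. The iterative algorithms used in the study would refine such an estimate, and the variable names are illustrative only.

```python
import numpy as np

def fit_breakpoint(x, y, grid=None):
    """Grid-search a single breakpoint c for the continuous two-segment model
    y = b0 + b1*x + b2*max(x - c, 0); return (c, coefficients, SSE)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 81))
    best = None
    for c in grid:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - c, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((y - X @ beta) ** 2).sum())
        if best is None or sse < best[2]:
            best = (float(c), beta, sse)
    return best

# e.g. x = IQ scores, y = number of original ideas; the estimated c is the
# empirical threshold at which the slope of the relationship changes.
```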
Thermal sensitivity and cardiovascular reactivity to stress in healthy males.
Conde-Guzón, Pablo Antonio; Bartolomé-Albistegui, María Teresa; Quirós, Pilar; Cabestrero, Raúl
2011-11-01
This paper examines the association of cardiovascular reactivity with thermal thresholds (detection and unpleasantness). Heart period (HP) and systolic (SBP) and diastolic (DBP) blood pressure of 42 healthy young males were recorded during a cardiovascular reactivity task (a videogame based upon Sidman's avoidance paradigm). Thermal sensitivity, assessing detection and unpleasantness thresholds with radiant heat applied to the forearm, was also estimated for participants. Participants with differential scores in the cardiovascular variables from baseline to task ≥ P65 were considered reactors, and those with differential scores ≤ P35 were considered non-reactors. Significant differences were observed between groups in the unpleasantness thresholds for blood pressure (BP) but not for HP: reactors exhibited significantly higher unpleasantness thresholds than non-reactors. No significant differences in detection thresholds were obtained between groups.
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed from a descriptive analysis of past heat waves and from local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted at different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was good agreement between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation proceeds, more research is required to understand whether and when the thresholds should be modified.
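As a rough sketch of how such percentile-based thresholds can be screened, the code below fits a Poisson regression of daily deaths on a temperature-exceedance indicator plus simple trend, season and day-of-week terms, returning the lowest percentile whose exceedance coefficient is significantly positive. It is a simplified stand-in for the generalised additive model used in the study; the column names ('deaths', 'tmax3') and the candidate percentiles are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def excess_mortality_threshold(df, percentiles=(90, 92.5, 95, 97.5, 99), alpha=0.05):
    """Lowest temperature percentile whose exceedance is associated with a
    significant excess in daily deaths. `df` needs a DatetimeIndex and columns
    'deaths' and 'tmax3' (3-day mean of maximum temperature); a Poisson GLM with
    a linear trend, annual harmonics and day-of-week terms stands in for the GAM."""
    t = np.arange(len(df), dtype=float)
    doy = np.asarray(df.index.dayofyear, dtype=float)
    season = np.column_stack([np.sin(2 * np.pi * doy / 365.25),
                              np.cos(2 * np.pi * doy / 365.25)])
    dow = pd.get_dummies(df.index.dayofweek, drop_first=True).to_numpy(dtype=float)
    for p in percentiles:
        hot = (df['tmax3'].to_numpy() >= np.percentile(df['tmax3'], p)).astype(float)
        X = sm.add_constant(np.column_stack([t, season, dow, hot]))
        fit = sm.GLM(df['deaths'].to_numpy(), X, family=sm.families.Poisson()).fit()
        if fit.params[-1] > 0 and fit.pvalues[-1] < alpha:
            return p            # first (lowest) percentile with a significant excess
    return None
```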
NASA Astrophysics Data System (ADS)
Miron, Isidro J.; Montero, Juan Carlos; Criado-Alvarez, Juan José; Linares, Cristina; Díaz, Julio
2012-01-01
Studies on temperature-mortality time trends especially address heat, so that any contribution on the subject of cold is necessarily of interest. This study describes the modification of the lagged effects of cold on mortality in Castile-La Mancha from 1975 to 2003, with the novelty of also approaching this aspect in terms of mortality trigger thresholds. Cross-correlation functions (CCFs) were thus established with 15 lags, after application of ARIMA models to the mortality data and minimum daily temperatures (from November to March), and the results for the periods 1975-1984, 1985-1994 and 1995-2003 were then compared. In addition, daily mortality residuals for the periods 1975-1989 and 1990-2003 were related to minimum temperatures grouped in 2°C intervals, with a cold threshold temperature being obtained in cases where such residuals increased significantly ( p < 0.05) with respect to the mean for the study period. A cold-related mortality trigger threshold of -3°C was obtained for Ciudad Real for the period 1990-2003. The significant number of lags ( p < 0.05) in the CCFs declined every 10 years in Toledo (5-2-0), Cuenca (4-2-0), Albacete (4-3-0) and Ciudad Real (3-2-1). This meant that, while the trend in cold-related mortality trigger thresholds in the region could not be ascertained, it was possible to establish a reduction in the lagged effects of cold on mortality, attributable to the improvement in socio-economic conditions over the study period. Evidence was shown of the effects of cold on mortality, a finding that renders the adoption of preventive measures advisable in any case where intense cold is forecast.
Dependence of cavitation, chemical effect, and mechanical effect thresholds on ultrasonic frequency.
Thanh Nguyen, Tam; Asakura, Yoshiyuki; Koda, Shinobu; Yasuda, Keiji
2017-11-01
Cavitation, chemical effect, and mechanical effect thresholds were investigated over a wide frequency range from 22 to 4880 kHz. Each threshold was measured in terms of the sound pressure at the fundamental frequency. Broadband noise emitted from acoustic cavitation bubbles was detected by a hydrophone to determine the cavitation threshold. Potassium iodide oxidation caused by acoustic cavitation was used to quantify the chemical effect threshold, and ultrasonic erosion of aluminum foil was used to estimate the mechanical effect threshold. The cavitation, chemical effect, and mechanical effect thresholds all increased with increasing frequency. The chemical effect threshold was close to the cavitation threshold at all frequencies. At low frequencies below 98 kHz, the mechanical effect threshold was nearly equal to the cavitation threshold, whereas at high frequencies it was much higher than the cavitation threshold. In addition, the thresholds of the second harmonic and the first ultraharmonic signals were measured to detect bubble occurrence. The threshold of the second harmonic approximated the cavitation threshold below 1000 kHz. On the other hand, the threshold of the first ultraharmonic was higher than the cavitation threshold below 98 kHz and close to the cavitation threshold at high frequencies. Copyright © 2017 Elsevier B.V. All rights reserved.
Universal phase transition in community detectability under a stochastic block model.
Chen, Pin-Yu; Hero, Alfred O
2015-03-01
We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p_1 p_2), where p_i (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
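A small simulation can illustrate the detectability threshold. Assuming networkx is available, the sketch below draws a two-block stochastic block model, applies the leading-eigenvector spectral modularity split, and compares recovery accuracy for an intercommunity probability below and above p* = √(p_1 p_2); the community sizes and probabilities are illustrative.

```python
import numpy as np
import networkx as nx

def spectral_modularity_split(G):
    """Two-way community split from the sign of the leading eigenvector
    of the modularity matrix B = A - k k^T / (2m)."""
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()
    _, vecs = np.linalg.eigh(B)
    return (vecs[:, -1] > 0).astype(int)

def detection_accuracy(n=150, p_in=0.20, p_out=0.05, seed=0):
    """Fraction of nodes correctly assigned in a symmetric two-block SBM."""
    G = nx.stochastic_block_model([n, n], [[p_in, p_out], [p_out, p_in]], seed=seed)
    truth = np.array([0] * n + [1] * n)
    guess = spectral_modularity_split(G)
    # Community labels are arbitrary up to a swap, so take the better matching.
    return max(np.mean(guess == truth), np.mean(guess != truth))

p_star = np.sqrt(0.20 * 0.20)            # derived threshold; here p* = 0.20
print(detection_accuracy(p_out=0.05))    # p below p*: communities recovered
print(detection_accuracy(p_out=0.30))    # p above p*: accuracy near chance (0.5)
```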
450 N. Cityfront Plaza, October 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,400 cpm to a maximum of 6,000 cpm unshielded.
465 E. Illinois St., February 2015, Lindsay Light Radiological Survey
Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the field instrument threshold previously stated and ranged from a minimum of 10,000 cpm to a maximum of 15,500 cpm unshielded.
465 N. Park Ave, October 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,100 cpm to a maximum of 16,600 cpm unshielded.
300 E Randolph, May 2011, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 4,900 cpm to a maximum of 9,300 cpm.
151 N. Field Blvd., August 2010, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,400 cpm to a maximum of 3,000 cpm.
400 E. Monroe, January 2011, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 4,867 cpm to a maximum of 7,351 cpm.
165 E Ontario, October 2010, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,150 cpm to a maximum of 6,270 cpm.
659-663 N Michigan, March 2011, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold value previously stated and ranged from a minimum of 4,180 cpm to a maximum of 5,545 cpm.
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; Bertuccio, G.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Doroshenko, J.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foster, J.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Gobbi, B.; Grim, G. P.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Kass, R.; Koeth, T.; Krammer, M.; Lander, R.; Logiudice, A.; Lu, R.; mac Lynne, L.; Manfredotti, C.; Meier, D.; Mishina, M.; Moroni, L.; Oh, A.; Pan, L. S.; Pernicka, M.; Perera, L.; Pirollo, S.; Plano, R.; Procario, M.; Riester, J. L.; Roe, S.; Rott, C.; Rousseau, L.; Rudge, A.; Russ, J.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trischuk, W.; Tromson, D.; Vittone, E.; Wedenig, R.; Weilhammer, P.; White, C.; Zeuner, W.; Zoeller, M.
2001-06-01
Diamond based pixel detectors are a promising radiation-hard technology for use at the LHC. We present first results on a CMS diamond pixel sensor. With a threshold setting of 2000 electrons, an average pixel efficiency of 78% was obtained for normally incident minimum ionizing particles.
633 N. Michigan Ave, May 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,400 cpm to a maximum of 5,900 cpm unshielded.
157 - 165 E. Ohio, May 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,000 cpm to a maximum of 5,900 cpm unshielded.
512 N McClurg, August 2010, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 6,500 cpm to a maximum of 9,500 cpm.
Hearing impairment related to age in Usher syndrome types 1B and 2A.
Wagenaar, M; van Aarem, A; Huygen, P; Pieke-Dahl, S; Kimberling, W; Cremers, C
1999-04-01
To evaluate hearing impairment in 2 common genetic subtypes of Usher syndrome, USH1B and USH2A. Cross-sectional analysis of hearing threshold related to age in patients with genotypes determined by linkage and mutation analysis. Otolaryngology department, university referral center. Nineteen patients with USH1B and 27 with USH2A were examined. All participants were living in the Netherlands and Belgium. Pure tone audiometry of the best ear at last visit. The patients with USH1B had residual hearing without age dependence, with minimum thresholds of 80, 95, and 120 dB at 0.25, 0.5, and 1 to 2 kHz, respectively. Mean thresholds of patients with USH2A were about 45 to 55 dB better than these minimum values. Distinctive audiographic features of patients with USH2A were maximum hearing thresholds of 70, 80, and 100 dB at 0.25, 0.5, and 1 kHz, respectively, only in those younger than 40 years. Progression of hearing impairment in USH2A was 0.7 dB/y on average for 0.25 to 4 kHz and could not be explained by presbyacusis alone. USH1B and USH2A can be easily distinguished by low-frequency hearing impairment in patients younger than 40 years. Hearing impairment in our patients with USH2A could be characterized as progressive.
Comparison of salt taste thresholds and salt usage behaviours between adults in Myanmar and Korea.
Cho, Hyungjin; Kim, So Mi; Jeong, Seong Su; Kim, Soon Bae
2016-12-01
Excessive oral salt intake can induce hypertension. According to previous studies, the prevalence of hypertension is higher in Myanmar than in Korea. We postulated that Myanmar adults had higher salt taste thresholds and ate saltier food. This study aimed to compare salt taste thresholds and salt usage behaviour scores between adults in Myanmar and Korea. This cross-sectional study enrolled patients who visited volunteer medical service clinics at Ansung in Korea and Hlegu and Bago in Myanmar in August 2014. We measured the vital signs, heights, and weights of each patient and evaluated detection thresholds, recognition thresholds, and salt preferences. All patients underwent urinalysis and spot urine sodium tests. Additionally, they each completed a salt usage behaviour questionnaire. A total of 131 patients were enrolled, including 64 Myanmarese patients and 67 Korean patients. Blood pressure was significantly higher in the Myanmarese than in the Koreans. Detection and recognition thresholds, salt preferences, spot urine sodium, and salt usage behaviour scores were also higher in the Myanmarese than in the Korean subjects. We calculated correlation coefficients between systolic blood pressure and parameters related to salt intake; the detection and recognition thresholds were significantly correlated with systolic blood pressure. All parameters related to salt intake, including detection and recognition thresholds, salt preference, salt usage behaviour scores and spot urine sodium concentrations, were significantly higher in Myanmarese than in Korean individuals.
The effects of visual scenes on roll and pitch thresholds in pilots versus nonpilots.
Otakeno, Shinji; Matthews, Roger S J; Folio, Les; Previc, Fred H; Lessard, Charles S
2002-02-01
Previous studies have indicated that, compared with nonpilots, pilots rely more on vision than "seat-of-the-pants" sensations when presented with visual-vestibular conflict. The objective of this study was to evaluate whether pilots and nonpilots differ in their thresholds for tilt perception while viewing visual scenes depicting simulated flight. This study was conducted in the Advanced Spatial Disorientation Demonstrator (ASDD) at Brooks AFB, TX. There were 14 subjects (7 pilots and 7 nonpilots) who recorded tilt detection thresholds in pitch and roll while exposed to sub-threshold movement in each axis. During each test run, subjects were presented with computer-generated visual scenes depicting accelerating forward flight by day or night, and a blank (control) condition. The only significant effect detected by an analysis of variance (ANOVA) was that all subjects were more sensitive to tilt in roll than in pitch [F (2,24) = 18.96, p < 0.001]. Overall, pilots had marginally higher tilt detection thresholds compared with nonpilots (p = 0.055), but the type of visual scene had no significant effect on thresholds. In this study, pilots did not demonstrate greater visual dominance over vestibular and proprioceptive cues than nonpilots, but appeared to have higher pitch and roll thresholds overall. The finding of significantly lower detection thresholds in the roll axis vs. the pitch axis was an incidental finding for both subject groups.
The dynamics of learning about a climate threshold
NASA Astrophysics Data System (ADS)
Keller, Klaus; McInerney, David
2008-02-01
Anthropogenic greenhouse gas emissions may trigger threshold responses of the climate system. One relevant example of such a potential threshold response is a shutdown of the North Atlantic meridional overturning circulation (MOC). Numerous studies have analyzed the problem of early MOC change detection (i.e., detection before the forcing has committed the system to a threshold response). Here we analyze the early MOC prediction problem. To this end, we virtually deploy an MOC observation system into a simple model that mimics potential future MOC responses and analyze the timing of confident detection and prediction. Our analysis suggests that a confident prediction of a potential threshold response can require century time scales, considerably longer than the time required for confident detection. The signal enabling early prediction of an approaching MOC threshold in our model study is associated with the rate at which the MOC intensity decreases for a given forcing. A faster MOC weakening implies a higher MOC sensitivity to forcing. An MOC sensitivity exceeding a critical level results in a threshold response. Determining whether an observed MOC trend in our model differs in a statistically significant way from an unforced scenario (the detection problem) imposes lower requirements on an observation system than the determination whether the MOC will shut down in the future (the prediction problem). As a result, the virtual observation systems designed in our model for early detection of MOC changes might well fail at the task of early and confident prediction. Transferring this conclusion to the real world requires a considerably refined MOC model, as well as a more complete consideration of relevant observational constraints.
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
Tan, Junjie; Kan, Naipeng; Wang, Wei; Ling, Jingyi; Qu, Guolong; Jin, Jing; Shao, Yu; Liu, Gang; Chen, Huipeng
2015-06-01
Detection of 2,4,6-trinitrotoluene (TNT) has been extensively studied because it is a common explosive filling for landmines, posing significant threats to the environment and human safety. Rapid advances in synthetic biology offer new ways to detect such toxic and hazardous compounds more sensitively and safely. Biosensor construction requires sensing elements able to detect TNT; because TNT can induce physiological responses in E. coli, such sensing elements may be defined from E. coli itself. An E. coli MG1655 genomic promoter library containing nearly 5,400 elements was constructed. Five elements, yadG, yqgC, aspC, recE, and topA, displayed high sensing specificity to TNT and its indicator compounds 1,3-DNB and 2,4-DNT. Based on this, a whole-cell biosensor was constructed in E. coli, in which green fluorescent protein was positioned downstream of the five sensing elements via genetic fusion. The threshold value, detection time, EC200 value, and other characteristics of the five sensing elements were determined, and the minimum responding concentration to TNT was 4.75 mg/L. The five sensing elements enrich the reservoir of TNT-sensing elements and provide a more applicable toolkit for use in the genetic routes and live systems of future biosensors.
Detection of dilute sperm samples using photoacoustic flowmetry
NASA Astrophysics Data System (ADS)
Viator, J. A.; Sutovsky, P.; Weight, R. M.
2008-02-01
Detection of sperm cells in dilute samples may have application in forensic testing and diagnosis of male reproductive health. Due to the optically dense subcellular structures in sperm cells, irradiation by nanosecond laser pulses induces a photoacoustic response detectable using a custom flow cytometer. We determined the detection threshold of bull sperm using various concentrations, from 200 to 1,000,000 sperm cells per milliliter. Using a tunable laser system set to 450 nm with a 5 ns pulse duration and 11-12 mJ/pulse, we obtained a detection threshold of 3 sperm cells. The flow rate was 4 ml/minute through the flow chamber. The acoustic sensor was a 100 μm PVDF film attached to the glass flow chamber. The acoustic signal was preamplified and sent to an oscilloscope. The threshold signal indicated a signal to noise ratio of approximately 6 to 1. Improved system design may decrease the threshold to single sperm cells.
Boyd, Paul J
2006-12-01
The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("Programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation" which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.
An integrative perspective of the anaerobic threshold.
Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo
2017-12-14
The concept of the anaerobic threshold (AT) was introduced during the 1960s. Since then, several methods to identify the AT have been studied and proposed as novel 'thresholds' named after the variable used for their detection (i.e., lactate threshold, ventilatory threshold, glucose threshold). These different techniques have created some confusion about how this parameter should be named, for instance, as the anaerobic threshold or after the physiological measure used for its detection (i.e., lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect the AT, as well as the body of literature formed over the past decades, could provide a more cohesive understanding of the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine the AT. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Goeritno, Arief; Rasiman, Syofyan
2017-06-01
A performance examination of the bulk oil circuit breaker (BOCB) at the Bogor Baru substation of the State Electricity Company (PLN), as influenced by its parameters, has been carried out. It was found that (1) the dielectric strength of the oil still qualifies it as an insulating and cooling medium, because the average measured value remains above the allowed minimum limit of 80 kV/2.5 cm (32 kV/cm); (2) the simultaneity of the breaker contacts is still acceptable, so the BOCB can remain in operation, because the time difference between the earliest and latest contacts on opening/closing is less than 10 milliseconds (Δt < 10 ms, meeting the PLN standard as recommended by Alsthom); and (3) the resistance parameters meet the standards, where (i) the insulation resistance is far above the allowed threshold, the minimum standard being 2,000 MΩ (whether the ANSI or the PLN standard is applied), (ii) the contact resistance is well within the allowed threshold, the maximum standard being 350 µΩ (ANSI) or 200 µΩ (PLN), and (iii) the grounding resistance is equal to the specified maximum limit of 0.5 Ω (PLN standard).
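The pass/fail logic summarized above reduces to comparing measurements against fixed limits. The helper below encodes those limits (80 kV/2.5 cm dielectric strength, Δt < 10 ms contact simultaneity, insulation resistance above 2,000 MΩ, contact resistance below 350 µΩ ANSI or 200 µΩ PLN, grounding resistance at most 0.5 Ω); the function name and argument units are illustrative, not part of the cited standards documents.

```python
def bocb_checks(dielectric_kv_per_gap, contact_dt_ms, insulation_mohm,
                contact_uohm, grounding_ohm, standard="PLN"):
    """Compare measured BOCB parameters against the quoted limits.
    Returns a dict of booleans; True means the parameter passes."""
    contact_limit_uohm = 200.0 if standard == "PLN" else 350.0
    return {
        "dielectric_strength": dielectric_kv_per_gap >= 80.0,     # kV per 2.5 cm gap
        "contact_simultaneity": contact_dt_ms < 10.0,             # ms
        "insulation_resistance": insulation_mohm > 2000.0,        # megohms
        "contact_resistance": contact_uohm < contact_limit_uohm,  # micro-ohms
        "grounding_resistance": grounding_ohm <= 0.5,             # ohms
    }

# Example with made-up measurements:
print(bocb_checks(85.0, 6.0, 3500.0, 150.0, 0.5))
```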
Reeves, Aaron; McKee, Martin; Mackenbach, Johan; Whitehead, Margaret; Stuckler, David
2017-05-01
Does increasing incomes improve health? In 1999, the UK government implemented minimum wage legislation, increasing hourly wages to at least £3.60. This policy experiment created intervention and control groups that can be used to assess the effects of increasing wages on health. Longitudinal data were taken from the British Household Panel Survey. We compared the health effects of higher wages on recipients of the minimum wage with otherwise similar persons who were likely unaffected because (1) their wages were between 100 and 110% of the eligibility threshold or (2) their firms did not increase wages to meet the threshold. We assessed the probability of mental ill health using the 12-item General Health Questionnaire. We also assessed changes in smoking, blood pressure, as well as hearing ability (control condition). The intervention group, whose wages rose above the minimum wage, experienced lower probability of mental ill health compared with both control group 1 and control group 2. This improvement represents 0.37 of a standard deviation, comparable with the effect of antidepressants (0.39 of a standard deviation) on depressive symptoms. The intervention group experienced no change in blood pressure, hearing ability, or smoking. Increasing wages significantly improves mental health by reducing financial strain in low-wage workers. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent
2005-01-01
Background The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, by using a lognormal distribution of the survival times of those who died of their cancer. We introduced the term threshold year: the follow-up time that covers most of the survival data of patients dying from the specific cancer, leaving less than 2.25% uncovered, which is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were used to test whether the survival times of cancer patients who died of their disease followed a lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of the lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of patients who died of their disease from 42 of the 49 cancer sites were verified to follow different lognormal distributions. The threshold years validated for statistical cure varied across cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites matched the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion The present study suggests that a certain threshold year must pass before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, which highlights the need for long-term follow-up of these patients. PMID:15904508
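The threshold-year construction can be sketched with scipy's lognormal distribution: fit the survival times of patients who died of their cancer, then take the time below which all but 2.25% of the fitted distribution falls (roughly two log-standard deviations above the log mean). The survival times below are synthetic and the fitting details are illustrative, not the study's minimum chi-square procedure.

```python
import numpy as np
from scipy import stats

# Synthetic survival times (years) of patients who died of their cancer.
rng = np.random.default_rng(1)
death_times = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# Fit a lognormal distribution with the location fixed at zero.
shape, loc, scale = stats.lognorm.fit(death_times, floc=0)

# Threshold year: the 97.75th percentile of the fitted death-time distribution,
# i.e. the follow-up time leaving less than 2.25% of deaths uncovered.
threshold_year = stats.lognorm.ppf(0.9775, shape, loc=loc, scale=scale)
print(round(float(threshold_year), 1))
```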
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed, and it has been widely applied in computer vision systems. Two thresholds have to be set before edges are segregated from the background. Usually, two static values are chosen as the thresholds based on developer experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
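The cross-zero-point interpolation itself is specific to the paper, but the general idea of deriving both Canny thresholds automatically from image statistics can be sketched with the widely used median heuristic below; this is a common alternative, not the authors' method, and the sigma value is illustrative.

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    """Canny edge detection with both thresholds derived from the median intensity.
    This is the common median heuristic, not the cross-zero-point method of the paper."""
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)

# Usage (file name is illustrative):
# edges = auto_canny(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```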
NASA Astrophysics Data System (ADS)
Hunt, M. J.; Nuttle, W. K.; Cosby, B. J.; Marshall, F. E.
2005-05-01
Establishing minimum flow requirements in aquatic ecosystems is one way to stipulate controls on water withdrawals in a watershed. The basis of the determination is to identify the amount of flow needed to sustain a threshold ecological function. To develop minimum flow criteria, an understanding of ecological response in relation to flow is essential. Several steps are needed, including: (1) identification of important resources and ecological functions, (2) compilation of available information, (3) determination of historical conditions, (4) establishment of technical relationships between inflow and resources, and (5) identification of numeric criteria that reflect the threshold at which resources are harmed. The process is interdisciplinary, requiring the integration of hydrologic and ecologic principles with quantitative assessments. The tools used quantify the ecological response, and key questions about how the quantity of flow influences the ecosystem are examined by comparing minimum flow determinations in two different aquatic systems in South Florida, each characterized by substantial hydrologic alteration. The first, the Caloosahatchee River, is a riverine system located on the southwest coast of Florida. The second, the Everglades–Florida Bay ecotone, is a wetland mangrove ecosystem located on the southern tip of the Florida peninsula. In both cases, freshwater submerged aquatic vegetation (Vallisneria americana or Ruppia maritima), located in areas of the saltwater–freshwater interface, has been identified as a basis for minimum flow criteria. Integration of field studies, laboratory studies, and literature review was required. From this information we developed ecological modeling tools to quantify and predict plant growth in response to varying environmental conditions; coupled with hydrologic modeling tools, these address questions relating to the quantity and timing of flow and the ecological consequences relative to normal variability.
Effect of Endurance Training on The Lactate and Glucose Minimum Intensities
Junior, Pedro B.; de Andrade, Vitor L.; Campos, Eduardo Z.; Kalva-Filho, Carlos A.; Zagatto, Alessandro M.; de Araújo, Gustavo G.; Papoti, Marcelo
2018-01-01
Due to the controversy about the sensitivity of the lactate minimum intensity (LMI) to training and the need to develop other tools for aerobic fitness evaluation, the purpose of this study was to analyze the sensitivity of the glucose minimum intensity (GMI) and the LMI to endurance training. Eight trained male cyclists (21.4 ± 1.9 years, 67.6 ± 7.5 kg and 1.72 ± 0.10 m) were evaluated twice, before and after 12 weeks of training. GMI and LMI were calculated, respectively, as the intensities at the lowest blood glucose and lactate values attained during an incremental test performed after hyperlactatemia induction, and VO2max was determined during a standard incremental effort. The training was prescribed in three different zones and controlled by heart rate (HR). The training distribution was equivalent to 59.7%, 25.0% and 15.3% below, at and above the anaerobic threshold HR, respectively. The anaerobic threshold evaluated by GMI and LMI improved by 9.89 ± 4.35% and 10.28 ± 9.89%, respectively, after training, whereas VO2max improved by 2.52 ± 1.81%. No differences were found between GMI and LMI in the pre- (218.2 ± 22.1 vs 215.0 ± 18.6 W) and post-training (240.6 ± 22.9 vs 237.5 ± 18.8 W) conditions. LMI and GMI were sensitive to 12 weeks of aerobic training in cyclists; thus, both protocols can be used to assess aerobic adaptation, for athlete diagnostics and to prescribe training. Key points: The lactate and glucose minimum intensities can be used for monitoring training effects in cyclists. Although both GMI and LMI are important indices of aerobic fitness, they cannot be used to determine aerobic fitness. The polarized training was effective for improving maximal oxygen uptake in trained cyclists. PMID:29535585
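The lactate or glucose minimum intensity is commonly located as the minimum of a curve fitted to concentration versus power output during the incremental test; a simple way to do this, sketched below with illustrative data, is a second-order polynomial fit whose vertex gives the minimum intensity. This is a generic approach, not necessarily the exact procedure used in the study.

```python
import numpy as np

def minimum_intensity(power_w, concentration):
    """Intensity (W) at the minimum of a 2nd-order polynomial fitted to blood
    lactate or glucose concentration versus power output."""
    a, b, _ = np.polyfit(power_w, concentration, 2)
    return -b / (2.0 * a)                      # vertex of the fitted parabola

# Illustrative incremental-test data collected after hyperlactatemia induction.
power = np.array([150.0, 175.0, 200.0, 225.0, 250.0, 275.0])   # W
lactate = np.array([6.1, 5.2, 4.7, 4.6, 5.0, 6.0])             # mmol/L
print(round(minimum_intensity(power, lactate), 1))             # approximate LMI in watts
```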
Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan
2016-01-01
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, on the premise that the zero velocity interval (ZVI) is detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the thresholds are thereby adapted to the gait frequency, improving ZVI detection precision. To put it into practice, a ZVI detection experiment was carried out; the results show that, compared with the traditional fixed-threshold ZVI detection method, the adaptive ZVI detection algorithm effectively reduces the false and missed detection rates of ZVIs, indicating that the algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds were carried out to evaluate the influence of the algorithm on positioning precision. The results show that using the ZVIs detected by the adaptive algorithm for pedestrian trajectory calculation achieves better positioning performance. PMID:27669266
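A much simplified version of the idea, not the SPWVD-RMFI algorithm itself, is sketched below: estimate the dominant gait frequency from the accelerometer magnitude spectrum and scale the zero-velocity variance threshold with that frequency before flagging low-variance windows as ZVIs. The window length, base threshold and scaling constant are illustrative.

```python
import numpy as np

def detect_zvi(acc, fs, win_s=0.1, base_thresh=0.05, k=0.02):
    """Flag zero-velocity intervals in an N x 3 accelerometer record `acc` (units of g).
    The variance threshold is adapted to the dominant gait frequency; the linear
    threshold-frequency relation is illustrative, not the SPWVD-RMFI one."""
    mag = np.linalg.norm(acc, axis=1) - 1.0             # remove gravity (approximately)
    spec = np.abs(np.fft.rfft(mag))                     # magnitude spectrum
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    gait_f = freqs[1:][np.argmax(spec[1:])]             # dominant gait frequency (skip DC)
    thresh = base_thresh + k * gait_f                   # faster gait -> higher threshold
    n = max(1, int(win_s * fs))
    zvi = np.zeros(mag.size, dtype=bool)
    for i in range(mag.size - n):                       # sliding-window variance test
        if np.var(mag[i:i + n]) < thresh:
            zvi[i:i + n] = True
    return zvi
```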
Flight Test Overview for UAS Integration in the NAS Project
NASA Technical Reports Server (NTRS)
Murphy, James R.; Hayes, Peggy S.; Kim, Sam K.; Bridges, Wayne; Marston, Michael
2016-01-01
The National Aeronautics and Space Administration is conducting a series of flight tests intended to support the reduction of barriers that prevent unmanned aircraft from flying without the required waivers from the Federal Aviation Administration. The most recent testing supported two separate test configurations. The first investigated the timing of Detect and Avoid (DAA) alerting thresholds using a radar-equipped unmanned vehicle and multiple live intruders flown at varying encounter geometries. The second configuration included a surrogate unmanned vehicle (flown from a ground control station, with a safety pilot on board) flying a mission in a virtual air traffic control airspace sector using research pilot displays and DAA advisories to maintain separation from live and virtual aircraft. The test was conducted over a seven-week span in the summer of 2015. The data from over 100 encounter sorties will be used to inform the RTCA Phase 1 Detect and Avoid and Command and Control Minimum Operating Performance Standards (MOPS) intended to be completed by the summer of 2016. Follow-on flight-testing is planned for the spring of 2016 to capture remaining encounters and support validation of the MOPS.
Sun glint requirement for the remote detection of surface oil films
NASA Astrophysics Data System (ADS)
Sun, Shaojie; Hu, Chuanmin
2016-01-01
Natural oil slicks in the western Gulf of Mexico are used to determine the sun glint threshold required for optical remote sensing of oil films. The threshold is determined using the same-day image pairs collected by Moderate Resolution Imaging Spectroradiometer (MODIS) Terra (MODIST), MODIS Aqua (MODISA), and Visible Infrared Imaging Radiometer Suite (VIIRS) (N = 2297 images) over the same oil slick locations where at least one of the sensors captures the oil slicks. For each sensor, statistics of sun glint strengths, represented by the normalized glint reflectance (LGN, sr⁻¹), when oil slicks can and cannot be observed are generated. The LGN threshold for oil film detections is determined to be 10⁻⁵–10⁻⁶ sr⁻¹ for MODIST and MODISA, and 10⁻⁶–10⁻⁷ sr⁻¹ for VIIRS. Below these thresholds, no oil films can be detected, while above these thresholds, oil films can always be detected except near the critical-angle zone where oil slicks reverse their contrast against the background water.
Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas
Asquith, William H.; Roussel, Meghan C.
2003-01-01
A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of this substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
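Computing a mean interoccurrence interval for a given depth threshold is straightforward; under the Poisson-process model the intervals are approximately exponentially distributed with this mean. The sketch below uses a synthetic daily record, so the numbers are illustrative only.

```python
import numpy as np

def mean_interoccurrence(daily_precip_in, threshold_in):
    """Mean interval (days) between successive days with precipitation >= threshold.
    Under the Poisson-process model these intervals are roughly exponential."""
    event_days = np.flatnonzero(np.asarray(daily_precip_in) >= threshold_in)
    if event_days.size < 2:
        return np.inf
    return float(np.mean(np.diff(event_days)))

# Synthetic 10-year daily record (inches); exceedances of 1.0 inch occur every ~12 days.
rng = np.random.default_rng(3)
precip = rng.exponential(scale=0.4, size=3650)
print(mean_interoccurrence(precip, 1.0))
```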
How insurance affects altruistic provision in threshold public goods games.
Zhang, Jianlei; Zhang, Chunyan; Cao, Ming
2015-03-13
The occurrence and maintenance of cooperative behaviors in public goods systems have attracted great research attention across multiple disciplines. A threshold public goods game requires a minimum amount of contributions to be collected from a group of individuals for provision to occur. Here we extend the common binary-strategy combination of cooperation and defection by adding a third strategy, called insured cooperation, which corresponds to buying an insurance policy covering the potential loss resulting from an unsuccessful public goods game. In particular, only contributing agents can opt to be insured, which reduces the size of the potential loss. Theoretical computations suggest that when agents face the potential aggregate risk in threshold public goods games, more contributions occur with increasing compensation from insurance. Moreover, permitting the adoption of insurance significantly enhances individual contributions and facilitates provision, especially when the required threshold is high. This work also relates the strategy competition outcomes to different allocation rules once the resulting contributions exceed the threshold point in populations nested within a dilemma.
A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.
Guédra, Matthieu; Cornu, Corentin; Inserra, Claude
2017-09-01
The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ~2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ~2050, and in the tropics and high latitudes deep into the second half of the 21st century.
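The recovery definitions above rest on regressing out natural cycles and then testing the residual trend. A minimal sketch of that two-step procedure is given below, with hypothetical solar and ENSO proxy regressors standing in for the study's multiple linear regression terms.

```python
import numpy as np
from scipy import stats

def residual_trend(years, column_ozone_du, solar_proxy, enso_proxy):
    """Regress out natural-cycle proxies by ordinary least squares, then test
    whether the trend in the ozone residuals differs significantly from zero."""
    X = np.column_stack([np.ones_like(years, dtype=float), solar_proxy, enso_proxy])
    beta, _, _, _ = np.linalg.lstsq(X, column_ozone_du, rcond=None)
    residuals = column_ozone_du - X @ beta
    slope, _, _, p_value, _ = stats.linregress(years, residuals)
    return slope, p_value        # Dobson units per year and its significance
```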
501-59 E Illinois Street, January 2018, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 1,000 cpm to a maximum of 4,000 cpm shielded.
200 E Illinois St, December 2011, Lindsay Light Radiological Survey
Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold value previously stated and ranged from a minimum of 7,100 cpm to a maximum of 9,400 cpm unshielded.
158 East Ontario, July 23, 2013, Lindsay Light Radiological Survey
Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 6,700 cpm to a maximum of 8,700 cpm unshielded.
639-640 E. Ohio St, August 2014, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and of the spoil materials generated during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,100 cpm to a maximum of 6,700 cpm unshielded.
600-602 N Lakeshore Drive, March 2011, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 4,382 cpm to a maximum of 7,907 cpm.
500-03 N. Peshtigo, October 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and of the spoil during the excavation process did not exceed the instrument threshold previously stated and predominantly ranged from a minimum of 5,600 cpm to a maximum of 11,000 cpm unshielded.
301-40 N. Field, October 2016, Lindsay Light Radiological Survey
The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and predominantly ranged from a minimum of 3,000 cpm to a maximum of 10,000 cpm unshielded.
350 E Ohio St, May 2013, Lindsay Light Radiological Survey
Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 5,700 cpm to a maximum of 12,200 cpm unshielded.
405 E Illinois St, May 2014, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the field instrument threshold previously stated and ranged from a minimum of 6,000 cpm to a maximum of 8,050 cpm unshielded.
200-210 E. Ohio St., May 2014, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the field instrument threshold previously stated and ranged from a minimum of 6,000 cpm to a maximum of 7,500 cpm unshielded.
201 E South Water, December 2010, Lindsay Light Radiological Survey
The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 7,234 cpm to a maximum of 9,872 cpm.
237 E. Ontario St., January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The measurements within the excavations and of the soil did not exceed the USEPA instrument threshold and ranged from a minimum of 4,800 cpm to a maximum of 8,300 cpm unshielded.
NASA Astrophysics Data System (ADS)
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection that addresses the over-segmentation problem of local threshold segmentation methods by combining visual saliency with local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted to precisely locate and segment the wood surface defects around the coarsely located areas. Lastly, mathematical morphology is used to process the binary images after segmentation, which removes noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and threshold segmentation.
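A compact sketch of the saliency-plus-Otsu idea is given below: a spectral residual saliency map (after Hou and Zhang) coarsely locates candidate regions, Otsu thresholding refines the segmentation, and a morphological opening removes small false objects. It assumes OpenCV, an 8-bit grayscale input, and defects darker than the surrounding wood; it is not the authors' full pipeline.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral residual saliency map (after Hou & Zhang) for an 8-bit grayscale image."""
    small = cv2.resize(gray, (128, 128)).astype(np.float32)
    f = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    sal = cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.resize(sal, (gray.shape[1], gray.shape[0]))

def segment_defects(gray):
    """Coarse localization by saliency, Otsu thresholding, then morphological cleanup.
    Assumes defects appear darker than the surrounding wood."""
    sal = spectral_residual_saliency(gray)
    _, coarse = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, dark = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(coarse, dark)
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small false objects
```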
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to changes in illumination throughout the day, which leads to greater vehicle detection performance than a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), to reduce false vehicle-tracking estimates caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm, adopting fuzzy rule-based logic to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.
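The GATE estimator itself is not reproduced here, but the general idea of tying the Sobel edge threshold to the current frame's gradient statistics, so it tracks illumination changes, can be sketched as below; the mean-plus-k-sigma rule and the constant k are illustrative stand-ins.

```python
import cv2
import numpy as np

def adaptive_sobel_edges(frame_gray, k=1.5):
    """Binary edge map with a threshold tied to the frame's gradient statistics,
    so it follows illumination changes. The mean-plus-k-sigma rule stands in
    for the paper's GATE estimator."""
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    thresh = float(mag.mean() + k * mag.std())   # rises on bright, high-contrast frames
    return (mag > thresh).astype(np.uint8) * 255
```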
Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.
Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S
2016-05-01
The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 for the multifidus and 67 ± 116 for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses of spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
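A minimal double-threshold onset detector of the kind compared in the study is sketched below: the rectified, baseline-referenced EMG must exceed an amplitude threshold (here 8 SD above baseline) for a minimum duration (here 10 ms), the combination the abstract reports as most reliable. The implementation details are illustrative.

```python
import numpy as np

def double_threshold_onset(emg, fs, baseline_samples, n_sd=8.0, min_dur_ms=10.0):
    """Index of muscle activity onset, or None if no onset is detected.
    `emg` is a 1-D array; the first `baseline_samples` points are quiet baseline."""
    emg = np.asarray(emg, dtype=float)
    rectified = np.abs(emg - np.mean(emg[:baseline_samples]))
    amp_thresh = n_sd * np.std(rectified[:baseline_samples])      # amplitude threshold
    min_len = max(1, int(round(min_dur_ms * 1e-3 * fs)))          # duration threshold
    run = 0
    for i, above in enumerate(rectified > amp_thresh):
        run = run + 1 if above else 0
        if run >= min_len:
            return i - min_len + 1          # first sample of the qualifying run
    return None
```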
Pavlaković, G; Züchner, K; Zapf, A; Bachmann, C G; Graf, B M; Crozier, T A; Pavlaković, H
2009-08-01
Various factors can influence thermal perception threshold measurements and contribute significantly to unwanted variability of the tests. To minimize this variability, testing should be performed under strictly controlled conditions. Identifying the factors that increase the variability and eliminating their influence should increase reliability and reproducibility. Currently available thermotesting devices use a water-cooling system that generates a continuous noise of approximately 60 dB. In order to analyze whether this noise could influence the thermal threshold measurements we compared the thresholds obtained with a silent thermotesting device to those obtained with a commercially available device. The subjects were tested with one randomly chosen device on 1 day and with the other device 7 days later. At each session, heat, heat pain, cold, and cold pain thresholds were determined with three measurements. Bland-Altman analysis was used to assess agreement in measurements obtained with different devices and it was shown that the intersubject variability of the thresholds obtained with the two devices was comparable for all four thresholds tested. In contrast, the intrasubject variability of the thresholds for heat, heat pain, and cold pain detection was significantly lower with the silent device. Our results show that thermal sensory thresholds measured with the two devices are comparable. However, our data suggest that, for studies with repeated measurements on the same subjects, a silent thermotesting device may allow detection of smaller differences in the treatment effects and/or may permit the use of a smaller number of tested subjects. Muscle Nerve 40: 257-263, 2009.
Low authority-threshold control for large flexible structures
NASA Technical Reports Server (NTRS)
Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.
1988-01-01
An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.
Analysis of the instability underlying electrostatic suppression of the Leidenfrost state
NASA Astrophysics Data System (ADS)
Shahriari, Arjang; Das, Soumik; Bahadur, Vaibhav; Bonnecaze, Roger T.
2017-03-01
A liquid droplet on a hot solid can generate enough vapor to prevent its contact with the surface and reduce the rate of heat transfer, the so-called Leidenfrost effect. We show theoretically and experimentally that for a sufficiently high electrostatic potential on the droplet, the formation of the vapor layer is suppressed. The interplay of the destabilizing electrostatic force with the stabilizing capillary force and evaporation determines the minimum, or threshold, voltage needed to suppress the Leidenfrost effect. Linear stability theory accurately predicts threshold voltages for droplets of different sizes and at varying temperatures.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
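The abstract does not give enough detail to reproduce the empirical-gradient threshold optimization itself, so the sketch below shows only the conventional NEO detector with the usual scaled-mean threshold that the paper treats as its baseline. The scale factor c = 8, the Bartlett smoothing window, and the refractory period are assumptions for illustration.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, fs, c=8.0, refractory_ms=1.0):
    """Threshold the smoothed NEO output at c * mean(psi), the common
    heuristic; returns candidate spike sample indices and the threshold."""
    psi = neo(np.asarray(x, dtype=float))
    win = np.bartlett(6)                       # short smoothing window (assumed)
    psi = np.convolve(psi, win / win.sum(), mode="same")
    threshold = c * psi.mean()
    refractory = max(1, int(round(refractory_ms * 1e-3 * fs)))
    spikes, last = [], -refractory
    for n in np.flatnonzero(psi > threshold):
        if n - last >= refractory:             # enforce a simple dead time
            spikes.append(int(n))
            last = n
    return spikes, threshold
```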
Ye, Ying; Griffin, Michael J
2018-01-01
Thermotactile thresholds and vibrotactile thresholds are measured to assist the diagnosis of the sensorineural component of the hand-arm vibration syndrome (HAVS). This study investigates whether thermotactile and vibrotactile thresholds distinguish between fingers with and without numbness and tingling. In 60 males reporting symptoms of the hand-arm vibration syndrome, thermotactile thresholds for detecting hot and cold temperatures and vibrotactile thresholds at 31.5 and 125 Hz were measured on the index and little fingers of both hands. In fingers reported to suffer numbness or tingling, hot thresholds increased, cold thresholds decreased, and vibrotactile thresholds at both 31.5 and 125 Hz increased. With sensorineural symptoms on all three phalanges (i.e. numbness or tingling scores of 6), both thermotactile thresholds and both vibrotactile thresholds had sensitivities greater than 80% and specificities around 90%, with areas under the receiver operating characteristic curves around 0.9. There were correlations between all four thresholds, but cold thresholds had greater sensitivity and greater specificity on fingers with numbness or tingling on only the distal phalanx (i.e. numbness or tingling scores of 1) suggesting cold thresholds provide better indications of early sensorineural disorder. Thermotactile thresholds and vibrotactile thresholds can provide useful indications of sensorineural function in patients reporting symptoms of the sensorineural component of HAVS.
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Dolan, C.R.; Miranda, L.E.; Henry, T.B.
2002-01-01
Continuous direct current (DC) and pulsed DC (PDC) of varying frequency and pulse period are commonly used to immobilize and collect crappies Pomoxis spp. in freshwater. However, little information is available about the minimum electrical-setting thresholds required for immobilization or how the settings relate to incidence of injury. We investigated the effect of increasing power densities on the immobilization and injury of black crappies P. nigromaculatus (average total length = 154 mm) treated with DC and various PDC settings. Forced swimming toward the electrodes was observed in black crappies exposed to DC, but that was less apparent for PDC. The minimum peak power densities required to immobilize black crappies ranged from 0.10 to 6.5 mW/cm3 and depended on pulse frequency and period. The incidence of hemorrhaging ranged from 0% to 50% and that of spinal damage from 9% to 45%. However, the severity of injury also depended on pulse frequency and period. No fish suffered mortality at or below the immobilization thresholds, but mortality ranged from 0% to 15% at settings above the thresholds. Mortality was observed with PDC settings of 15 Hz only. Fish that were tetanized following electrical treatment were more prone to injury than those that exhibited narcosis.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied in the training of a radial basis function artificial neural network (RBF ANN) model and in the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 kinds of neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
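A minimal sketch of the entropy-guided strategy switch described above follows. How the population entropy is computed and how each strategy modifies the particle update are not specified in the abstract, so the fitness-histogram entropy and the inertia-weight adjustments below are placeholders, not the published scheme.

```python
import numpy as np

def population_entropy(fitness, bins=10):
    """Shannon entropy of the fitness distribution, normalised to [0, 1]."""
    hist, _ = np.histogram(fitness, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(bins)

def pso_step(pos, vel, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5,
             h_min=0.3, h_max=0.7):
    """One velocity/position update with an entropy-guided inertia weight.
    The switch mirrors the abstract; the concrete adjustments (w +/- 0.2)
    are illustrative stand-ins."""
    h = population_entropy(fitness)
    if h > h_max:        # diverse swarm -> convergence strategy
        w = max(0.4, w - 0.2)
    elif h < h_min:      # collapsed swarm -> divergence strategy
        w = min(0.9, w + 0.2)
    # otherwise: keep the self-adaptive (unchanged) setting
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```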
An extensive investigation of work function modulated trapezoidal recessed channel MOSFET
NASA Astrophysics Data System (ADS)
Lenka, Annada Shankar; Mishra, Sikha; Mishra, Satyaranjan; Bhanja, Urmila; Mishra, Guru Prasad
2017-11-01
The concepts of silicon-on-insulator (SOI) and a grooved gate help to lessen short-channel effects (SCEs). Work function modulation along the metal gate also gives a better drain current because of the more uniform electric field along the channel. These concepts are therefore combined in the proposed MOSFET structure for improved performance. In this work, a trapezoidal recessed channel silicon-on-insulator (TRC-SOI) MOSFET and a work function modulated trapezoidal recessed channel silicon-on-insulator (WFM-TRC-SOI) MOSFET are compared in terms of DC and RF parameters, and the linearity of both devices is then tested. An analytical model is formulated using the 2-D Poisson equation, and a compact expression for the threshold voltage is developed from the minimum surface potential. We analyze the effect of the negative junction depth and the corner angle on various device parameters such as the minimum surface potential, sub-threshold slope (SS), drain-induced barrier lowering (DIBL) and threshold voltage. The analysis shows that the switching performance of the WFM-TRC-SOI MOSFET surpasses that of the TRC-SOI MOSFET in terms of a high Ion/Ioff ratio, and the proposed structure can also minimize short-channel effects (SCEs) in RF applications. The validity of the proposed model has been verified against simulation results obtained with the Sentaurus TCAD device simulator.
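For reference, analytical models that extract the threshold voltage from the minimum surface potential customarily use the defining relation sketched below; this is only the conventional condition stated as an assumption, not the paper's full 2-D Poisson derivation.

```latex
% Threshold condition in minimum-surface-potential models (sketch):
% V_th is the gate voltage at which the minimum channel surface potential
% reaches twice the bulk Fermi potential.
\psi_{s,\min}\big(V_{gs}=V_{th}\big) = 2\varphi_F,
\qquad
\varphi_F = \frac{kT}{q}\,\ln\!\frac{N_A}{n_i}
```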
Musical duplex perception: perception of figurally good chords with subliminal distinguishing tones.
Hall, M D; Pastore, R E
1992-08-01
In a variant of duplex perception with speech, phoneme perception is maintained when distinguishing components are presented below the intensities required for separate detection, forming the basis for the claim that a phonetic module takes precedence over nonspeech processing. This finding is replicated with music chords (C major and minor) created by mixing a piano fifth with a sinusoidal distinguishing tone (E or E flat). Individual threshold intensities for detecting E or E flat in the context of the fixed piano tones were established, and chord discrimination thresholds defined by distinguishing-tone intensity were determined. Experiment 2 verified masked detection thresholds and subliminal chord identification for experienced musicians. Accurate chord perception was maintained at distinguishing-tone intensities nearly 20 dB below the threshold for separate detection. The speech and music findings are argued to demonstrate general perceptual principles.
The Neural Substrate for Binaural Masking Level Differences in the Auditory Cortex
Gilbert, Heather J.; Krumbholz, Katrin; Palmer, Alan R.
2015-01-01
The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. PMID:25568115
Haase, Anton; Soltwisch, Victor; Braun, Stefan; Laubis, Christian; Scholze, Frank
2017-06-26
We investigate the influence of the Mo-layer thickness on the EUV reflectance of Mo/Si mirrors with a set of unpolished and interface-polished Mo/Si/C multilayer mirrors. The Mo-layer thickness is varied in the range from 1.7 nm to 3.05 nm. We use a novel combination of specular and diffuse intensity measurements to determine the interface roughness throughout the multilayer stack and do not rely on scanning probe measurements at the surface only. The combination of EUV and X-ray reflectivity measurements and near-normal-incidence EUV diffuse scattering makes it possible to reconstruct the Mo layer thicknesses and to determine the interface roughness power spectral density. The data analysis is conducted by applying a matrix method for the specular reflection and the distorted-wave Born approximation for diffuse scattering. We introduce the Markov-chain Monte Carlo method into the field in order to determine the respective confidence intervals for all reconstructed parameters. We unambiguously detect a threshold thickness for Mo in both sample sets at which the specular reflectance goes through a local minimum correlated with a distinct increase in diffuse scatter. We attribute this to the known amorphous-to-crystalline transition that appears at a certain thickness threshold, which is altered in our sample system by the polishing.
Entanglement with negative Wigner function of three thousand atoms heralded by one photon
NASA Astrophysics Data System (ADS)
McConnell, Robert; Zhang, Hao; Hu, Jiazhong; Ćuk, Senka; Vuletić, Vladan
2016-06-01
Quantum-mechanically correlated (entangled) states of many particles are of interest in quantum information, quantum computing and quantum metrology. Metrologically useful entangled states of large atomic ensembles have been experimentally realized [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], but these states display Gaussian spin distribution functions with a non-negative Wigner function. Non-Gaussian entangled states have been produced in small ensembles of ions [11, 12], and very recently in large atomic ensembles [13, 14, 15]. Here, we generate entanglement in a large atomic ensemble via the interaction with a very weak laser pulse; remarkably, the detection of a single photon prepares several thousand atoms in an entangled state. We reconstruct a negative-valued Wigner function, an important hallmark of nonclassicality, and verify an entanglement depth (minimum number of mutually entangled atoms) of 2910 ± 190 out of 3100 atoms. Attaining such a negative Wigner function and the mutual entanglement of virtually all atoms is unprecedented for an ensemble containing more than a few particles. While the achieved purity of the state is slightly below the threshold for entanglement-induced metrological gain, further technical improvement should allow the generation of states that surpass this threshold, and of more complex Schrödinger cat states for quantum metrology and information processing.
Potential of solar-simulator-pumped alexandrite lasers
NASA Technical Reports Server (NTRS)
Deyoung, Russell J.
1990-01-01
An attempt was made to pump an alexandrite laser rod using a Tamarak solar simulator and also a tungsten-halogen lamp. A low-loss optical laser cavity was used to minimize the threshold pumping-power requirement. Lasing was not achieved. The laser threshold optical-power requirement was calculated to be approximately 626 W/sq cm for a gain length of 7.6 cm, whereas the Tamarak simulator produces 1150 W/sq cm over a gain length of 3.3 cm, which is less than the 1442 W/sq cm required to reach laser threshold over that length. The rod was optically pulsed with 200 msec pulses, which allowed the alexandrite rod to operate at near room temperature. The optical intensity-gain-length product needed to achieve laser threshold should be approximately 35,244 solar constants-cm. In the present setup, this product was 28,111 solar constants-cm.
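As a consistency check of the quoted figures, converting the intensity-gain-length products with an assumed solar constant of about 0.135 W/sq cm (the value implied by the numbers in the abstract) gives:

```latex
626\,\frac{\mathrm{W}}{\mathrm{cm^2}}\times 7.6\,\mathrm{cm}\approx 4.76\times10^{3}\,\frac{\mathrm{W}}{\mathrm{cm}}
\approx\frac{4.76\times10^{3}}{0.135}\ \text{solar constants}\cdot\mathrm{cm}\approx 3.5\times10^{4}\ \text{solar constants}\cdot\mathrm{cm},
\qquad
1150\times 3.3\approx 3.80\times10^{3}\,\frac{\mathrm{W}}{\mathrm{cm}}\approx 2.8\times10^{4}\ \text{solar constants}\cdot\mathrm{cm},
\qquad
\frac{626\times 7.6}{3.3}\approx 1442\ \frac{\mathrm{W}}{\mathrm{cm^2}},
```

consistent with the 35,244 and 28,111 solar constants-cm products and the 1442 W/sq cm figure quoted above.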
Stress lowers the detection threshold for foul-smelling 2-mercaptoethanol.
Pacharra, Marlene; Schäper, Michael; Kleinbeck, Stefan; Blaszkewicz, Meinolf; Wolf, Oliver T; van Thriel, Christoph
2016-01-01
Previous studies have reported enhanced vigilance for threat-related information in response to acute stress. While it is known that acute stress modulates sensory systems in humans, its impact on olfaction and the olfactory detection of potential threats is less clear. Two psychophysical experiments examined, if acute stress lowers the detection threshold for foul-smelling 2-mercaptoethanol. Participants in Experiment 1 (N = 30) and Experiment 2 (N = 32) were randomly allocated to a control group or a stress group. Participants in the stress group underwent a purely psychosocial stressor (public mental arithmetic) in Experiment 1 and a stressor that combined a physically demanding task with social-evaluative threat in Experiment 2 (socially evaluated cold-pressor test). In both experiments, olfactory detection thresholds were repeatedly assessed by means of dynamic dilution olfactometry. Each threshold measurement consisted of three trials conducted using an ascending method of limits. Participants in the stress groups showed the expected changes in heart rate, salivary cortisol, and mood measures in response to stress. About 20 min after the stressor, participants in the stress groups could detect 2-mercaptoethanol at a lower concentration than participants in the corresponding control groups. Our results show that acute stress lowers the detection threshold for a malodor.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
Terhune, Devin B; Murray, Elizabeth; Near, Jamie; Stagg, Charlotte J; Cowey, Alan; Cohen Kadosh, Roi
2015-11-01
Phosphenes are illusory visual percepts produced by the application of transcranial magnetic stimulation to occipital cortex. Phosphene thresholds, the minimum stimulation intensity required to reliably produce phosphenes, are widely used as an index of cortical excitability. However, the neural basis of phosphene thresholds and their relationship to individual differences in visual cognition are poorly understood. Here, we investigated the neurochemical basis of phosphene perception by measuring basal GABA and glutamate levels in primary visual cortex using magnetic resonance spectroscopy. We further examined whether phosphene thresholds would relate to the visuospatial phenomenology of grapheme-color synesthesia, a condition characterized by atypical binding and involuntary color photisms. Phosphene thresholds negatively correlated with glutamate concentrations in visual cortex, with lower thresholds associated with elevated glutamate. This relationship was robust, present in both controls and synesthetes, and exhibited neurochemical, topographic, and threshold specificity. Projector synesthetes, who experience color photisms as spatially colocalized with inducing graphemes, displayed lower phosphene thresholds than associator synesthetes, who experience photisms as internal images, with both exhibiting lower thresholds than controls. These results suggest that phosphene perception is driven by interindividual variation in glutamatergic activity in primary visual cortex and relates to cortical processes underlying individual differences in visuospatial awareness. © The Author 2015. Published by Oxford University Press.
A threshold-based approach for muscle contraction detection from surface EMG signals
NASA Astrophysics Data System (ADS)
Morantes, Gaudi; Fernández, Gerardo; Altuve, Miguel
2013-11-01
Surface electromyographic (SEMG) signals are commonly used as control signals in prosthetic and orthotic devices. Superficial electrodes are placed on the skin of the subject to acquire muscular activity through this signal. The muscle contraction episode is then in charge of activating and deactivating these devices. Nevertheless, there is no "gold standard" to detect muscle contraction, leading to delayed responses and false and missed detections. This fact motivated us to propose a new approach that compares a smoothed version of the SEMG signal with a fixed threshold in order to detect muscle contraction episodes. After preprocessing the SEMG signal, the smoothed version is obtained using a moving average filter, where three different window lengths have been evaluated. The detector was tuned by maximizing sensitivity and specificity and evaluated using SEMG signals obtained from the anterior tibial and gastrocnemius muscles, recorded during walking in five subjects. Compared with traditional detection methods, we obtain a reduction of 3 ms in the detection delay and an increase of 8% in sensitivity, but a decrease of 15% in specificity. Future work is directed to the inclusion of a temporal threshold (a double-threshold approach) to minimize false detections and reduce detection delays.
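A minimal Python sketch of the approach described above (rectification, moving-average smoothing, comparison with a fixed threshold) is shown below; the window length, the baseline-based threshold rule, and the function names are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

def detect_contractions(semg, fs, window_ms=50.0, threshold=None,
                        k=3.0, baseline_s=1.0):
    """Rectify the SEMG, smooth it with a moving-average window and compare
    it against a fixed threshold; returns a boolean "contraction" mask.
    If no threshold is given, baseline mean + k*SD over the first
    baseline_s seconds is used (an assumed rule -- the paper tunes its
    threshold by maximizing sensitivity and specificity)."""
    x = np.abs(np.asarray(semg, dtype=float) - np.mean(semg))
    n = max(1, int(round(window_ms * 1e-3 * fs)))
    envelope = np.convolve(x, np.ones(n) / n, mode="same")
    if threshold is None:
        base = envelope[: int(baseline_s * fs)]
        threshold = base.mean() + k * base.std()
    return envelope > threshold

# Example: mask = detect_contractions(signal, fs=1000)
# Contraction onsets are the rising edges of the mask, e.g.
# np.flatnonzero(np.diff(mask.astype(int)) == 1).
```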
Anderson, Elizabeth S; Oxenham, Andrew J; Nelson, Peggy B; Nelson, David A
2012-12-01
Measures of spectral ripple resolution have become widely used psychophysical tools for assessing spectral resolution in cochlear-implant (CI) listeners. The objective of this study was to compare spectral ripple discrimination and detection in the same group of CI listeners. Ripple detection thresholds were measured over a range of ripple frequencies and were compared to spectral ripple discrimination thresholds previously obtained from the same CI listeners. The data showed that performance on the two measures was correlated, but that individual subjects' thresholds (at a constant spectral modulation depth) for the two tasks were not equivalent. In addition, spectral ripple detection was often found to be possible at higher rates than expected based on the available spectral cues, making it likely that temporal-envelope cues played a role at higher ripple rates. Finally, spectral ripple detection thresholds were compared to previously obtained speech-perception measures. Results confirmed earlier reports of a robust relationship between detection of widely spaced ripples and measures of speech recognition. In contrast, intensity difference limens for broadband noise did not correlate with spectral ripple detection measures, suggesting a dissociation between the ability to detect small changes in intensity across frequency and across time.
Kriska, György; Bernáth, Balázs; Farkas, Róbert; Horváth, Gábor
2009-12-01
With few exceptions insects whose larvae develop in freshwater possess positive polarotaxis, i.e., are attracted to sources of horizontally polarized light, because they detect water by means of the horizontal polarization of light reflected from the water surface. These insects can be deceived by artificial surfaces (e.g. oil lakes, asphalt roads, black plastic sheets, dark-coloured cars, black gravestones, dark glass surfaces, solar panels) reflecting highly and horizontally polarized light. Apart from the surface characteristics, the extent of such a 'polarized light pollution' depends on the illumination conditions, direction of view, and the threshold p* of polarization sensitivity of a given aquatic insect species. p* means the minimum degree of linear polarization p of reflected light that can elicit positive polarotaxis from a given insect species. Earlier there were no quantitative data on p* in aquatic insects. The aim of this work is to provide such data. Using imaging polarimetry in the red, green and blue parts of the spectrum, in multiple-choice field experiments we measured the threshold p* of ventral polarization sensitivity in mayflies, dragonflies and tabanid flies, the positive polarotaxis of which has been shown earlier. In the blue (450nm) spectral range, for example, we obtained the following thresholds: dragonflies: Enallagma cyathigerum (0%
50 CFR 600.315 - National Standard 2-Scientific Information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... sources to improve understanding and management of the resource, marine ecosystem, and the fishery... any stock or stock complex is approaching the minimum stock size threshold. (ii) Any management..., social, and ecological information pertinent to the success of management or the achievement of...
50 CFR 600.315 - National Standard 2-Scientific Information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... sources to improve understanding and management of the resource, marine ecosystem, and the fishery... any stock or stock complex is approaching the minimum stock size threshold. (ii) Any management..., social, and ecological information pertinent to the success of management or the achievement of...
Multiple addresses: 201 S. LSD and 401 N. LSD, Lindsay Light Radiological Survey
The field gamma measurements beneath the removed pavement and within the excavated areas of the project did not exceed the instrument threshold previously stated, and ranged from a minimum of 700 cpm to a maximum of 4,500 cpm shielded.
615-640 N. Michigan Ave. and 101-155 E. Ontario, January 2018, Lindsay Light Radiological Survey
The field gamma measurements for the spoil and within the remaining excavations did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,400 cpm to a maximum of 4,000 cpm shielded.
The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,200 cpm to a maximum of 3,100 cpm shielded.
NASA Astrophysics Data System (ADS)
Aini, S.; Nizar, U. K.; NST, A. Amelia; Efendi, J.
2018-04-01
This research concerns the identification and purification of silica sand from the Nyalo River, which will be used as a raw material for the synthesis of sodium silicate. Silica sand was separated from clay by washing it with water, and the existing alumina and iron oxide were then removed by soaking the silica sand in 1 M HNO3 solution. Qualitative and quantitative analysis of the silica sand with X-ray diffraction and X-ray fluorescence revealed that the silica sand existed in quartz form and contained a small amount of impurity oxides such as Al2O3, K2O, MgO, CaO and Fe2O3, with percentages below the minimum threshold. The percentage of silica was 80.59% before purification. After three purification steps the silica percentage became 98.38%, which exceeds the minimum threshold of silica percentage for industry. The silica sand from the Nyalo River therefore has high potential as a raw material for synthesizing sodium silicate.
Mantle, Jennifer L; Min, Lie; Lee, Kelvin H
2016-12-05
A human cell-based in vitro model that can accurately predict drug penetration into the brain as well as metrics to assess these in vitro models are valuable for the development of new therapeutics. Here, human induced pluripotent stem cells (hPSCs) are differentiated into a polarized monolayer that expresses blood-brain barrier (BBB)-specific proteins and has transendothelial electrical resistance (TEER) values greater than 2500 Ω·cm². By assessing the permeabilities of several known drugs, a benchmarking system to evaluate brain permeability of drugs was established. Furthermore, relationships between TEER and permeability to both small and large molecules were established, demonstrating that different minimum TEER thresholds must be achieved to study the brain transport of these two classes of drugs. This work demonstrates that this hPSC-derived BBB model exhibits an in vivo-like phenotype, and the benchmarks established here are useful for assessing functionality of other in vitro BBB models.
Approximate Solutions for a Self-Folding Problem of Carbon Nanotubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y Mikata
2006-08-22
This paper treats approximate solutions for a self-folding problem of carbon nanotubes. It has been observed in the molecular dynamics calculations [1] that a carbon nanotube with a large aspect ratio can self-fold due to van der Waals force between the parts of the same carbon nanotube. The main issue in the self-folding problem is to determine the minimum threshold length of the carbon nanotube at which it becomes possible for the carbon nanotube to self-fold due to the van der Waals force. An approximate mathematical model based on the force method is constructed for the self-folding problem of carbon nanotubes, and it is solved exactly as an elastica problem using elliptic functions. Additionally, three other mathematical models are constructed based on the energy method. As a particular example, the lower and upper estimates for the critical threshold (minimum) length are determined based on both methods for the (5,5) armchair carbon nanotube.
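A rough energy-balance scaling consistent with the problem statement, though not the paper's elastica solution, balances the bending energy stored in the fold (bending stiffness EI, fold radius of order the free length L) against the van der Waals adhesion energy per unit length γ gained by the doubled-back section:

```latex
U_{\mathrm{bend}} \sim \frac{EI}{L},
\qquad
U_{\mathrm{vdW}} \sim -\gamma L
\quad\Longrightarrow\quad
L_{\min} \sim C\,\sqrt{\frac{EI}{\gamma}}
```

Here C is an order-one-to-ten prefactor of the kind the force and energy methods in the paper bound from below and above; the symbols EI, γ and C are introduced here purely for illustration.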
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
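A minimal sketch of two of the binarization schemes mentioned above, uniform thresholding and an MST backbone, is given below using NumPy/SciPy; the distance transform (1/weight) and the helper names are assumptions, and the MCC and density-preserving methods are not shown.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def uniform_threshold(W, tau):
    """Binarize a weighted connectivity matrix: keep edges with |weight| > tau."""
    A = (np.abs(W) > tau).astype(int)
    np.fill_diagonal(A, 0)
    return A

def mst_backbone(W):
    """Keep only the Minimum Spanning Tree of the distance graph
    (distance = 1/weight for positive weights), a common way to obtain a
    sparse, density-independent backbone from a dense connectivity matrix."""
    dist = np.zeros_like(W, dtype=float)
    mask = W > 0
    dist[mask] = 1.0 / W[mask]          # zero entries are treated as "no edge"
    mst = minimum_spanning_tree(dist).toarray()
    A = (mst > 0).astype(int)
    return A | A.T                      # symmetrize for an undirected network
```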
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by ingenious adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted in any DITI-guided facial healthcare or biometric application.
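A hedged sketch of the first stage, minimum error thresholding of the thermal-intensity histogram, is given below; it implements the classic Kittler-Illingworth criterion, which is assumed here to be the variant the authors mean, and the subsequent adaptive projection steps are not shown.

```python
import numpy as np

def minimum_error_threshold(image, bins=256):
    """Kittler-Illingworth minimum-error threshold: pick the level t that
    minimizes the two-Gaussian classification-error criterion J(t) computed
    from the image histogram."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    levels = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_J = None, np.inf
    for t in range(1, bins - 1):
        P1, P2 = p[:t].sum(), p[t:].sum()
        if P1 < 1e-6 or P2 < 1e-6:
            continue
        m1 = (p[:t] * levels[:t]).sum() / P1
        m2 = (p[t:] * levels[t:]).sum() / P2
        v1 = (p[:t] * (levels[:t] - m1) ** 2).sum() / P1
        v2 = (p[t:] * (levels[t:] - m2) ** 2).sum() / P2
        if v1 <= 0 or v2 <= 0:
            continue
        J = (1 + 2 * (P1 * np.log(np.sqrt(v1)) + P2 * np.log(np.sqrt(v2)))
               - 2 * (P1 * np.log(P1) + P2 * np.log(P2)))
        if J < best_J:
            best_J, best_t = J, levels[t]
    return best_t

# Candidate facial (warm) region: image > minimum_error_threshold(image)
```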
Reliability of the individual components of the Canadian Armed Forces Physical Employment Standard.
Stockbrugger, Barry G; Reilly, Tara J; Blacklock, Rachel E; Gagnon, Patrick J
2018-01-29
This investigation recruited 24 participants from both the Canadian Armed Forces (CAF) and civilian populations to complete 4 separate trials at "best effort" of each of the 4 components in the CAF Physical Employment Standard named the FORCE Evaluation: Fitness for Operational Requirements of CAF Employment. Analyses were performed to examine the level of variability and reliability within each component. The results demonstrate that candidates should be provided with at least 1 retest if they have recently completed at least 2 previous best-effort attempts as per the protocol. In addition, the minimal detectable difference is given in seconds for each of the 4 components, which identifies the threshold for subsequent action, either retest or remedial training, for those unable to meet the minimum standard. These results will inform the delivery of this employment standard, serve as a method of accommodation, and provide direction for physical training programs.
A pilot analytic study of a research-level, lower-cost human papillomavirus 16, 18, and 45 test.
Yang, Hannah P; Walmer, David K; Merisier, Delson; Gage, Julia C; Bell, Laura; Rangwala, Sameera; Shrestha, Niwashin; Kobayashi, Lori; Eder, Paul S; Castle, Philip E
2011-09-01
The analytic performance of a low-cost, research-stage DNA test for the most carcinogenic human papillomavirus (HPV) genotypes (HPV16, HPV18, and HPV45) in aggregate was evaluated among carcinogenic HPV-positive women; such a test might be used to decide who needs immediate colposcopy in low-resource settings (a "triage test"). We found that the HPV16/18/45 test agreed well with two DNA tests, a GP5+/6+ genotyping assay (Kappa = 0.77) and a quantitative PCR assay at a cutpoint of 5000 viral copies (Kappa = 0.87). DNA sequencing on a subset of 16 HPV16/18/45-positive and 16 HPV16/18/45-negative specimens verified the analytic specificity of the research test. It is concluded that the HPV16/18/45 assay is a promising triage test with a minimum detection of approximately 5000 viral copies, the clinically relevant threshold. Published by Elsevier B.V.
Reißhauer, A; Liebl, M E
2012-07-01
Standards for what should be available in terms of equipment and services in a department of physical medicine caring for acute inpatients do not exist in Germany. The profile of a department determines the therapeutic services it focuses on and hence the technical facilities required. The German catalogue of operations and procedures defines minimum thresholds for treatment. In the opinion of the authors a department caring for inpatients with acute rheumatic diseases must, as a minimum, have the facilities and equipment necessary for offering thermotherapeutic treatment. Staff trained in physical therapeutic procedures and occupational therapy is also crucial. Moreover, it is desirable that the staff should be trained in manual therapy.
Sugar Detection Threshold After Laparoscopic Sleeve Gastrectomy in Adolescents.
Abdeen, Ghalia N; Miras, Alexander D; Alqhatani, Aayed R; le Roux, Carel W
2018-05-01
Obesity in young people is one of the most serious public health problems worldwide. Moreover, the mechanisms preventing obese adolescents from losing and maintaining weight loss have been elusive. Laparoscopic sleeve gastrectomy (LSG) is successful at achieving long-term weight loss in patients across all age groups, including children and adolescents. Anecdotal clinical observation as well as evidence in rodents suggests that LSG induces a shift in preference for sugary foods. However, it is not known whether this shift is due to a change in the threshold for gustatory detection of sucrose, or whether LSG induces behavioral change without affecting the gustatory threshold for sugar. The objective of this study was to determine whether adolescents who undergo LSG experience a change in their threshold for detecting sweet taste. We studied the sucrose detection threshold of 14 obese adolescents (age 15.3 ± 0.5 years, range 12-18) who underwent LSG, testing them 2 weeks before surgery and at 12 and 52 weeks after surgery. Matched non-surgical subjects were tested on two occasions 12 weeks apart to control for potential learning of the test that may have confounded the results. Seven sucrose concentrations were used and were tested in eight blocks, with each block consisting of a random seven sucrose and seven water stimuli. The subjects were asked to report whether the sample contained water or not after they tasted 15 ml of the fluid for 10 s. The body weight of the LSG group decreased from 136.7 ± 5.4 to 109.6 ± 5.1 and 86.5 ± 4.0 kg after 12 and 52 weeks, respectively (p < 0.001). There was no significant difference in the taste detection threshold of patients after LSG (p = 0.60), and no difference was observed when comparing the taste detection threshold of the LSG group with the non-surgical controls (p = 0.38). LSG did not affect the taste detection threshold for sucrose, suggesting that the shift in preference for sugary foods may be due to factors other than fundamental changes in taste sensitivity.
NASA Astrophysics Data System (ADS)
Monakhov, A. A.; Chernyavski, V. M.; Shtemler, Yu.
2013-09-01
Bounds of cavitation inception are experimentally determined in a creeping flow between eccentric cylinders, the inner one being static and the outer rotating at a constant angular velocity, Ω. The geometric configuration is additionally specified by a small minimum gap between the cylinders, H, as compared with the radii of the inner and outer cylinders. For some values of H and Ω, cavitation bubbles are observed, which collect on the surface of the inner cylinder and are equally distributed along the line parallel to its axis near the downstream minimum-gap position. Cavitation occurs for parameters {H, Ω} within a region bounded on the right by the cavitation inception curve, which passes through the plane origin and cannot exceed the asymptotic threshold value of the minimum gap, Ha, in whose vicinity cavitation may occur at H < Ha only for high angular rotation velocities.
Choice of Grating Orientation for Evaluation of Peripheral Vision
Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda
2016-01-01
Purpose: Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Methods: Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. Results: For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Conclusions: Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation. PMID:26889822
Choice of Grating Orientation for Evaluation of Peripheral Vision.
Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda
2016-06-01
Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation.
NASA Astrophysics Data System (ADS)
Hu, Hang; Yu, Hong; Zhang, Yongzhi
2013-03-01
Cooperative spectrum sensing, which can greatly improve the ability to discover spectrum opportunities, is regarded as an enabling mechanism for cognitive radio (CR) networks. In this paper, we employ a double-threshold detection method in the energy detector to perform spectrum sensing; only the CR users with reliable sensing information are allowed to transmit a one-bit local decision to the fusion center. Simulation results show that our proposed double-threshold detection method can not only improve the sensing performance but also save the bandwidth of the reporting channel compared with the conventional single-threshold detection method. By weighting the sensing performance and the consumption of system resources in a utility function that is maximized with respect to the number of CR users, it is shown that the optimal number of CR users is related to the price of these Quality-of-Service (QoS) requirements.
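A minimal sketch of the double-threshold rule described above: each CR user computes its received energy, reports a one-bit decision only when the energy falls outside the two thresholds, and stays silent otherwise. The threshold values, the OR fusion rule, and the function names are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def local_decision(samples, lam1, lam2):
    """Double-threshold energy detector for one CR user.
    Returns 1 (signal present), 0 (signal absent), or None when the energy
    falls between the two thresholds -- that user sends nothing to the
    fusion center, saving reporting bandwidth."""
    energy = np.sum(np.abs(np.asarray(samples)) ** 2)
    if energy >= lam2:
        return 1
    if energy <= lam1:
        return 0
    return None          # unreliable observation: no report

def fusion_or(decisions):
    """OR-rule fusion over the one-bit reports that were actually sent
    (an assumed rule; the abstract does not state which fusion rule is used)."""
    reported = [d for d in decisions if d is not None]
    return int(any(reported)) if reported else 0
```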
Theoretical detection threshold of the proton-acoustic range verification technique.
Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei
2015-10-01
Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1-10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. The calculated noise in the transducer was 12-28 mPa, depending on the transducer central frequency (70-380 kHz). The minimum number of protons detectable by the technique was on the order of 3-30 × 10⁶ per pulse, with 30-800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10⁶ protons/pulse and beam current.
Theoretical detection threshold of the proton-acoustic range verification technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Moiz; Yousefi, Siavash; Xing, Lei, E-mail: lei@stanford.edu
2015-10-15
Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10⁶ per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10⁶ protons/pulse and beam current.
Theoretical detection threshold of the proton-acoustic range verification technique
Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei
2015-01-01
Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10⁶ per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10⁶ protons/pulse and beam current. PMID:26429247
Minimum duration of actigraphy-defined nocturnal awakenings necessary for morning recall.
Winser, Michael A; McBean, Amanda L; Montgomery-Downs, Hawley E
2013-07-01
Healthy adults awaken between each sleep cycle approximately 5 times each night but generally do not remember all of these awakenings in the morning. A rule of thumb has arisen in the sleep field that approximately 5 min of continuous wakefulness are required to form a memory for an awakening. However, few studies have examined memory for these sleep-wake transitions and none have done so in the home, while participants follow their normal routine. Self-report and actigraphy were used in the participant's home environment to determine the minimum duration of an awakening necessary for morning recall for each of the 39 healthy adults. Recall thresholds ranged from 30 to 600 s with a mean of 259 s (4 min 19 s) and were negatively associated with sleep efficiency but not significantly associated with total sleep time, age, income, or education. There also was a sex by cohabitation interaction, with single men having lower thresholds than single women and cohabiting participants, which was explained by higher sleep efficiency in noncohabitating men. Large individual differences suggest that many factors may influence recall threshold. Our preliminary study is the first to calculate the duration of wakefulness necessary for morning recall of nocturnal awakenings and the first to use a field-based design, allowing for the study of habitual sleep patterns at the participant's home. Further study is needed to explore if recall thresholds calculated using actigraphy can be validated against polysomnography (PSG) or be used to guide potential treatments. Copyright © 2013 Elsevier B.V. All rights reserved.
Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping
2016-02-01
Previous simulation works concerned with the mechanism of non-invasive neuromodulation have isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variances were also observed within layers; reducing the pulse length could magnify the threshold values as well as the variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation intimately resemble recordings in physiological and clinical studies, which seems to suggest that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluation of cell activities in response to the externally applied field.
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-04-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1-2 deg s⁻¹. After unilateral vestibular injury patients' direction-discrimination thresholds worsen to ∼20 deg s⁻¹, and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s⁻¹). While thresholds showed slight improvement by week 3 (25 deg s⁻¹), they never recovered to control values - a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues.
Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers
Situ, Ping; Simpson, Trefford; Begley, Carolyn
2016-01-01
Purpose To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (Baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus were estimated using a 1-100 scale (1 very mild, 100 very strong). Results In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001, post hoc asymptomatic vs. symptomatic group, all p’s < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p=0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p>0.05) at detection threshold. Conclusions Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090
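The threshold rule used above (an ascending method of limits whose threshold is the average of the first-reported flow rates) can be written in a few lines. The sketch below takes one reading of that rule, the first detected flow rate in each of several ascending runs; the function and argument names are hypothetical.

```python
def first_detected(flow_rates, responses):
    """Return the first flow rate in an ascending run that the subject reported."""
    for flow, seen in zip(flow_rates, responses):
        if seen:
            return flow
    raise ValueError("stimulus was never detected in this run")

def ascending_limits_threshold(runs):
    """Mean of the first-reported flow rates over several ascending runs.

    `runs` is a list of (flow_rates, responses) pairs, one per ascending
    series, with flow rates in the esthesiometer's units (e.g., ml/min).
    """
    firsts = [first_detected(flows, resps) for flows, resps in runs]
    return sum(firsts) / len(firsts)
```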
Effect of mental stress on cold pain in chronic tension-type headache sufferers.
Cathcart, Stuart; Winefield, Anthony H; Lushington, Kurt; Rolan, Paul
2009-10-01
Mental stress is a noted contributing factor in chronic tension-type headache (CTH); however, the mechanisms underlying this are not clearly understood. One proposition is that stress aggravates already increased pain sensitivity in CTH sufferers. This hypothesis could be partially tested by examining the effects of mental stress on threshold and supra-threshold experimental pain processing in CTH sufferers. Such studies have not been reported to date. The present study measured pain detection and tolerance thresholds and ratings of supra-threshold pain stimulation from the cold pressor test in CTH sufferers (CTH-S) and healthy control (CNT) subjects exposed to a 60-min stressful mental task, and in CTH sufferers exposed to a 60-min neutral condition (CTH-N). Headache sufferers had lower pain tolerance thresholds and increased pain intensity ratings compared to controls. Pain detection and tolerance thresholds decreased and pain intensity ratings increased during the stress task, with a greater reduction in pain detection threshold and a greater increase in pain intensity ratings in the CTH-S group compared to the CNT group. The results support the hypothesis that mental stress contributes to CTH by aggravating already increased pain sensitivity in CTH sufferers.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other applications where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective action. Gradients are widely used in surface inspection systems to highlight and subsequently segment areas of interest, but segmentation with a fixed threshold value often gives unsatisfactory results. Because defects range from very small to very large, segmentation of a gradient image with a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image, and is able to segment defective regions selectively while preserving the characteristics of the defects irrespective of their size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
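A minimal sketch of the idea, not the paper's exact adaptation rule: compute a gradient-magnitude image, count how many pixels exceed a high gray level, and lower the segmentation percentile as that count grows so that large defects are not clipped. All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_threshold(image, high_gray=200, base_pct=99.0,
                                  min_pct=95.0, scale=5e-4):
    """Segment defect candidates by percentile thresholding of a gradient image.

    The percentile is adapted globally: the more pixels whose gradient
    magnitude exceeds `high_gray`, the larger the defect is assumed to be and
    the lower the percentile used, so that large defects are not
    under-segmented.
    """
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    grad = 255.0 * grad / (grad.max() + 1e-9)         # normalise to 0-255

    n_strong = int((grad > high_gray).sum())          # pixels with strong gradients
    pct = max(min_pct, base_pct - scale * n_strong)   # lower percentile for big defects
    return grad > np.percentile(grad, pct)
```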
21 CFR 886.1050 - Adaptometer (biophotometer).
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Adaptometer (biophotometer). 886.1050 Section 886.1050 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... (regeneration of the visual purple) and the minimum light threshold. (b) Classification. Class I (general...
Issues with fruit dietary supplements in the US - authentication by anthocyanin
USDA-ARS?s Scientific Manuscript database
Current fruit-based dietary supplements in the US marketplace have no obligation to meet any fruit-component concentration requirement. For example, berry supplements might be promoted for their high anthocyanin content, but they actually have no standard or minimum anthocyanin threshold for legal s...
The field gamma measurements within the excavation during the excavation process did not exceed the previously stated instrument threshold and ranged from a minimum of 1,100 cpm to a maximum of 2,100 cpm shielded.
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
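One common way to turn swept-parameter ssVEP data into a threshold is to regress the above-noise portion of the first-harmonic amplitude onto the swept variable and extrapolate back to the noise floor. The sketch below illustrates that idea under simplifying assumptions; it is not necessarily the estimator used in the study.

```python
import numpy as np

def sweep_vep_threshold(coherence, amplitude, noise_floor):
    """Estimate a detection threshold from a swept-parameter ssVEP response.

    `coherence` is the swept stimulus parameter (here, the phase coherence of
    the face image), `amplitude` the first-harmonic (3 Hz) amplitude at each
    sweep step, and `noise_floor` an estimate of the EEG noise amplitude.  The
    rising portion of the response is fitted with a line and extrapolated back
    to the noise floor; the crossing point is taken as the threshold.
    """
    coherence = np.asarray(coherence, float)
    amplitude = np.asarray(amplitude, float)
    above = amplitude > noise_floor
    if above.sum() < 2:
        return np.nan                      # response never emerged from the noise
    slope, intercept = np.polyfit(coherence[above], amplitude[above], 1)
    if slope <= 0:
        return np.nan
    return (noise_floor - intercept) / slope
```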
Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich
2016-04-15
The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk could be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. This paper presents an experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experiments were carried out in two laboratory-scale apparatuses: the Hartmann apparatus for the minimum ignition energy test and the Godbert-Greenwald furnace for the minimum ignition temperature test. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high-density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached beyond which no ignition is obtained. The permissible range of inert fraction to minimize the ignition risk lies between 60 and 80%. Copyright © 2016 Elsevier B.V. All rights reserved.
Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar
2015-01-01
In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538
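The abstract defines the K point deviation only briefly (the baseline deviation at the minimum of the ST-segment envelope signal). The sketch below shows one possible reading of that definition for a multi-lead recording; the exact construction of the envelope in the paper may differ.

```python
import numpy as np

def k_point_deviation(st_segments):
    """One possible reading of the K point deviation described above.

    `st_segments` is an array of shape (n_leads, n_samples) holding the
    baseline-corrected ST segments of all leads (deviation from the
    isoelectric line, in mV).  The envelope signal is taken here as the
    largest absolute deviation across leads at each instant; the K point is
    the minimum of that envelope, and the returned value is the deviation at
    that instant.
    """
    st = np.asarray(st_segments, float)
    envelope = np.abs(st).max(axis=0)      # across-lead envelope of |deviation|
    k_idx = int(np.argmin(envelope))       # K point: minimum of the envelope
    return float(envelope[k_idx])
```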
Hoover, Adria E N; Elzein, Yasmeenah; Harris, Laurence R
2016-07-01
Right-handed people show an advantage in detecting a delay in visual feedback concerning an active movement of their right hand when it is viewed in a natural perspective compared to when it is seen as if viewing another person's hand (Hoover and Harris in Exp Brain Res 233:1053-1060, 2012. doi: 10.1007/s00221-014-4181-9 ; Exp Brain Res 222:389-397, 2015a. doi: 10.1007/s00221-012-3224-3 ). This self-advantage is unique to their dominant hand and may reflect an enhanced sense of ownership which contributes to how right-handed people relate to the world. Here we asked whether left-handers show the same pattern of performance for their dominant hand. We measured the minimum delay that could be detected by 29 left-handers when viewing either their dominant or non-dominant hand from 'self' or 'other' perspectives and compared their thresholds to an age-matched sample of 22 right-handers. Right-handers showed a significant signature self-advantage of 19 ms when viewing their dominant hand in an expected 'self' perspective compared to 'other' perspectives. Left-handers, however, showed no such advantage for either their dominant or non-dominant hand. This lack of self-advantage in detecting delayed visual feedback might indicate a less secure sense of body ownership amongst left-handers.
Rapid bacteriological screening of cosmetic raw materials by using bioluminescence.
Nielsen, P; Van Dellen, E
1989-01-01
Incoming cosmetic raw materials are routinely tested for microbial content. Standard plate count methods require up to 72 h. A rapid, sensitive, and inexpensive raw material screening method was developed that detects the presence of bacteria by means of ATP (bioluminescence). With a 24-h broth enrichment, the minimum bacterial ATP detection threshold of 1 cfu/g sample can be achieved using purified firefly luciferin-luciferase and an ATP-releasing reagent. By using this rapid screen, microbiologically free material may be released for production within 24 h, while contaminated material undergoes further quantitative and identification testing. In order for a raw material to be validated for this method, it must be evaluated for (1) a potential nonmicrobial light-contributing reaction resulting in a false positive, (2) degradation of the ATP giving a false negative, and (3) confirmation that the raw material has not overwhelmed the buffering capacity of the enrichment broth. The key criteria for a rapid screen were the sensitivity to detect less than one colony-forming unit per g of product, the speed to do this within 24 h, and cost efficiency. Bioluminescence meets these criteria. With an enrichment step, it can detect less than one cfu/g sample. After the enrichment step, analysis time per sample is approximately 2 min and the cost of materials and reagents is less than one dollar per sample.
Ray, Laura B.; Sockeel, Stéphane; Soon, Melissa; Bore, Arnaud; Myhr, Ayako; Stojanoski, Bobby; Cusack, Rhodri; Owen, Adrian M.; Doyon, Julien; Fogel, Stuart M.
2015-01-01
A spindle detection method was developed that: (1) extracts the signal of interest (i.e., spindle-related phasic changes in sigma) relative to ongoing “background” sigma activity using complex demodulation, (2) accounts for variations of spindle characteristics across the night, scalp derivations and between individuals, and (3) employs a minimum number of sometimes arbitrary, user-defined parameters. Complex demodulation was used to extract instantaneous power in the spindle band. To account for intra- and inter-individual differences, the signal was z-score transformed using a 60 s sliding window, per channel, over the course of the recording. Spindle events were detected with a z-score threshold corresponding to a low probability (e.g., 99th percentile). Spindle characteristics, such as amplitude, duration and oscillatory frequency, were derived for each individual spindle following detection, which permits spindles to be subsequently and flexibly categorized as slow or fast spindles from a single detection pass. Spindles were automatically detected in 15 young healthy subjects. Two experts manually identified spindles from C3 during Stage 2 sleep, from each recording; one employing conventional guidelines, and the other, identifying spindles with the aid of a sigma (11–16 Hz) filtered channel. These spindles were then compared between raters and to the automated detection to identify the presence of true positives, true negatives, false positives and false negatives. This method of automated spindle detection resolves or avoids many of the limitations that complicate automated spindle detection, and performs well compared to a group of non-experts, and importantly, has good external validity with respect to the extant literature in terms of the characteristics of automatically detected spindles. PMID:26441604
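The pipeline described above (complex demodulation, a 60 s sliding z-score per channel, and a low-probability cut-off) translates almost directly into code. Below is a minimal sketch; the 13.5 Hz carrier (centre of an 11-16 Hz sigma band), the filter order and the 2.33 z-score cut-off (roughly the 99th percentile of a normal distribution) are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_spindles(eeg, fs, fc=13.5, bw=5.0, win_s=60.0, z_thresh=2.33):
    """Flag samples belonging to putative spindle events in one EEG channel.

    1. Complex demodulation at `fc` extracts instantaneous sigma-band power.
    2. The power is z-scored within a sliding `win_s` window to absorb
       within- and between-subject differences.
    3. Samples whose z-score exceeds `z_thresh` are flagged.
    """
    t = np.arange(len(eeg)) / fs
    demod = eeg * np.exp(-2j * np.pi * fc * t)        # shift the sigma band to 0 Hz
    b, a = butter(4, bw / 2.0, btype="low", fs=fs)    # keep +/- bw/2 around fc
    power = np.abs(filtfilt(b, a, demod)) ** 2        # instantaneous sigma power

    half = int(win_s * fs / 2)
    z = np.empty_like(power)
    for i in range(len(power)):                       # sliding-window z-score
        seg = power[max(0, i - half): i + half]
        z[i] = (power[i] - seg.mean()) / (seg.std() + 1e-12)
    return z > z_thresh                               # boolean spindle mask
```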
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). Performance of selected proper thresholds was assessed by data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, FAR of 0%, and TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, FAR of 18.94%, and TTD was 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
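As a toy illustration of the percentile thresholds reported above (P65 in the epidemic season, P50 otherwise), the sketch below compares a current case count against the chosen percentile of historical counts for the same calendar window; the comparison logic is a simplification, not the full CIDARS algorithm.

```python
import numpy as np

def outbreak_signal(current_count, historical_counts, epidemic_season,
                    p_epidemic=65, p_nonepidemic=50):
    """Return True if the current count exceeds the seasonal percentile threshold.

    `historical_counts` holds case counts from comparable windows of previous
    years; the percentile follows the optimized values reported in the
    abstract.
    """
    pct = p_epidemic if epidemic_season else p_nonepidemic
    return current_count > np.percentile(historical_counts, pct)

# Example: 12 chickenpox cases this week versus counts from the same weeks of
# previous years, during the epidemic season (numbers are hypothetical).
print(outbreak_signal(12, [3, 5, 7, 6, 4, 8, 9, 5, 6], epidemic_season=True))
```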
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Kastelein, Ronald A; Hoek, Lean; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F
2010-02-01
The underwater hearing sensitivities of two 2-year-old female harbor seals were quantified in a pool built for acoustic research by using a behavioral psycho-acoustic technique. The animals were trained only to respond when they detected an acoustic signal ("go/no-go" response). Detection thresholds were obtained for pure tone signals (frequencies: 0.2-40 kHz; durations: 0.5-5000 ms, depending on the frequency; 59 frequency-duration combinations). Detection thresholds were quantified by varying the signal amplitude by the 1-up, 1-down staircase method, and were defined as the stimulus levels, resulting in a 50% detection rate. The hearing thresholds of the two seals were similar for all frequencies except for 40 kHz, for which the thresholds differed by, on average, 3.7 dB. There was an inverse relationship between the time constant (tau), derived from an exponential model of temporal integration, and the frequency [log(tau)=2.86-0.94 log(f);tau in ms and f in kHz]. Similarly, the thresholds increased when the pulse was shorter than approximately 780 cycles (independent of the frequency). For pulses shorter than the integration time, the thresholds increased by 9-16 dB per decade reduction in the duration or number of cycles in the pulse. The results of this study suggest that most published hearing thresholds
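A quick consistency check of the reported regression (my arithmetic, using only the fit quoted above) shows why the integration limit is roughly constant when expressed in cycles:

```latex
% Fit reported above: \log_{10}\tau = 2.86 - 0.94\,\log_{10} f
% (\tau in ms, f in kHz), hence
\[
  \tau(f) = 10^{2.86}\, f^{-0.94} \approx 724\, f^{-0.94}\ \text{ms}.
\]
% The number of stimulus cycles contained in one time constant is
\[
  n(f) = \tau(f)\,[\mathrm{ms}] \times f\,[\mathrm{kHz}] \approx 724\, f^{0.06},
\]
% which varies only from about 660 cycles at 0.2 kHz to about 900 cycles at
% 40 kHz, i.e. of the same order as the roughly 780-cycle limit quoted above.
```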
de la Llave-Rincón, Ana Isabel; Fernández-de-las-Peñas, César; Fernández-Carnero, Josué; Padua, Luca; Arendt-Nielsen, Lars; Pareja, Juan A
2009-10-01
The aim of the current study was to evaluate bilaterally warm/cold detection and heat/cold pain thresholds over the hand/wrist in patients with carpal tunnel syndrome (CTS). A total of 25 women with strictly unilateral CTS (mean 42 +/- 10 years), and 20 healthy matched women (mean 41 +/- 8 years) were recruited. Warm/cold detection and heat/cold pain thresholds were assessed bilaterally over the carpal tunnel and the thenar eminence in a blinded design. Self-reported measures included both clinical pain history (intensity, location and area) and Boston Carpal Tunnel Questionnaire. No significant differences between groups for both warm and cold detection thresholds in either carpal tunnel or thenar eminence (P > 0.5) were found. Further, significant differences between groups, but not between sides, for both heat and cold pain thresholds in both the carpal tunnel and thenar eminence were found (all P < 0.001). Heat pain thresholds (P < 0.01) were negatively correlated, whereas cold pain thresholds (P < 0.001) were positively correlated with hand pain intensity and duration of symptoms. Our findings revealed bilateral thermal hyperalgesia (lower heat pain and reduced cold pain thresholds) but not hypoesthesia (normal warm/cold detection thresholds) in patients with strictly unilateral CTS when compared to controls. We suggest that bilateral heat and cold hyperalgesia may reflect impairments in central nociceptive processing in patients with unilateral CTS. The bilateral thermal hyperalgesia associated with pain intensity and duration of pain history supports a role of generalized sensitization mechanisms in the initiation, maintenance and spread of pain in CTS.
Facial arthralgia and myalgia: can they be differentiated by trigeminal sensory assessment?
Eliav, Eli; Teich, Sorin; Nitzan, Dorit; El Raziq, Daood Abid; Nahlieli, Oded; Tal, Michael; Gracely, Richard H; Benoliel, Rafael
2003-08-01
Heat and electrical detection thresholds were assessed in 72 patients suffering from painful temporomandibular disorder. Employing widely accepted criteria, 44 patients were classified as suffering from temporomandibular joint (TMJ) arthralgia (i.e. pain originating from the TMJ) and 28 from myalgia (i.e. pain originating from the muscles of mastication). Electrical stimulation was employed to assess thresholds in large myelinated nerve fibers (Abeta) and heat application to assess thresholds in unmyelinated nerve fibers (C). The sensory tests were performed bilaterally in three trigeminal nerve sites: the auriculotemporal nerve territory (AUT), buccal nerve territory (BUC) and the mental nerve territory (MNT). In addition, 22 healthy asymptomatic controls were examined. A subset of ten arthralgia patients underwent arthrocentesis and electrical detection thresholds were additionally assessed following the procedure. Electrical detection threshold ratios were calculated by dividing the affected side by the control side, thus reduced ratios indicate hypersensitivity of the affected side. In control patients, ratios obtained at all sites did not vary significantly from the expected value of 'one' (mean with 95% confidence intervals; AUT, 1:0.95-1.06; BUC, 1.01:0.93-1.11; MNT, 0.97:0.88-1.05, all areas one sample analysis P>0.05). In arthralgia patients mean ratios (+/-SEM) obtained for the AUT territory (0.63+/-0.03) were significantly lower compared to ratios for the MNT (1.02+/-0.03) and BUC (0.96+/-0.04) territories (repeated measures analysis of variance (RANOVA), P<0.0001) and compared to the AUT ratios in myalgia (1.27+/-0.09) and control subjects (1+/-0.06, ANOVA, P<0.0001). In the myalgia group the electrical detection threshold ratios in the AUT territory were significantly elevated compared to the AUT ratios in control subjects (Dunnett test, P<0.05), but only approached statistical significance compared to the MNT (1.07+/-0.04) and BUC (1.11+/-0.06) territories (RANOVA, F(2,27)=3.12, P=0.052). There were no significant differences between and within the groups for electrical detection threshold ratios in the BUC and MNT nerve territories, and for the heat detection thresholds in all tested sites. Following arthrocentesis, mean electrical detection threshold ratios in the AUT territory were significantly elevated from 0.64+/-0.06 to 0.99+/-0.04 indicating resolution of the hypersensitivity (paired t-test, P=0.001). In conclusion, large myelinated fiber hypersensitivity is found in the skin overlying TMJs with clinical pain and pathology but is not found in controls. In patients with muscle-related facial pain there was significant elevation of the electrical detection threshold in the AUT region.
Mohammed, M I; Desmulliez, M P Y
2014-11-15
Cardiovascular diseases are the most prevalent medical conditions affecting the modern world, reducing the quality of life for those affected and placing an ever-increasing burden on clinical resources. Cardiac biomarkers are crucial in the diagnosis and management of patient outcomes. In that respect, it is desirable to measure such proteins at the point of care, overcoming the shortcomings of current instrumentation. We present a CO2 laser engraving technique for the rapid prototyping of a polymeric autonomous capillary system with embedded on-chip planar lenses and biosensing elements, the first step towards a fully miniaturised and integrated cardiac biosensing platform. The system has been applied to the detection of cardiac Troponin I, the gold standard biomarker for the diagnosis of acute myocardial infarction. The devised lab-on-a-chip device was demonstrated to have a 24 pg/ml limit of detection, which is well within the minimum threshold for clinically applicable concentrations. Assays were completed within approximately 7-9 min. Initial results suggest that, given the portability, low power consumption and high sensitivity of the device, this technology could be developed further into point-of-care instrumentation useful in the diagnosis of various forms of cardiovascular disease. Copyright © 2014 Elsevier B.V. All rights reserved.
[Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].
Voigtländer, S; Deiters, T
2015-12-01
Regional disparities in access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used and their empirical basis. A systematic literature review was undertaken of publications regarding minimum standards for the spatial accessibility of primary care, based on a title word and keyword search using PubMed, SSCI/Web of Science, EMBASE and the Cochrane Library. Eight minimum standards from the USA, Germany and Austria were identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half also include distance(s). The maximum acceptable travel time is 30 min, and it tends to be lower in urban areas. According to the identified minimum standards, primary care is part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e.g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experience, and regional planning/central place theory, as well as on legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i.e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, with a gravity-model method to identify places as well as populations with insufficient spatial accessibility. Due to the lack of a common minimum standard, we propose, subject to further discussion, to begin with a threshold based on the spatial accessibility limit of the local area, i.e. 30 min to the next primary care provider for at least 90% of the regional population. Exceeding the threshold would necessitate a discussion of a health care deficit and, in line with this, a potential need for intervention, e.g. alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
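The gravity-model complement proposed above can be illustrated with a two-step floating catchment area (2SFCA) index, a common gravity-type accessibility measure; the 30-minute catchment below follows the threshold discussed in the review, while the rest is an illustrative assumption rather than the authors' specification.

```python
import numpy as np

def two_step_fca(travel_min, supply, population, t_max=30.0):
    """Two-step floating catchment area accessibility.

    Step 1 computes each provider's supply-to-demand ratio within its
    catchment; step 2 sums those ratios over the providers reachable from each
    population location.  `travel_min[i, j]` is the travel time from location
    i to provider j in minutes, `supply[j]` the provider capacity and
    `population[i]` the resident population.
    """
    within = (np.asarray(travel_min, float) <= t_max).astype(float)
    demand = within.T @ np.asarray(population, float)        # demand per provider
    ratio = np.asarray(supply, float) / np.where(demand > 0, demand, np.inf)
    return within @ ratio                                    # accessibility per location
```

Locations whose index falls below a chosen cut-off (or that have no provider within 30 minutes at all) would be the candidates for the health care deficit discussion the authors describe.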
Salt taste after bariatric surgery and weight loss in obese persons
Maedge, Julia; Lam, Linda; Blasche, Gerhard; Shakeri-Leidenmühler, Soheila; Kundi, Michael; Ludvik, Bernhard; Langer, Felix B.; Prager, Gerhard; Schindler, Karin; Dürrschmid, Klaus
2016-01-01
Background. Little is known about the perception of salty taste in obese patients, especially after bariatric surgery. Therefore, the aim of this study was to analyse possible differences in salt detection thresholds and preferences for foods differing in salt content in obese persons before and after bariatric surgery with weight loss compared to non-obese individuals. Methods. Sodium chloride detection thresholds and liking for cream soups with different salt concentrations were studied with established tests. Moreover, a brief salt food questionnaire was administered to assess the usage and awareness of salt in food. Results. The results showed similar mean sodium chloride detection thresholds between non-obese and obese participants. After bariatric surgery a non-significant increase in the salt detection threshold was observed in the obese patients (mean ± SD: 0.44 ± 0.24 g NaCl/L before OP vs. 0.64 ± 0.47 g NaCl/L after OP, p = 0.069). Cream soup liking did not differ significantly between controls and obese patients. However, significant sex-specific differences were detected, with the tested women not liking the soups (p < 0.001). Results from the food questionnaire were similar between the groups. Conclusion. No differences between non-obese persons and obese patients were shown regarding the salt detection threshold. However, due to highly significant differences in soup liking, sex should be taken into consideration when conducting similar sensory studies. PMID:27330856
Effects of environmental conditions on onset of xylem growth in Pinus sylvestris under drought.
Swidrak, Irene; Gruber, Andreas; Kofler, Werner; Oberhuber, Walter
2011-05-01
We determined the influence of environmental factors (air and soil temperature, precipitation, photoperiod) on onset of xylem growth in Scots pine (Pinus sylvestris L.) within a dry inner Alpine valley (750 m a.s.l., Tyrol, Austria) by repeatedly sampling micro-cores throughout 2007-10 at two sites (xeric and dry-mesic) at the start of the growing season. Temperature sums were calculated in degree-days (DD) ≥5 °C from 1 January and 20 March, i.e., spring equinox, to account for photoperiodic control of release from winter dormancy. Threshold temperatures at which xylogenesis had a 0.5 probability of being active were calculated by logistic regression. Onset of xylem growth, which was not significantly different between the xeric and dry-mesic sites, ranged from mid-April in 2007 to early May in 2008. Among most study years, statistically significant differences (P<0.05) in onset of xylem growth were detected. Mean air temperature sums calculated from 1 January until onset of xylem growth were 230 ± 44 DD (mean ± standard deviation) at the xeric site and 205 ± 36 DD at the dry-mesic site. Temperature sums calculated from spring equinox until onset of xylem growth showed somewhat less variability during the 4-year study period, amounting to 144 ± 10 and 137 ± 12 DD at the xeric and dry-mesic sites, respectively. At both sites, xylem growth was active when daily minimum, mean and maximum air temperatures were 5.3, 10.1 and 16.2 °C, respectively. Soil temperature thresholds and DD until onset of xylem growth differed significantly between sites, indicating minor importance of root-zone temperature for onset of xylem growth. Although spring precipitation is known to limit radial growth in P. sylvestris exposed to a dry inner Alpine climate, the results of this study revealed that (i) a daily minimum air temperature threshold for onset of xylem growth in the range 5-6 °C exists and (ii) air temperature sum rather than precipitation or soil temperature triggers start of xylem growth. Based on these findings, we suggest that drought stress forces P. sylvestris to draw upon water reserves in the stem for enlargement of first tracheids after cambial resumption in spring. © The Author 2011. Published by Oxford University Press. All rights reserved.
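The two quantities at the core of the analysis above, a degree-day sum with a 5 °C base and a 0.5-probability temperature threshold from logistic regression, are straightforward to reproduce. The sketch below is a generic illustration with hypothetical inputs, not the authors' exact processing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def degree_days(daily_mean_temp, base=5.0):
    """Cumulative degree-day sum: each day contributes max(T_mean - base, 0) DD."""
    t = np.asarray(daily_mean_temp, float)
    return np.cumsum(np.clip(t - base, 0.0, None))

def temperature_threshold(temps, active, prob=0.5):
    """Temperature at which xylogenesis has probability `prob` of being active,
    from a logistic regression of the binary activity flag on temperature (the
    same kind of fit used for the 0.5-probability thresholds above)."""
    x = np.asarray(temps, float).reshape(-1, 1)
    y = np.asarray(active, int)
    model = LogisticRegression(C=1e6).fit(x, y)       # near-unpenalised fit
    b0, b1 = model.intercept_[0], model.coef_[0][0]
    return (np.log(prob / (1.0 - prob)) - b0) / b1    # invert logit(prob) = b0 + b1*T
```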
NASA Astrophysics Data System (ADS)
Karlsson, Karl-Göran; Håkansson, Nina
2018-02-01
The sensitivity in detecting thin clouds of the cloud screening method being used in the CM SAF cloud, albedo and surface radiation data set from AVHRR data (CLARA-A2) cloud climate data record (CDR) has been evaluated using cloud information from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite. The sensitivity, including its global variation, has been studied based on collocations of Advanced Very High Resolution Radiometer (AVHRR) and CALIOP measurements over a 10-year period (2006-2015). The cloud detection sensitivity has been defined as the minimum cloud optical thickness for which 50 % of clouds could be detected, with the global average sensitivity estimated to be 0.225. After using this value to reduce the CALIOP cloud mask (i.e. clouds with optical thickness below this threshold were interpreted as cloud-free cases), cloudiness results were found to be basically unbiased over most of the globe except over the polar regions where a considerable underestimation of cloudiness could be seen during the polar winter. The overall probability of detecting clouds in the polar winter could be as low as 50 % over the highest and coldest parts of Greenland and Antarctica, showing that a large fraction of optically thick clouds also remains undetected here. The study included an in-depth analysis of the probability of detecting a cloud as a function of the vertically integrated cloud optical thickness as well as of the cloud's geographical position. Best results were achieved over oceanic surfaces at mid- to high latitudes where at least 50 % of all clouds with an optical thickness down to a value of 0.075 were detected. Corresponding cloud detection sensitivities over land surfaces outside of the polar regions were generally larger than 0.2 with maximum values of approximately 0.5 over the Sahara and the Arabian Peninsula. For polar land surfaces the values were close to 1 or higher with maximum values of 4.5 for the parts with the highest altitudes over Greenland and Antarctica. It is suggested to quantify the detection performance of other CDRs in terms of a sensitivity threshold of cloud optical thickness, which can be estimated using active lidar observations. Validation results are proposed to be used in Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulation Package (COSP) simulators for cloud detection characterization of various cloud CDRs from passive imagery.
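One way to estimate the detection sensitivity defined above (the cloud optical thickness at which the probability of detection reaches 50%) from collocated imager/lidar pairs is sketched below: bin the matchups by lidar optical depth, compute the detection fraction per bin, and interpolate the 50% crossing. The bin counts and other choices are illustrative assumptions, not the CLARA-A2 evaluation procedure itself.

```python
import numpy as np

def detection_sensitivity(optical_depth, detected, pod_target=0.5, n_bins=20):
    """Optical thickness at which the probability of detection reaches `pod_target`.

    `optical_depth` holds the lidar (e.g., CALIOP) cloud optical depths of the
    collocated cloudy cases and `detected` the boolean imager cloud-mask hits.
    """
    tau = np.asarray(optical_depth, float)
    hit = np.asarray(detected, float)
    keep = tau > 0
    tau, hit = tau[keep], hit[keep]

    edges = np.logspace(np.log10(tau.min()), np.log10(tau.max()), n_bins + 1)
    centres, pods = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (tau >= lo) & (tau < hi)
        if sel.sum() >= 20:                      # require a minimum sample per bin
            centres.append(np.sqrt(lo * hi))     # geometric bin centre
            pods.append(hit[sel].mean())
    centres, pods = np.array(centres), np.array(pods)

    order = np.argsort(pods)                     # interpolate the POD curve
    return float(np.interp(pod_target, pods[order], centres[order]))
```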
MEDIPIX: a VLSI chip for a GaAs pixel detector for digital radiology
NASA Astrophysics Data System (ADS)
Amendolia, S. R.; Bertolucci, E.; Bisogni, M. G.; Bottigli, U.; Ceccopieri, A.; Ciocci, M. A.; Conti, M.; Delogu, P.; Fantacci, M. E.; Maestro, P.; Marzulli, V.; Pernigotti, E.; Romeo, N.; Rosso, V.; Rosso, P.; Stefanini, A.; Stumbo, S.
1999-02-01
A GaAs pixel detector designed for digital mammography, equipped with 36-channel single-photon-counting discrete read-out electronics, was tested using a test object developed for quality control purposes in mammography. Each pixel was 200×200 μm² in area and 200 μm deep. GaAs was chosen over silicon (which is widely used in other applications and has a more established fabrication technique) because of its much better detection efficiency at mammographic energies, combined with very good charge collection efficiency achieved thanks to new ohmic contacts. This GaAs detector is able to measure low-contrast details, with a minimum contrast nearly a factor of two lower than that typically achievable with standard mammographic film+screen systems under the same clinical routine conditions. This should allow for earlier diagnosis of breast tumour masses. Given these encouraging results, the next step in the evolution of our GaAs-based imaging system has been the development of a VLSI front-end prototype chip (MEDIPIX) in order to cover a much larger diagnostic area. The chip reads 64×64 channels in single-photon-counting mode, each 170 μm wide. Each channel also contains a test input where a signal can be simulated by injecting a known charge through a 16 fF capacitor. Test signals have been injected via this input to measure and equalize the minimum thresholds of all channels. On average, in most of the working chips available so far, it is possible to set a threshold as low as 1800 electrons with an RMS of 150 electrons (10 standard deviations below the 20 keV photon signal, roughly equivalent to 4500 electrons). The detector, bump-bonded to the chip, will be tested, and a ladder of detectors will be prepared to scan large-surface objects.
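A rough check of the quoted signal size, assuming a mean electron-hole pair creation energy of about 4.2 eV in GaAs (an assumed literature value, not given in the abstract):

```latex
\[
  N_{e\text{-}h} \approx \frac{20\,\mathrm{keV}}{4.2\,\mathrm{eV/pair}}
               \approx 4.8 \times 10^{3}\ \text{pairs},
\]
% consistent with the roughly 4500 electrons quoted for a 20 keV photon, and
% well above the 1800-electron minimum threshold reported for the chip.
```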
Comparing the effects of age on amplitude modulation and frequency modulation detection.
Wallaert, Nicolas; Moore, Brian C J; Lorenzi, Christian
2016-06-01
Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured at 40 dB sensation level for young (22-28 yrs) and older (44-66 yrs) listeners with normal audiograms for a carrier frequency of 500 Hz and modulation rates of 2 and 20 Hz. The number of modulation cycles, N, varied between 2 and 9. For FM detection, uninformative AM at the same rate as the FM was superimposed to disrupt excitation-pattern cues. For both groups, AM and FM detection thresholds were lower for the 2-Hz than for the 20-Hz rate, and AM and FM detection thresholds decreased with increasing N. Thresholds were higher for older than for younger listeners, especially for FM detection at 2 Hz, possibly reflecting the effect of age on the use of temporal-fine-structure cues for 2-Hz FM detection. The effect of increasing N was similar across groups for both AM and FM. However, at 20 Hz, older listeners showed a greater effect of increasing N than younger listeners for both AM and FM. The results suggest that ageing reduces sensitivity to both excitation-pattern and temporal-fine-structure cues for modulation detection, but more so for the latter, while sparing temporal integration of these cues at low modulation rates.
Anderson, Elizabeth S.; Oxenham, Andrew J.; Nelson, Peggy B.; Nelson, David A.
2012-01-01
Measures of spectral ripple resolution have become widely used psychophysical tools for assessing spectral resolution in cochlear-implant (CI) listeners. The objective of this study was to compare spectral ripple discrimination and detection in the same group of CI listeners. Ripple detection thresholds were measured over a range of ripple frequencies and were compared to spectral ripple discrimination thresholds previously obtained from the same CI listeners. The data showed that performance on the two measures was correlated, but that individual subjects’ thresholds (at a constant spectral modulation depth) for the two tasks were not equivalent. In addition, spectral ripple detection was often found to be possible at higher rates than expected based on the available spectral cues, making it likely that temporal-envelope cues played a role at higher ripple rates. Finally, spectral ripple detection thresholds were compared to previously obtained speech-perception measures. Results confirmed earlier reports of a robust relationship between detection of widely spaced ripples and measures of speech recognition. In contrast, intensity difference limens for broadband noise did not correlate with spectral ripple detection measures, suggesting a dissociation between the ability to detect small changes in intensity across frequency and across time. PMID:23231122
Final comprehensive report of overall activities of AEC contract AT(30-1)- 3269 from its initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1973-01-01
Research accomplishments are reported for the following projects: determination of the minimum level of x radiation in rats to alter the taste threshold; determination of the permanency of such alteration; determination of the dose and time dependency of the alteration; changes in hypothalamic function following low doses of ionizing radiation; development of new behavioral technique for determination of taste thresholds; correlation of taste sensitivity changes with alteration in taste bud morphology; effects of olfaction on taste thresholds; properties of taste material that influence x radiation effects on taste; determination of effects of in utero x-irradiation on taste function in the adult rat; and effects of ingestion of heavy metals on taste acuity and response of taste sensitivity to x radiation. (HLW)
On computational Gestalt detection thresholds.
Grompone von Gioi, Rafael; Jakubowicz, Jérémie
2009-01-01
The aim of this paper is to present some recent developments of computational Gestalt theory, as pioneered by Desolneux, Moisan and Morel. The new results allow the detection thresholds to be predicted much more accurately. This step is unavoidable if one wants to analyze visual detection thresholds in the light of computational Gestalt theory. The paper first recalls the main elements of computational Gestalt theory. It points out a precision issue in this theory, essentially due to the use of discrete probability distributions. It then proposes to overcome this issue by using continuous probability distributions and illustrates the approach on the meaningful alignment detector of Desolneux et al.
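For context, the detection thresholds of computational Gestalt theory come from the a contrario framework: an observed structure is epsilon-meaningful when its number of false alarms (NFA), the number of tests multiplied by the probability of the observation under a background noise model, falls below epsilon. A minimal sketch for the meaningful alignment detector mentioned above, with illustrative numbers:

```python
from math import comb

def binomial_tail(n, k, p):
    """P[B(n, p) >= k]: probability of at least k aligned points out of n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of false alarms of an observed segment: the number of tested
    segments times the binomial tail; the segment is epsilon-meaningful when
    the NFA is at most epsilon (epsilon = 1 is the usual choice)."""
    return n_tests * binomial_tail(n, k, p)

# Example: a segment of n = 50 independent points, k = 20 of which have
# gradient orientation aligned with the segment at precision p = 1/16, tested
# among N**4 candidate segments in an N x N image (N = 256).
print(nfa(256**4, 50, 20, 1.0 / 16.0))
```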
Bernstein, Leslie R; Trahiotis, Constantine
2016-11-01
This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.
Pool desiccation and developmental thresholds in the common frog, Rana temporaria.
Lind, Martin I; Persbo, Frida; Johansson, Frank
2008-05-07
The developmental threshold is the minimum size or condition that a developing organism must have reached in order for a life-history transition to occur. Although developmental thresholds have been observed for many organisms, inter-population variation among natural populations has not been examined. Since isolated populations can be subjected to strong divergent selection, population divergence in developmental thresholds can be predicted if environmental conditions favour fast or slow developmental time in different populations. Amphibian metamorphosis is a well-studied life-history transition, and using a common garden approach we compared the development time and the developmental threshold of metamorphosis in four island populations of the common frog Rana temporaria: two populations originating from islands with only temporary breeding pools and two from islands with permanent pools. As predicted, tadpoles from time-constrained temporary pools had a genetically shorter development time than those from permanent pools. Furthermore, the variation in development time among females from temporary pools was low, consistent with the action of selection on rapid development in this environment. However, there were no clear differences in the developmental thresholds between the populations, indicating that the main response to life in a temporary pool is to shorten the development time.
75 FR 68430 - Domestic Shipping Services Pricing and Mailing Standards Changes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... has eliminated the requirement for a postal routing barcode when paying postage with permit imprint... minimum volume threshold applies, except the permit imprint requirement of 200 pieces or 50 pounds of mail... permit imprint. Customers using USPS-approved IBI postage meters that print the IBI with the appropriate...
When Benefits Are Difficult to Measure.
ERIC Educational Resources Information Center
Birdsall, William C.
1987-01-01
It is difficult to apply benefit-cost analysis to human service programs. This paper explains "threshold benefit analysis," the derivation of the minimum dollar value that the benefits must attain for their value to equal the intervention costs. The method is applied to a mobility training program. (BS)
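A toy illustration of the threshold-benefit idea, with hypothetical numbers not taken from the paper:

```latex
% If a mobility training programme costs C = \$120{,}000 per year and serves
% n = 40 clients, the benefits must be worth at least
\[
  B^{*} = \frac{C}{n} = \frac{\$120{,}000}{40} = \$3{,}000
  \ \text{per client per year}
\]
% for their total value to equal the intervention costs.
```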
Lake Shore Drive/Grand Ave./E. Illinois Ave., May 2016, Lindsay Light Radiological Survey
Field gamma measurements during the survey process did not exceed the instrument threshold. Unshielded readings ranged from a minimum of 4,600 cpm to a maximum of 14,200 cpm, and shielded readings ranged from a minimum of 1,800 cpm to a maximum of 2,680 cpm.
13 CFR 315.7 - Certification requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Minimum certification thresholds. (1) Twelve-month decline. Based upon a comparison of the most recent 12... percent of the total production or sales of the Firm during the 12-month period preceding the most recent...-month versus twenty-four month decline. Based upon a comparison of the most recent 12-month period for...
13 CFR 315.7 - Certification requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Minimum certification thresholds. (1) Twelve-month decline. Based upon a comparison of the most recent 12... percent of the total production or sales of the Firm during the 12-month period preceding the most recent...-month versus twenty-four month decline. Based upon a comparison of the most recent 12-month period for...
13 CFR 315.7 - Certification requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Minimum certification thresholds. (1) Twelve-month decline. Based upon a comparison of the most recent 12... percent of the total production or sales of the Firm during the 12-month period preceding the most recent...-month versus twenty-four month decline. Based upon a comparison of the most recent 12-month period for...
13 CFR 315.7 - Certification requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Minimum certification thresholds. (1) Twelve-month decline. Based upon a comparison of the most recent 12... percent of the total production or sales of the Firm during the 12-month period preceding the most recent...-month versus twenty-four month decline. Based upon a comparison of the most recent 12-month period for...
13 CFR 315.7 - Certification requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Minimum certification thresholds. (1) Twelve-month decline. Based upon a comparison of the most recent 12... percent of the total production or sales of the Firm during the 12-month period preceding the most recent...-month versus twenty-four month decline. Based upon a comparison of the most recent 12-month period for...
Kitchener, Henry C; Gittins, Matthew; Desai, Mina; Smith, John H F; Cook, Gary; Roberts, Chris; Turnbull, Lesley
2015-03-01
Liquid-based cytology (LBC) for cervical screening would benefit from laboratory practice guidelines that define specimen adequacy for reporting of slides. The evidence base required to define cell adequacy should incorporate both ThinPrep™ (TP; Hologic, Inc., Bedford, MA, USA) and SurePath™ (SP; BD Diagnostics, Burlington, NC, USA), the two LBC systems used in the UK cervical screening programmes. The objectives of this study were to determine (1) current practice for reporting LBC in England, Wales and Scotland, (2) a reproducible method for cell counting, (3) the cellularity of slides classified as inadequate, negative or abnormal and (4) the impact of varying cellularity on the likelihood of detecting cytological abnormalities. The study involved four separate arms to pursue each of the four objectives. (1) A questionnaire survey of laboratories was conducted. (2) A standard counting protocol was developed and used by three experienced cytopathologists to determine a reliable and reproducible cell counting method. (3) Slide sets which included a range of cytological abnormalities were each sent to three laboratories for cell counting to study the correlation between cell counts and reported cytological outcomes. (4) Dilution of LBC samples by fluid only (unmixed) or by dilution with a sample containing normal cells (mixed) was performed to study the impact on reporting of reducing either the total cell count or the relative proportion of abnormal to normal cells. The study was conducted within the cervical screening programmes in England, Wales and Scotland, using routinely obtained cervical screening samples, and in 56 participating NHS cervical cytology laboratories. The study involved only routinely obtained cervical screening samples. There was no clinical intervention. The main outcome measures were (1) reliability of counting method, (2) correlation of reported cytology grades with cellularity and (3) levels of detection of abnormal cells in progressively diluted cervical samples. Laboratory practice varied in terms of threshold of cellular adequacy and of morphological markers of adequacy. While SP laboratories generally used a minimum acceptable cell count (MACC) of 15,000, the MACC employed by TP laboratories varied between 5000 and 15,000. The cell counting study showed that a standard protocol achieved moderate to strong inter-rater reproducibility. Analysis of slide reporting from laboratories revealed that a large proportion of the samples reported as inadequate had cell counts above a threshold of 15,000 for SP, and 5000 and 10,000 for TP. Inter-rater unanimity was greater among more cellular preparations. Dilution studies demonstrated greater detection of abnormalities in slides with counts above the MACC and among slides with more than 25 dyskaryotic cells. Variation in laboratory practice demonstrates a requirement for evidence-based standards for designating a MACC. This study has indicated that a MACC of 15,000 and 5000 for SP and TP, respectively, achieves a balance in terms of maintaining sensitivity and low inadequacy rates. The findings of this study should inform the development of laboratory practice guidelines. The National Institute for Health Research Health Technology Assessment programme.
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
On-Orbit Reconfigurable Solar Array
NASA Technical Reports Server (NTRS)
Levy, Robert K. (Inventor)
2017-01-01
In one or more embodiments, the present disclosure teaches a method for reconfiguring a solar array. The method involves providing, for the solar array, at least one string of solar cells. The method further involves deactivating at least a portion of at least one of the strings of solar cells of the solar array when power produced by the solar array reaches a maximum power allowance threshold. In addition, the method involves activating at least a portion of at least one of the strings of the solar cells in the solar array when the power produced by the solar array reaches a minimum power allowance threshold.
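As an illustration of the string-switching logic summarized in this abstract, the sketch below keeps array power inside an allowance band by toggling strings at a maximum and a minimum power threshold. The class name, string count, and threshold values are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: toggle solar-array strings between a maximum and
# a minimum power allowance threshold. Class name, string count, and the
# threshold values are assumptions, not the patented implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReconfigurableArray:
    max_power_allowance: float                     # W, deactivate strings above this
    min_power_allowance: float                     # W, reactivate strings below this
    string_active: List[bool] = field(default_factory=lambda: [True] * 8)

    def update(self, measured_power: float) -> None:
        """Toggle at most one string per telemetry sample, based on the band."""
        if measured_power >= self.max_power_allowance:
            for i, active in enumerate(self.string_active):
                if active:                         # deactivate the first active string
                    self.string_active[i] = False
                    break
        elif measured_power <= self.min_power_allowance:
            for i, active in enumerate(self.string_active):
                if not active:                     # reactivate the first inactive string
                    self.string_active[i] = True
                    break


array = ReconfigurableArray(max_power_allowance=5000.0, min_power_allowance=3500.0)
for power in [4800.0, 5100.0, 5200.0, 3400.0]:     # telemetry samples in watts
    array.update(power)
    print(power, array.string_active)
```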
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubov, F. I.; Kryzhanovskaya, N. V.; Moiseev, E. I.
The spectral, threshold, and power characteristics of a microdisk laser 31 μm in diameter with an active region based on InAs/InGaAs quantum dots, operating in the continuous-wave (cw) mode at room temperature, are studied. The minimum threshold current density is 0.58 kA/cm², and the subthreshold linewidth of the whispering-gallery mode is 50 pm at a wavelength lying in the range of 1.26–1.27 μm. The total power emitted into free space reaches ~0.1 mW in the cw mode, whereas the radiation power of the whispering-gallery modes is ~2.8%.
Robson, Anthony G; Lenassi, Eva; Saihan, Zubin; Luong, Vy A; Fitzke, Fred W; Holder, Graham E; Webster, Andrew R
2012-09-14
To assess the significance and evolution of parafoveal rings of high-density fundus autofluorescence (AF) in 12 patients with retinitis pigmentosa (RP). Twelve patients with autosomal recessive RP or Usher syndrome type 2 were ascertained who had a parafoveal ring of high-density AF and a visual acuity of 20/30 or better at baseline. Photopic and scotopic fine matrix mapping (FMM) were performed to test sensitivity across the macula. AF imaging and FMM were repeated after 4 to 8 years and optical coherence tomography (OCT) performed. The size of the AF ring reduced over time and disappeared in one subject. Photopic thresholds were normal over the fovea; thresholds were elevated by 0.6 log units over the ring and by 1.2 log units external to the ring at baseline and differed by less than 0.1 log unit at follow-up. Mild photopic losses close to the internal edge of the ring were detected at baseline or follow-up in all. Mean scotopic thresholds over parafoveal areas within the ring were markedly elevated in 8 of 10 at baseline and were severely elevated in 9 of 11 at follow-up. The eccentricity of the inner edge of the AF ring corresponded closely with the lateral extent of the inner segment ellipsoid band in the OCT image. Ring constriction was largely coincident with progressive centripetal photopic threshold elevation led by worsening of rod photoreceptor function. The rate of constriction differed across patients, and a ring may reach a critical minimum before disappearing, at which stage central visual loss occurs. The structural and functional changes associated with rings of increased autofluorescence confirm that they provide an objective index of macular involvement and may aid the management of RP patients and the monitoring of future treatment efficacy.
Rueda, Marta; Moreno Saiz, Juan Carlos; Morales-Castilla, Ignacio; Albuquerque, Fabio S; Ferrero, Mila; Rodríguez, Miguel Á
2015-01-01
Ecological theory predicts that fragmentation aggravates the effects of habitat loss, yet empirical results show mixed evidence that fails to support the theory and instead reinforces the primary importance of habitat loss. Fragmentation hypotheses have received much attention due to their potential implications for biodiversity conservation; however, animal studies have traditionally been their main focus. Here we assess variation in species sensitivity to forest amount and fragmentation and evaluate whether fragmentation is related to extinction thresholds in forest understory herbs and ferns. Our expectation was that forest herbs would be more sensitive to fragmentation than ferns due to their lower dispersal capabilities. Using forest cover percentage and the proportion of this percentage occurring in the largest patch within UTM cells of 10-km resolution covering Peninsular Spain, we partitioned the effects of forest amount versus fragmentation and applied logistic regression to model occurrences of 16 species. For nine models showing robustness according to a set of quality criteria, we subsequently defined two empirical fragmentation scenarios, minimum and maximum, and quantified species' sensitivity to forest contraction with no fragmentation, and to fragmentation under constant forest cover. We finally assessed how the extinction threshold of each species (the habitat amount below which it cannot persist) varies under no and maximum fragmentation. Consistent with their preference for forest habitats, the occurrence probabilities of all species decreased as forest cover contracted. On average, herbs did not show significant sensitivity to fragmentation whereas ferns were favored. In line with theory, fragmentation yielded higher extinction thresholds for two species. For the remaining species, fragmentation had either positive or non-significant effects. We interpret these differences as reflecting species-specific traits and conclude that, although forest amount is of primary importance for the persistence of understory plants, neglecting the impact of fragmentation on some species can lead them to local extinction.
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
Phillips, Dennis P; Smith, Jennifer C
2004-01-01
We obtained data on within-channel and between-channel auditory temporal gap-detection acuity in the normal population. Ninety-five normal listeners were tested for gap-detection thresholds, for conditions in which the gap was bounded by spectrally identical, and by spectrally different, acoustic markers. Separate thresholds were obtained with the use of an adaptive tracking method, for gaps delimited by narrowband noise bursts centred on 1.0 kHz, noise bursts centred on 4.0 kHz, and for gaps bounded by a leading marker of 4.0 kHz noise and a trailing marker of 1.0 kHz noise. Gap thresholds were lowest for silent periods bounded by identical markers--'within-channel' stimuli. Gap thresholds were significantly longer for the between-channel stimulus--silent periods bounded by unidentical markers (p < 0.0001). Thresholds for the two within-channel tasks were highly correlated (R = 0.76). Thresholds for the between-channel stimulus were weakly correlated with thresholds for the within-channel stimuli (1.0 kHz, R = 0.39; and 4.0 kHz, R = 0.46). The relatively poor predictability of between-channel thresholds from the within-channel thresholds is new evidence on the separability of the mechanisms that mediate performance of the two tasks. The data confirm that the acuity difference for the tasks, which has previously been demonstrated in only small numbers of highly trained listeners, extends to a population of untrained listeners. The acuity of the between-channel mechanism may be relevant to the formation of voice-onset time-category boundaries in speech perception.
Laser-induced retinal damage thresholds for annular retinal beam profiles
NASA Astrophysics Data System (ADS)
Kennedy, Paul K.; Zuclich, Joseph A.; Lund, David J.; Edsall, Peter R.; Till, Stephen; Stuck, Bruce E.; Hollins, Richard C.
2004-07-01
The dependence of retinal damage thresholds on laser spot size, for annular retinal beam profiles, was measured in vivo for 3 μs, 590 nm pulses from a flashlamp-pumped dye laser. Minimum Visible Lesion (MVL) ED50 thresholds in rhesus were measured for annular retinal beam profiles covering 5, 10, and 20 mrad of visual field, which correspond to outer beam diameters of roughly 70, 160, and 300 μm, respectively, on the primate retina. Annular beam profiles at the retinal plane were achieved using a telescopic imaging system, with the focal properties of the eye represented as an equivalent thin lens, and all annular beam profiles had a 37% central obscuration. As a check on experimental data, theoretical MVL-ED50 thresholds for annular beam exposures were calculated using the Thompson-Gerstman granular model of laser-induced thermal damage to the retina. Threshold calculations were performed for the three experimental beam diameters and for an intermediate case with an outer beam diameter of 230 μm. Results indicate that the threshold vs. spot size trends, for annular beams, are similar to the trends for top hat beams determined in a previous study; i.e., the threshold dose varies with the retinal image area for larger image sizes. The model correctly predicts the threshold vs. spot size trends seen in the biological data, for both annular and top hat retinal beam profiles.
Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M
2014-08-01
To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age of 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent prechemotherapy (MRI1) and postchemotherapy (MRI4) breast MRI. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh), and FTV(PEthresh, SERthresh) was computed by summing all voxels meeting the threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazard model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak with the maximum hazard ratio and highest significance occurring at PE threshold of 70% and SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.
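The FTV calculation described above can be sketched as follows under common definitions of PE and SER; the array names, epsilon guard, connectivity rule, and voxel size are assumptions for illustration and not the study's exact processing.

```python
# Hedged sketch of an FTV computation: voxels must exceed PE and SER
# thresholds and belong to clusters of a minimum size. Not the study's code.
import numpy as np
from scipy import ndimage


def functional_tumor_volume(s0, s1, s2, pe_thresh=70.0, ser_thresh=1.0,
                            min_cluster_voxels=5, voxel_volume_mm3=1.0):
    """Return FTV in mm^3 from pre- (s0), early- (s1) and late- (s2) contrast volumes."""
    eps = 1e-6                                   # guard against division by zero
    pe = 100.0 * (s1 - s0) / (s0 + eps)          # early percent enhancement
    ser = (s1 - s0) / (s2 - s0 + eps)            # early-to-late signal enhancement ratio
    candidate = (pe >= pe_thresh) & (ser >= ser_thresh)

    # Simple stand-in for the minimum connectivity requirement.
    labels, n = ndimage.label(candidate)
    sizes = np.bincount(labels.ravel())
    good = [lbl for lbl in range(1, n + 1) if sizes[lbl] >= min_cluster_voxels]
    return np.isin(labels, good).sum() * voxel_volume_mm3


rng = np.random.default_rng(0)
s0 = rng.uniform(100, 200, (32, 32, 16))
s1 = s0 * rng.uniform(1.0, 2.5, s0.shape)
s2 = s0 * rng.uniform(1.0, 2.0, s0.shape)
print(functional_tumor_volume(s0, s1, s2), "mm^3")
```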
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, A. G.; Rath, F.; Buchholz, R.
2016-08-15
It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long-time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long-wavelength zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
Averbeck, Beate; Seitz, Lena; Kolb, Florian P; Kutz, Dieter F
2017-09-01
Sex-related differences in human thermal and pain sensitivity are the subject of controversial discussion. The goal of this study in a large number of subjects was to investigate sex differences in thermal and thermal pain perception and the thermal grill illusion (TGI) as a phenomenon reflecting crosstalk between the thermoreceptive and nociceptive systems. The thermal grill illusion is a sensation of strong, but not necessarily painful, heat often preceded by transient cold upon skin contact with spatially interlaced innocuous warm and cool stimuli. The TGI was studied in a group of 78 female and 58 male undergraduate students and was evoked by placing the palm of the right hand on the thermal grill (20/40 °C interleaved stimulus). Sex-related thermal perception was investigated by a retrospective analysis of thermal detection and thermal pain threshold data that had been measured in student laboratory courses over 5 years (776 female and 476 male undergraduate students) using the method of quantitative sensory testing (QST). To analyse correlations between thermal pain sensitivity and the TGI, thermal pain threshold and the TGI were determined in a group of 20 female and 20 male undergraduate students. The TGI was more pronounced in females than males. Females were more sensitive with respect to thermal detection and thermal pain thresholds. Independent of sex, thermal detection thresholds were dependent on the baseline temperature with a specific progression of an optimum curve for cold detection threshold versus baseline temperature. The distribution of cold pain thresholds was multi-modal and sex-dependent. The more pronounced TGI in females correlated with higher cold sensitivity and cold pain sensitivity in females than in males. Our finding that thermal detection threshold not only differs between the sexes but is also dependent on the baseline temperature reveals a complex processing of "cold" and "warm" inputs in thermal perception. The results of the TGI experiment support the assumption that sex differences in cold-related thermoreception are responsible for sex differences in the TGI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Margaret; Spurlock, C. Anna; Yang, Hung-Chia
The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove certain product models from the market that do not meet specified efficiency thresholds.
Determination of simple thresholds for accelerometry-based parameters for fall detection.
Kangas, Maarit; Konttila, Antti; Winblad, Ilkka; Jämsä, Timo
2007-01-01
The increasing population of elderly people is mainly living in a home-dwelling environment and needs applications to support their independency and safety. Falls are one of the major health risks that affect the quality of life among older adults. Body attached accelerometers have been used to detect falls. The placement of the accelerometric sensor as well as the fall detection algorithms are still under investigation. The aim of the present pilot study was to determine acceleration thresholds for fall detection, using triaxial accelerometric measurements at the waist, wrist, and head. Intentional falls (forward, backward, and lateral) and activities of daily living (ADL) were performed by two voluntary subjects. The results showed that measurements from the waist and head have potential to distinguish between falls and ADL. Especially, when the simple threshold-based detection was combined with posture detection after the fall, the sensitivity and specificity of fall detection were up to 100 %. On the contrary, the wrist did not appear to be an optimal site for fall detection.
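A minimal sketch of the kind of threshold-plus-posture rule evaluated above is given below; the impact threshold, post-impact delay, and lying-angle criterion are illustrative assumptions rather than the values reported in the study.

```python
# Illustrative threshold-plus-posture fall detector; thresholds and timing
# windows are assumptions, not the values reported in the study.
import numpy as np


def detect_fall(acc_xyz, fs=50, impact_thresh_g=3.0, lying_angle_deg=60.0):
    """acc_xyz: (N, 3) triaxial acceleration in g, sampled at fs Hz."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    for i in np.flatnonzero(magnitude > impact_thresh_g):
        # Posture check about 2 s after the candidate impact, averaged over 1 s.
        window = acc_xyz[i + 2 * fs:i + 3 * fs]
        if len(window) < fs:
            continue
        gravity = window.mean(axis=0)
        gravity /= np.linalg.norm(gravity)
        # Tilt of the (assumed) body z-axis relative to gravity; large tilt = lying.
        tilt = np.degrees(np.arccos(abs(gravity[2])))
        if tilt > lying_angle_deg:
            return True
    return False


rng = np.random.default_rng(1)
standing = rng.normal(0, 0.05, (500, 3)) + np.array([0.0, 0.0, 1.0])
print(detect_fall(standing))   # quiet standing: no impact, so False
```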
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachman, D., E-mail: bachman@ualberta.ca; Fedosejevs, R.; Tsui, Y. Y.
An optical damage threshold for crystalline silicon from single femtosecond laser pulses was determined by detecting a permanent change in the refractive index of the material. This index change could be detected with unprecedented sensitivity by measuring the resonant wavelength shift of silicon integrated optics microring resonators irradiated with femtosecond laser pulses at 400 nm and 800 nm wavelengths. The threshold for permanent index change at 400 nm wavelength was determined to be 0.053 ± 0.007 J/cm², which agrees with previously reported threshold values for femtosecond laser modification of crystalline silicon. However, the threshold for index change at 800 nm wavelength was found to be 0.044 ± 0.005 J/cm², which is five times lower than the previously reported threshold values for visual change on the silicon surface. The discrepancy is attributed to possible modification of the crystallinity of silicon below the melting temperature that has not been detected before.
Wiegand, U K; Zhdanov, A; Stammwitz, E; Crozier, I; Claessens, R J; Meier, J; Bos, R J; Bode, F; Potratz, J
1999-06-01
The aim of this multicenter study was to investigate the performance of a new cardiac pacemaker lead with a titanium nitride cathode coated with a copolymer membrane. In particular, the electrophysiological effect of steroid dissolved in this ion-exchange membrane was evaluated by randomized comparison. Ninety-five patients were randomized either to the 1450 T (n = 51) or the 1451 T ventricular lead (n = 45) and received telemetric VVI(R) pacemakers with identical diagnostic features. Both leads were bipolar, were passively affixed, and had a porous titanium nitride tip with a surface area of 3.5 mm2. The only difference between the two electrodes was 13 micrograms of dexamethasone added to the 1450 T's membrane coating. Voltage thresholds (VTH) at pulse durations of 0.25, 0.37, and 0.5 ms, lead impedance, and sensing thresholds were measured at discharge, 2 weeks, 1 month, 3 months, and 6 months after implantation. Mean amplitude and the slew rate from three telemetered intracardiac electrograms, chronaxie-rheobase product, and minimum energy consumption were calculated. After a 6-month follow-up, mean voltage thresholds of 0.65 +/- 0.20 V and 0.63 +/- 0.34 V were achieved for the 1450 T lead and 1451 T lead, respectively. As a result, a VTH < 1.0 V was obtained in all patients with 1450 T electrodes and in 97.7% of patients with 1451 T leads after 6 months of follow-up. In both electrodes, stable VTH was reached 2 weeks after implantation, and no transient rise in threshold was observed. No differences were observed between the steroid and the nonsteroid group with respect to VTH, chronaxie-rheobase product, minimum energy consumption, and potential amplitude and slew rate. In conclusion, safe and efficient pacing at low pulse amplitudes was achieved with both leads. The tip design, independently of the steroid additive, prevented any energy-consuming increases in the voltage threshold.
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study; it is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fujiwara, Masami
2007-09-01
Viability status of populations is a commonly used measure for decision-making in the management of populations. One of the challenges faced by managers is the need to consistently allocate management effort among populations. This allocation should in part be based on comparison of extinction risks among populations. Unfortunately, common criteria that use minimum viable population size or count-based population viability analysis (PVA) often do not provide results that are comparable among populations, primarily because they lack consistency in determining population size measures and threshold levels of population size (e.g., minimum viable population size and quasi-extinction threshold). Here I introduce a new index called the "extinction-effective population index," which accounts for differential effects of demographic stochasticity among organisms with different life-history strategies and among individuals in different life stages. This index is expected to become a new way of determining minimum viable population size criteria and also complement the count-based PVA. The index accounts for the difference in life-history strategies of organisms, which are modeled using matrix population models. The extinction-effective population index, sensitivity, and elasticity are demonstrated in three species of Pacific salmonids. The interpretation of the index is also provided by comparing them with existing demographic indices. Finally, a measure of life-history-specific effect of demographic stochasticity is derived.
Predictive minimum description length principle approach to inferring gene regulatory networks.
Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping
2011-01-01
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to existing MDL algorithm.
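The first stage of such information-theoretic inference, thresholding pairwise mutual information to propose regulatory edges, can be sketched as below. The full algorithm also uses conditional mutual information and the PMDL principle to choose the threshold; here the threshold, bin count, and synthetic data are assumptions for illustration.

```python
# Sketch of MI-threshold edge proposal; CMI pruning and the PMDL-based
# threshold selection of the full algorithm are omitted.
import numpy as np


def mutual_information(x, y, bins=5):
    """Histogram estimate of mutual information (bits) between two profiles."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


def infer_edges(expression, mi_threshold=0.5):
    """expression: (genes, timepoints) array; returns undirected edges (i, j)."""
    g = expression.shape[0]
    return [(i, j) for i in range(g) for j in range(i + 1, g)
            if mutual_information(expression[i], expression[j]) >= mi_threshold]


rng = np.random.default_rng(2)
expr = rng.normal(size=(5, 200))
expr[1] = expr[0] + rng.normal(0, 0.1, 200)    # gene 1 tracks gene 0
print(infer_edges(expr))                       # typically only the (0, 1) edge survives
```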
Evidence for the contribution of a threshold retrieval process to semantic memory.
Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A
2017-10-01
It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
Detection Thresholds of Falling Snow from Satellite-Borne Active and Passive Sensors
NASA Technical Reports Server (NTRS)
Jackson, Gail
2012-01-01
Precipitation, including rain and snow, is a critical part of the Earth's energy and hydrology cycles. In order to collect information on the complete global precipitation cycle and to understand the energy budget in terms of precipitation, uniform global estimates of both liquid and frozen precipitation must be collected. Active observations of falling snow are somewhat easier to estimate since the radar will detect the precipitation particles and one only needs to know surface temperature to determine if it is liquid rain or snow. The challenges of estimating falling snow from passive spaceborne observations still exist though progress is being made. While these challenges are still being addressed, knowledge of their impact on expected retrieval results is an important key for understanding falling snow retrieval estimations. Important information to assess falling snow retrievals includes knowing thresholds of detection for active and passive sensors, various sensor channel configurations, snow event system characteristics, snowflake particle assumptions, and surface types. For example, can a lake effect snow system with low (2.5 km) cloud tops having an ice water content (IWC) at the surface of 0.25 g m⁻³ and dendrite snowflakes be detected? If this information is known, we can focus retrieval efforts on detectable storms and concentrate advances on achievable results. Here, the focus is to determine thresholds of detection for falling snow for various snow conditions over land and lake surfaces. The analysis relies on Weather Research and Forecasting (WRF) simulations of falling snow cases since simulations provide all the information to determine the measurements from space and the ground truth. Results are presented for active radar at Ku, Ka, and W-band and for passive radiometer channels from 10 to 183 GHz (Skofronick-Jackson, et al. submitted to IEEE TGRS, April 2012). The notable results show: (1) the W-band radar has detection thresholds more than an order of magnitude lower than the future GPM sensors, (2) the cloud structure macrophysics influences the thresholds of detection for passive channels, (3) the snowflake microphysics plays a large role in the detection threshold for active and passive instruments, (4) with reasonable assumptions, the passive 166 GHz channel has detection threshold values comparable to the GPM DPR Ku and Ka band radars with 0.05 g m⁻³ detected at the surface, or a 0.5–1 mm hr⁻¹ melted snow rate (equivalent to 0.5–2 cm hr⁻¹ solid fluffy snowflake rate).
NASA Astrophysics Data System (ADS)
Cheng, Siyang; An, Xingqin; Zhou, Lingxi; Tans, Pieter P.; Jacobson, Andy
2017-06-01
In order to explore where the source and sink have the greatest impact on CO2 background concentration at Waliguan (WLG) station, a statistical method is here proposed to calculate the representative source-sink region. The key to this method is to find the best footprint threshold, and the study is carried out in four parts. Firstly, transport climatology, expressed by total monthly footprint, was simulated by FLEXPART on a 7-day time scale. Surface CO2 emissions in Eurasia were frequently transported to WLG station. WLG station was mainly influenced by the westerlies in winter and partly controlled by the Southeast Asian monsoon in summer. Secondly, CO2 concentrations, simulated by CT2015, were processed and analyzed through data quality control, screening, fitting and comparing. CO2 concentrations displayed obvious seasonal variation, with the maximum and minimum concentration appearing in April and August, respectively. The correlation of CO2 fitting background concentrations between simulation and observation was R² = 0.91. The temporal patterns were mainly correlated with biosphere-atmosphere CO2 exchange, human activities and air transport. Thirdly, for the monthly CO2 fitting background concentrations from CT2015, a best footprint threshold was found based on correlation analysis and numerical iteration using the data of footprints and emissions. The grid cells where monthly footprints were greater than the best footprint threshold formed the best threshold area, corresponding to the representative source-sink region. The representative source-sink region of maximum CO2 concentration in April was primarily located in Qinghai province, but the minimum CO2 concentration in August was mainly influenced by emissions in a wider region. Finally, we briefly presented the CO2 source-sink characteristics in the best threshold area. Generally, the best threshold area was a carbon sink. The major source and sink were relatively weak owing to limited human activities and few vegetation types in this high-altitude area. CO2 concentrations were more influenced by human activities when air masses passed through many urban areas in summer. Therefore, the combination of footprints and emissions is an effective approach for assessing the source-sink region representativeness of CO2 background concentration.
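A simplified version of the threshold search described above might look like the following: candidate footprint thresholds are scanned and scored by how well the flux signal summed over cells exceeding the threshold correlates with the monthly fitted CO2 background concentration. Array shapes, the scoring rule, and the synthetic data are assumptions, not the paper's exact procedure.

```python
# Simplified footprint-threshold search: pick the threshold whose masked
# flux signal correlates best with the monthly CO2 background concentration.
import numpy as np


def best_footprint_threshold(footprints, fluxes, co2_monthly, candidates):
    """footprints, fluxes: (months, ny, nx) arrays; co2_monthly: (months,)."""
    best_thr, best_r2 = None, -np.inf
    for thr in candidates:
        signal = np.array([np.sum(f * e * (f > thr))
                           for f, e in zip(footprints, fluxes)])
        if signal.std() == 0:                      # threshold excludes every cell
            continue
        r2 = np.corrcoef(signal, co2_monthly)[0, 1] ** 2
        if r2 > best_r2:
            best_thr, best_r2 = thr, r2
    return best_thr, best_r2


rng = np.random.default_rng(3)
fp = rng.gamma(2.0, 1.0, (12, 20, 30))
em = rng.gamma(1.5, 1.0, (12, 20, 30))
co2 = np.array([np.sum(f * e * (f > 3.0)) for f, e in zip(fp, em)])   # synthetic "truth"
print(best_footprint_threshold(fp, em, co2, candidates=np.linspace(0.5, 6.0, 12)))
```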
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratios Rmax = (amax − amin)/(Sn − amin·n) and Rmin = (amax − amin)/(amax·n − Sn) are always equal to 2/n for a linear series, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with the y = c form, Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n · (1 + k1) and 2/n · (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 − 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over existing linear fit methods. Since having a perfect linear relation between two variables in the real world is impossible, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with the linear fit is less than 50%, and the deviation of data that do not agree with the linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
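A minimal sketch of the 2/n indicator follows, computing Rmax and Rmin for a clean linear series and for the same series with an injected outlier; the series values and the tuning constant k1 are illustrative assumptions.

```python
# Sketch of the 2/n indicator: Rmax and Rmin equal 2/n for a linear series
# and exceed the outlier threshold 2/n*(1 + k1) when an element deviates.
import numpy as np


def linear_fit_indicators(series):
    a = np.asarray(series, dtype=float)
    n = a.size
    s_n, a_max, a_min = a.sum(), a.max(), a.min()
    r_max = (a_max - a_min) / (s_n - a_min * n)
    r_min = (a_max - a_min) / (a_max * n - s_n)
    return n, r_max, r_min


clean = 5.0 + np.linspace(0.0, 1.0, 20)        # exactly linear series
spiked = clean.copy()
spiked[7] += 40.0                               # inject one outlier

k1 = 0.5                                        # assumed tuning constant
for name, s in [("linear", clean), ("with outlier", spiked)]:
    n, r_max, r_min = linear_fit_indicators(s)
    flagged = max(r_max, r_min) > 2 / n * (1 + k1)
    print(f"{name:13s} Rmax={r_max:.3f} Rmin={r_min:.3f} 2/n={2 / n:.3f} outlier={flagged}")
```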
Threshold quantum cryptography
NASA Astrophysics Data System (ADS)
Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki
2005-01-01
We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding.
Knox, Emily C L; Webb, Oliver J; Esliger, Dale W; Biddle, Stuart J H; Sherar, Lauren B
2014-04-01
The promotion of physical activity (PA) guidelines to the general public is an important issue that lacks empirical investigation. PA campaigns often feature participation thresholds that cite PA guidelines verbatim [e.g., 150 min/week moderate-to-vigorous physical activity (MVPA)]. Some campaigns instead prefer to use generic PA messages (e.g., do as much MVPA as possible). 'Thresholds' may disrupt understanding of the health benefits of modest PA participation. This study examined the perception of health benefits of PA after exposure to PA messages that did and did not contain a duration threshold. Brief structured interviews were conducted with a convenience sample of adults (n = 1100). Participants received a threshold message (150 min/week MVPA), a message that presented the threshold as a minimum; a generic message or no message. Participants rated perceived health effects of seven PA durations. One-way analyses of variance with post hoc tests for group differences were used to assess raw perception ratings for each duration of PA. Recipients of all three messages held more positive perceptions of >150 min/week of MVPA relative to those not receiving any message. For MVPA durations <150 min/week, the generic PA message group perceived the greatest health benefits. Those receiving the threshold message tended to have the least positive perceptions of durations <150 min/week. Threshold messages were associated with lower perceived health benefits for modest PA durations. Campaigns based on threshold messages may be limited when promoting small PA increases at a population level.
Tejani, Viral D; Abbas, Paul J; Brown, Carolyn J
This study investigates the relationship between electrophysiological and psychophysical measures of amplitude modulation (AM) detection. Prior studies have reported both measures of AM detection recorded separately from cochlear implant (CI) users and acutely deafened animals, but no study has made both measures in the same CI users. Animal studies suggest a progressive loss of high-frequency encoding as one ascends the auditory pathway from the auditory nerve to the cortex. Because the CI speech processor uses the envelope of an ongoing acoustic signal to modulate pulse trains that are subsequently delivered to the intracochlear electrodes, it is of interest to explore auditory nerve responses to modulated stimuli. In addition, psychophysical AM detection abilities have been correlated with speech perception outcomes. Thus, the goal was to explore how the auditory nerve responds to AM stimuli and to relate those physiologic measures to perception. Eight patients using Cochlear Ltd. Implants participated in this study. Electrically evoked compound action potentials (ECAPs) were recorded using a 4000 pps pulse train that was sinusoidally amplitude modulated at 125, 250, 500, and 1000 Hz rates. Responses were measured for each pulse over at least one modulation cycle for an apical, medial, and basal electrode. Psychophysical modulation detection thresholds (MDTs) were also measured via a three-alternative forced choice, two-down, one-up adaptive procedure using the same modulation frequencies and electrodes. ECAPs were recorded from individual pulses in the AM pulse train. ECAP amplitudes varied sinusoidally, reflecting the sinusoidal variation in the stimulus. A modulated response amplitude (MRA) metric was calculated as the difference in the maximal and minimum ECAP amplitudes over the modulation cycles. MRA increased as modulation frequency increased, with no apparent cutoff (up to 1000 Hz). In contrast, MDTs increased as the modulation frequency increased. This trend is inconsistent with the physiologic measures. For a fixed modulation frequency, correlations were observed between MDTs and MRAs; this trend was evident at all frequencies except 1000 Hz (although only statistically significant for 250 and 500 Hz AM rates), possibly an indication of central limitations in processing of high modulation frequencies. Finally, peripheral responses were larger and psychophysical thresholds were lower in the apical electrodes relative to basal and medial electrodes, which may reflect better cochlear health and neural survival evidenced by lower preoperative low-frequency audiometric thresholds and steeper growth of neural responses in ECAP amplitude growth functions for apical electrodes. Robust ECAPs were recorded for all modulation frequencies tested. ECAP amplitudes varied sinusoidally, reflecting the periodicity of the modulated stimuli. MRAs increased as the modulation frequency increased, a trend we attribute to neural adaptation. For low modulation frequencies, there are multiple current steps between the peak and valley of the modulation cycle, which means successive stimuli are more similar to one another and neural responses are more likely to adapt. Higher MRAs were correlated with lower psychophysical thresholds at low modulation frequencies but not at 1000 Hz, implying a central limitation to processing of modulated stimuli.
Liu, Chang; Jin, Su-Hyun
2015-11-01
This study investigated whether native listeners processed speech differently from non-native listeners in a speech detection task. Detection thresholds of Mandarin Chinese and Korean vowels and non-speech sounds in noise, frequency selectivity, and the nativeness of Mandarin Chinese and Korean vowels were measured for Mandarin Chinese- and Korean-native listeners. The two groups of listeners exhibited similar non-speech sound detection and frequency selectivity; however, the Korean listeners had better detection thresholds of Korean vowels than Chinese listeners, while the Chinese listeners performed no better at Chinese vowel detection than the Korean listeners. Moreover, thresholds predicted from an auditory model highly correlated with behavioral thresholds of the two groups of listeners, suggesting that detection of speech sounds not only depended on listeners' frequency selectivity, but also might be affected by their native language experience. Listeners evaluated their native vowels with higher nativeness scores than non-native listeners. Native listeners may have advantages over non-native listeners when processing speech sounds in noise, even without the required phonetic processing; however, such native speech advantages might be offset by Chinese listeners' lower sensitivity to vowel sounds, a characteristic possibly resulting from their sparse vowel system and their greater cognitive and attentional demands for vowel processing.
Notched-noise precursors improve detection of low-frequency amplitude modulation
Almishaal, Ali; Bidelman, Gavin M.; Jennings, Skyler G.
2017-01-01
Amplitude modulation (AM) detection was measured with a short (50 ms), high-frequency carrier as a function of carrier level (Experiment I) and modulation frequency (Experiment II) for conditions with or without a notched-noise precursor. A longer carrier (500 ms) was also included in Experiment I. When the carrier was preceded by silence (no precursor condition) AM detection thresholds worsened for moderate-level carriers compared to lower- or higher-level carriers, resulting in a “mid-level hump.” AM detection thresholds with a precursor were better than those without a precursor, primarily for moderate-to-high level carriers, thus eliminating the mid-level hump in AM detection. When the carrier was 500 ms, AM thresholds improved by a constant (across all levels) relative to AM thresholds with a precursor, consistent with the longer carrier providing more “looks” to detect the AM signal. Experiment II revealed that improved AM detection with compared to without a precursor is limited to low-modulation frequencies (<60 Hz). These results are consistent with (1) a reduction in cochlear gain over the course of the precursor perhaps via the medial olivocochlear reflex or (2) a form of perceptual enhancement which may be mediated by adaptation of inhibition. PMID:28147582
Fottrell, E; Byass, P
2009-02-01
Effective early warning systems of humanitarian crises may help to avert substantial increases in mortality and morbidity, and prevent major population movements. The Butajira Rural Health Programme (BRHP) in Ethiopia has maintained a programme of epidemiological surveillance since 1987. Inspection of the BRHP data revealed large peaks of mortality in 1998 and 1999, well in excess of the normally observed year-to-year variation. Further investigation and enquiry revealed that these peaks related to a measles epidemic, and a serious episode of drought and consequent food insecurity that went undetected by the BRHP. This paper applies international humanitarian crisis threshold definitions to the BRHP data in an attempt to identify suitable mortality thresholds that may be used for the prospective detection of humanitarian crises in population surveillance sites in developing countries. Empirical investigation using secondary analysis of longitudinal population-based cohort data. The daily, weekly and monthly thresholds for crises in Butajira were applied to mortality data for the 5-year period incorporating the crisis periods of 1998-1999. Days, weeks and months in which mortality exceeded each threshold level were identified. Each threshold level was assessed in terms of prospectively identifying the true crisis periods in a timely manner whilst avoiding false alarms. The daily threshold definition is too sensitive to accurately detect impending or real crises in the population surveillance setting of the BRHP. However, the weekly threshold level is useful in identifying important increases in mortality in a timely manner without the excessive sensitivity of the daily threshold. The weekly threshold level detects the crisis periods approximately 2 weeks before the monthly threshold level. Mortality measures are highly specific indicators of the health status of populations, and simple procedures can be used to apply international crisis threshold definitions in population surveillance settings for the prospective detection of important changes in mortality rate. Standards for the timely use of surveillance data and ethical responsibilities of those responsible for the data should be made explicit to improve the public health functioning of current sentinel surveillance methodologies.
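As a rough illustration of applying a weekly mortality threshold prospectively to surveillance counts, the sketch below flags weeks whose crude mortality rate exceeds a weekly equivalent of the commonly cited 1 death/10,000/day emergency level; this derivation and the synthetic counts are assumptions for illustration and not the BRHP threshold definitions.

```python
# Illustrative weekly-threshold screen on surveillance counts; the weekly
# level is derived here by scaling the widely cited 1 death/10,000/day
# emergency threshold, which is an assumption, not the BRHP definition.
import numpy as np


def flag_crisis_weeks(weekly_deaths, population, daily_rate_threshold=1.0):
    """Return indices of weeks whose rate exceeds the weekly threshold
    (daily_rate_threshold is in deaths per 10,000 population per day)."""
    weekly_threshold = daily_rate_threshold * 7            # deaths/10,000/week
    rates = np.asarray(weekly_deaths, dtype=float) / population * 10_000
    return np.flatnonzero(rates > weekly_threshold)


deaths = [3, 4, 2, 5, 60, 72, 6, 3]      # synthetic weekly death counts
print(flag_crisis_weeks(deaths, population=45_000))   # flags the two crisis weeks
```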
NASA Astrophysics Data System (ADS)
Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande
2017-11-01
This paper studies and identifies various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, because feature extraction of the cell depends greatly on image segmentation (edge detection). Firstly, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because its diagnosis has been found lacking. Next, to enhance image quality, noise filters are applied. By comparing images with no filter, a median filter and an average filter, useful information can be acquired. Each threshold value is fixed at 0, 0.25 and 0.5. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
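The comparison described above can be approximated with the sketch below, which applies median and average filters and then thresholds the Sobel gradient magnitude at fractions 0, 0.25 and 0.5 of its maximum; Canny could be evaluated analogously (e.g., via skimage.feature.canny). The synthetic image, filter sizes, and the interpretation of the thresholds as fractions are assumptions.

```python
# Sketch of Sobel gradient-magnitude thresholding at fractions 0, 0.25, 0.5,
# with no filter, a median filter, and an average filter; synthetic image.
import numpy as np
from scipy import ndimage


def sobel_edges(image, threshold_fraction):
    """Binary edge map: gradient magnitude above a fraction of its maximum."""
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold_fraction * magnitude.max()


# Synthetic stand-in for a leukemic-cell image: a bright disc plus noise.
rng = np.random.default_rng(4)
yy, xx = np.mgrid[:128, :128]
cell = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
noisy = cell + rng.normal(0, 0.2, cell.shape)

variants = {"no filter": noisy,
            "median": ndimage.median_filter(noisy, size=3),
            "average": ndimage.uniform_filter(noisy, size=3)}
for name, img in variants.items():
    counts = [int(sobel_edges(img, t).sum()) for t in (0.0, 0.25, 0.5)]
    print(f"{name:9s} edge pixels at thresholds 0/0.25/0.5: {counts}")
```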
Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like 'oil pressure too high', analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the 'wrapper code' of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
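A minimal sketch of the discretization step performed by such wrapper code is shown below; the threshold value, median-filter length, and category labels are illustrative assumptions.

```python
# Sketch of wrapper-code discretization: median-filter the analog stream,
# then map each sample to a categorical value with a threshold. Values,
# filter length, and labels are illustrative assumptions.
import numpy as np


def discretize(readings, threshold, smooth_n=5,
               labels=("nominal", "oil_pressure_too_high")):
    x = np.asarray(readings, dtype=float)
    pad = smooth_n // 2
    padded = np.pad(x, pad, mode="edge")
    smoothed = np.array([np.median(padded[i:i + smooth_n]) for i in range(x.size)])
    return [labels[int(v > threshold)] for v in smoothed]


stream = [51, 52, 53, 90, 54, 55, 80, 82, 85, 84]   # one spurious spike, one real rise
print(discretize(stream, threshold=70.0))           # the spike is filtered out
```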
Sutton, J A; Gillin, W P; Grattan, T J; Clarke, G D; Kilminster, S G
2002-01-01
Aims: To discover whether a new infra-red laser method could detect a change in pain threshold after as mild an analgesic as paracetamol and whether an effervescent liquid formulation produced a faster onset of action than tablets. Methods: This double-blind, placebo-controlled randomized study used a portable infra-red laser to measure ‘first pain’ thresholds on the nondominant forearm in 12 normal volunteers before and after 1 g of paracetamol or placebo. The mean of six recordings was determined three times before dosing, the first being used as a familiarization procedure, and 14 times after dosing. Results: We detected a small (2%), statistically significant difference in pain threshold between a liquid formulation of paracetamol and placebo at 30 and 60 min (P = 0.004 and P = 0.001), but not between tablets and placebo. Liquid also increased the threshold significantly compared with tablets at 60 min (P = 0.01). Conclusions: To detect such a small increase in pain threshold requires a highly consistent measure, and the coefficient of variation was 2% for the study overall, surprisingly low for a subjective phenomenon. The reasons for this include minimizing reflectance by blacking the skin, using a nonhairy site, averaging six data points at each sample time and controlling closely the ambient conditions and the subjects’ preparation for studies. PMID:11849194
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved method for detecting clouds that combines K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major types by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. Then multi-spectral threshold detection is applied to eliminate interference such as smoke and snow from the first class. The method is tested with MODIS data acquired at different times under different underlying surface conditions. Visual assessment of the algorithm's performance showed that it can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for a subsequent fire detection approach.
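The two-stage scheme described above can be sketched as follows: K-means first splits pixels into two broad classes, and per-band thresholds then remove snow- and smoke-like members from the candidate cloud class. The band indices and threshold values are placeholders, not the MODIS tests used in the paper.

```python
# Sketch of K-means pre-classification followed by per-band thresholds;
# band indices and threshold values are placeholders, not MODIS tests.
import numpy as np
from sklearn.cluster import KMeans


def detect_cloud(bands, bright_band=0, thermal_band=1,
                 bright_min=0.4, thermal_max=265.0):
    """bands: (ny, nx, nbands) array. Returns a boolean cloud mask."""
    ny, nx, nb = bands.shape
    pixels = bands.reshape(-1, nb)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

    # Take the brighter cluster as the cloud/snow/smoke candidate class.
    brightness = [pixels[labels == k, bright_band].mean() for k in (0, 1)]
    candidate = labels == int(np.argmax(brightness))

    # Multi-spectral thresholds: bright in the visible AND cold in the thermal band.
    cloud = (candidate
             & (pixels[:, bright_band] > bright_min)
             & (pixels[:, thermal_band] < thermal_max))
    return cloud.reshape(ny, nx)


rng = np.random.default_rng(5)
scene = np.dstack([rng.uniform(0.05, 0.2, (40, 40)),      # visible reflectance
                   rng.uniform(280.0, 300.0, (40, 40))])   # thermal brightness temp (K)
scene[10:20, 10:20, 0] = 0.7                               # bright, cold patch = "cloud"
scene[10:20, 10:20, 1] = 250.0
print(detect_cloud(scene).sum(), "cloud pixels")
```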
Near-threshold fatigue crack behaviour in EUROFER 97 at different temperatures
NASA Astrophysics Data System (ADS)
Aktaa, J.; Lerch, M.
2006-07-01
The fatigue crack behaviour of EUROFER 97 was investigated at room temperature (RT), 300, 500 and 550 °C for the assessment of cracks in first wall structures of future fusion reactors built from EUROFER 97. For this purpose, fatigue crack growth tests were performed using CT specimens at two R-ratios, R = 0.1 and R = 0.5 (R is the load ratio, R = Fmin/Fmax, where Fmin and Fmax are the minimum and maximum applied loads within a cycle, respectively). The fatigue crack threshold, the fatigue crack growth behaviour in the near-threshold range and their dependences on temperature and R-ratio were thus determined and described using an analytical formula. The fatigue crack threshold showed a monotonic dependence on temperature, which for R = 0.5 is insignificantly small. For R = 0.1, the fatigue crack growth behaviour exhibited a non-monotonic dependence on temperature, which is explained by the decrease of yield stress and the increase of creep damage with increasing temperature.
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
Pairing call-response surveys and distance sampling for a mammalian carnivore
Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.
2015-01-01
Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (be it a single or group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
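For orientation, a textbook point-transect estimator of the kind underlying the index reported above is sketched below; it ignores cluster-size, availability, and covariate adjustments, so it will not reproduce the paper's exact figure.

```python
# Textbook point-transect density estimate (no cluster-size, availability,
# or covariate adjustments), so it will not match the paper's index exactly.
import math


def density_index(n_detections, n_points, detection_prob, truncation_km):
    """Detections corrected by detection probability within a fixed radius."""
    surveyed_area_km2 = n_points * math.pi * truncation_km ** 2
    return n_detections / (detection_prob * surveyed_area_km2)


d = density_index(n_detections=75, n_points=524,
                  detection_prob=0.17, truncation_km=1.8)
print(f"{d * 10:.2f} pairs per 10 km^2")   # same order of magnitude as the reported index
```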
MacNeilage, Paul R.; Turner, Amanda H.
2010-01-01
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843
Do Atmospheric Rivers explain the extreme precipitation events over East Asia?
NASA Astrophysics Data System (ADS)
Dairaku, K.; Nayak, S.
2017-12-01
Extreme precipitation events are of serious concern due to their damaging societal impacts over the last few decades. Climate indices are therefore widely used to identify and quantify variability and changes in particular aspects of the climate system, especially extremes. In this study, we focus on a few climate indices of annual precipitation extremes for the period 1979-2013 over East Asia to provide straightforward information and interpretation of certain aspects of extreme precipitation events over the region. To do so, we first examine different percentiles of precipitation and the maximum length of wet spell for different thresholds from a regional climate model (NRAMS) simulation at 20 km. Results indicate that the 99th percentile of precipitation events corresponds to about 80 mm/d over a few regions of East Asia during 1979-2013, and that the maximum length of wet spell with a minimum of 20 mm precipitation corresponds to about 10 days (Figure 1). We then link the extreme precipitation events to the intense moisture transport events associated with atmospheric rivers (ARs). The ARs are identified by computing the vertically integrated horizontal water vapor transport (IVT) between 1000 hPa and 300 hPa, requiring IVT ≥ 250 kg/m/s and a minimum length of 2000 km. With this threshold and condition (set by previous research), our results indicate that some extreme precipitation events are associated with ARs over East Asia, while others are not associated with any AR. Similarly, some ARs are associated with extreme precipitation events, while others are not. Because AR identification is sensitive to the choice of threshold and condition, which depends on the region, we will analyze the characteristics of ARs (frequency, duration, and annual variability) for different thresholds and discuss their relationship with extreme precipitation events over East Asia.
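Illustrative sketch (not the authors' code): the AR identification step described above reduces to computing IVT from specific humidity and winds on pressure levels and masking grid cells at the 250 kg/m/s threshold; array names and shapes are assumptions, and the 2000-km length test is left as a separate step.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def ivt(q, u, v, p_hpa):
    """Vertically integrated water vapor transport (kg m^-1 s^-1).

    q: specific humidity (kg/kg); u, v: wind components (m/s), all shaped
    (level, lat, lon); p_hpa: pressure levels in hPa (e.g. 1000 ... 300).
    """
    p_pa = np.asarray(p_hpa, dtype=float) * 100.0
    # Integrate q*u and q*v over pressure (trapezoidal rule) and divide by g;
    # abs() makes the component integrals independent of level ordering.
    ivt_u = np.abs(np.trapz(q * u, p_pa, axis=0)) / G
    ivt_v = np.abs(np.trapz(q * v, p_pa, axis=0)) / G
    return np.hypot(ivt_u, ivt_v)

# Candidate AR grid cells; the 2000-km minimum-length criterion would then be
# applied to each connected region of this mask.
# ar_mask = ivt(q, u, v, levels_hpa) >= 250.0
```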
Impact of muon detection thresholds on the separability of primary cosmic rays
NASA Astrophysics Data System (ADS)
Müller, S.; Engel, R.; Pierog, T.; Roth, M.
2018-01-01
Knowledge of the mass composition of cosmic rays in the transition region from galactic to extragalactic cosmic rays is needed to discriminate between different astrophysical models of their origin, acceleration, and propagation. An important observable for separating different mass groups of cosmic rays is the number of muons in extensive air showers. We performed a CORSIKA simulation study to analyze the impact of the muon detection threshold on the separation quality of different primary cosmic rays in the energy region of the ankle. Using only the number of muons as the composition-sensitive observable, we find a clear dependence of the separation power on the detection threshold for ideal measurements. Although the number of detected muons increases when lowering the threshold, the discrimination power is reduced. If statistical fluctuations for muon detectors of limited size are taken into account, the threshold dependence remains qualitatively the same for small distances to the shower core but is reduced for large core distances. We interpret the impact of the muon detection threshold on the composition sensitivity in terms of a change in the correlation of the number of muons nμ with the shower maximum Xmax as a function of the muon energy, as a result of the underlying hadronic interactions and the shower geometry. We further investigate the role of muons produced in a shower by photon-air interactions and conclude that, in addition to the effect of the nμ-Xmax correlation, the separability of primaries is reduced as a consequence of the presence of more muons from photonuclear reactions in proton showers than in iron showers.
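Illustrative sketch (not part of the study): one common way to quantify the separation power between proton and iron showers at a given muon detection threshold is a figure of merit built from the means and spreads of the two muon-number distributions; the numbers below are placeholders, not CORSIKA output.

```python
import numpy as np

def separation_figure_of_merit(n_mu_proton, n_mu_iron):
    """Figure of merit |mean_Fe - mean_p| / sqrt(sigma_p^2 + sigma_Fe^2).

    Larger values mean the two muon-number distributions overlap less,
    i.e. better proton/iron separability for a given detection threshold.
    """
    n_p = np.asarray(n_mu_proton, dtype=float)
    n_fe = np.asarray(n_mu_iron, dtype=float)
    return abs(n_fe.mean() - n_p.mean()) / np.hypot(n_p.std(ddof=1), n_fe.std(ddof=1))

# Example with made-up muon counts for two threshold settings:
rng = np.random.default_rng(0)
fom_low_thr = separation_figure_of_merit(rng.normal(1.0e7, 1.5e6, 500),
                                         rng.normal(1.4e7, 1.2e6, 500))
fom_high_thr = separation_figure_of_merit(rng.normal(6.0e6, 7.0e5, 500),
                                          rng.normal(8.5e6, 6.0e5, 500))
print(fom_low_thr, fom_high_thr)
```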
Detection of short-term changes in vegetation cover by use of LANDSAT imagery. [Arizona
NASA Technical Reports Server (NTRS)
Turner, R. M. (Principal Investigator); Wiseman, F. M.
1975-01-01
The author has identified the following significant results. By using a constant band 6 to band 5 radiance ratio of 1.25, the changing pattern of areas of relatively dense vegetation cover was detected for the semiarid region in the vicinity of Tucson, Arizona. Electronically produced binary thematic masks were used to map areas with dense vegetation. The foliar cover threshold represented by the ratio was not accurately determined, but field measurements show that the threshold lies in the range of 10 to 25 percent foliage cover. Montane evergreen forests with constant dense cover were correctly shown to exceed the threshold on all dates. The summer-active grassland exceeded the threshold in summer unless rainfall was insufficient. Desert areas exceeded the threshold during the spring of 1973 following heavy rains; the same areas during the rainless spring of 1974 did not exceed the threshold. Irrigated fields, parks, golf courses, and riparian communities were among the habitats most frequently surpassing the threshold.
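Illustrative sketch (not the original processing chain): the ratio-threshold masking described above amounts to a per-pixel comparison of the band 6 (near-infrared) to band 5 (red) radiance ratio against the 1.25 cutoff; the array and function names are assumptions.

```python
import numpy as np

def vegetation_mask(band6, band5, ratio_threshold=1.25):
    """Binary thematic mask: True where the band6/band5 radiance ratio >= threshold.

    band6 (near-infrared) and band5 (red) are co-registered radiance arrays.
    A small floor on band5 avoids division by zero over dark or masked pixels.
    """
    b6 = np.asarray(band6, dtype=float)
    b5 = np.asarray(band5, dtype=float)
    ratio = b6 / np.maximum(b5, 1e-6)
    return ratio >= ratio_threshold

# Comparing masks from two acquisition dates highlights short-term cover change:
# change = vegetation_mask(b6_1973, b5_1973) ^ vegetation_mask(b6_1974, b5_1974)
```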
Ultrasensitivity and sharp threshold theorems for multisite systems
NASA Astrophysics Data System (ADS)
Dougoud, M.; Mazza, C.; Vinckenbosch, L.
2017-02-01
This work studies the ultrasensitivity of multisite binding processes where ligand molecules can bind to several binding sites. It considers in particular recent models involving complex chemical reactions in allosteric phosphorylation processes and for transcription factors and nucleosomes competing for binding on DNA. New statistics-based formulas for the Hill coefficient and the effective Hill coefficient are provided, and necessary conditions for a system to be ultrasensitive are exhibited. It is first shown that the ultrasensitivity of binding processes can be approached using sharp-threshold theorems developed in applied probability theory and statistical mechanics for studying sharp threshold phenomena in reliability theory, random graph theory, and percolation theory. Special classes of binding processes are then introduced and described as density-dependent birth and death processes. New precise large deviation results for the steady state distribution of the process are obtained, which make it possible to show that switch-like ultrasensitive responses are strongly related to the multimodality of the steady state distribution. Ultrasensitivity occurs if and only if the entropy of the dynamical system has more than one global minimum for some critical ligand concentration. In this case, the Hill coefficient is proportional to the number of binding sites, and the system is highly ultrasensitive. The classical effective Hill coefficient I is extended to a new cooperativity index I_q, for which we recommend computing a broad range of values of q instead of only the standard choice I = I_0.9 corresponding to the 10%-90% variation in the dose-response. It is shown that this single choice can sometimes mislead by failing to detect ultrasensitivity. This new approach allows a better understanding of multisite ultrasensitive systems and provides new tools for the design of such systems.
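Illustrative sketch (the paper's exact definition of I_q may differ): the classical effective Hill coefficient is ln(81)/ln(EC90/EC10), and a natural q-dependent generalization evaluates the same kind of ratio at other response levels, which is the sort of scan over q recommended above.

```python
import numpy as np
from scipy.optimize import brentq

def ec(f, level, lo=1e-9, hi=1e9):
    """Dose at which the normalized response f(x) reaches `level` (0 < level < 1)."""
    return brentq(lambda x: f(x) - level, lo, hi)

def cooperativity_index(f, q=0.9):
    """Effective Hill coefficient I_q = ln((q/(1-q))^2) / ln(EC_q / EC_{1-q}).

    For q = 0.9 this is the classical ln(81)/ln(EC90/EC10). Scanning a range
    of q values, rather than only q = 0.9, can reveal ultrasensitivity that
    the single 10%-90% measure misses.
    """
    return np.log((q / (1 - q)) ** 2) / np.log(ec(f, q) / ec(f, 1 - q))

# Sanity check with a Hill function of coefficient 4: I_q should be ~4 for any q.
hill4 = lambda x, K=1.0, n=4: x**n / (K**n + x**n)
print([round(cooperativity_index(hill4, q), 2) for q in (0.6, 0.75, 0.9)])
```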
64 x 64 thresholding photodetector array for optical pattern recognition
NASA Astrophysics Data System (ADS)
Langenbacher, Harry; Chao, Tien-Hsin; Shaw, Timothy; Yu, Jeffrey W.
1993-10-01
A high-performance 32 x 32 peak detector array is introduced. The detector consists of a 32 x 32 array of thresholding phototransistor cells, manufactured with a standard MOSIS digital 2-micron CMOS process. A built-in thresholding function that performs 1024 thresholding operations in parallel strongly distinguishes this chip from available CCD detectors. The detector offers response times of 1 to 10 milliseconds, much faster than commercially available CCD detectors operating at a TV frame rate. The parallel multiple-peak thresholding detection capability makes it particularly suitable for optical correlators and optoelectronically implemented neural networks. The principle of operation, circuit design, and performance characteristics are described. Experimental demonstration of correlation peak detection is also provided. Recently, we have also designed and built an advanced version, a 64 x 64 thresholding photodetector array chip. Experimental investigation of using this chip for pattern recognition is ongoing.
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. In the axle press-fit zone, however, the complex interface contact condition reduces the signal-to-noise ratio (SNR), so the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in ultrasonic axle defect detection. The novel wavelet threshold function has two variables, designed to ensure the precision of the optimum-searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that defect echoes and press-fit interface echoes have different axle-circumferential correlation characteristics, a discrete search for the two undetermined variables of the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. Statistics of the amplitude and peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
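Illustrative sketch (the paper's threshold function is not reproduced here): a generic wavelet-domain denoiser with a two-parameter threshold, using PyWavelets; the wavelet, decomposition level, and the form of the threshold function are assumptions standing in for the proposed one.

```python
import numpy as np
import pywt

def two_param_threshold(c, thr, alpha):
    """Generic two-parameter threshold: interpolates between soft (alpha=1)
    and hard (alpha -> 0) thresholding. A stand-in for the paper's function."""
    shrunk = np.sign(c) * np.maximum(np.abs(c) - alpha * thr, 0.0)
    return np.where(np.abs(c) > thr, shrunk, 0.0)

def denoise(signal, wavelet="db6", level=5, thr=None, alpha=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    if thr is None:
        # Universal threshold with noise estimated from the finest detail level
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [two_param_threshold(c, thr, alpha) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# (thr, alpha) would be chosen by the kind of discrete search over an SNR- or
# correlation-based objective described in the abstract above.
```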
Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia
The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum disorder, autism, or intellectual deficits, and adults and the elderly with dementia. These populations are unable to undergo a behavioral assessment and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. The aim was to determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system for detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold. The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group with normal hearing, across all studied frequencies. The cortical electrophysiological thresholds obtained with the automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
The simple ears of noctuoid moths are tuned to the calls of their sympatric bat community.
ter Hofstede, Hannah M; Goerlitz, Holger R; Ratcliffe, John M; Holderied, Marc W; Surlykke, Annemarie
2013-11-01
Insects with bat-detecting ears are ideal animals for investigating sensory system adaptations to predator cues. Noctuid moths have two auditory receptors (A1 and A2) sensitive to the ultrasonic echolocation calls of insectivorous bats. Larger moths are detected at greater distances by bats than smaller moths. Larger moths also have lower A1 best thresholds, allowing them to detect bats at greater distances and possibly compensating for their increased conspicuousness. Interestingly, the sound frequency at the lowest threshold is lower in larger than in smaller moths, suggesting that the relationship between threshold and size might vary across frequencies used by different bat species. Here, we demonstrate that the relationships between threshold and size in moths were only significant at some frequencies, and these frequencies differed between three locations (UK, Canada and Denmark). The relationships were more likely to be significant at call frequencies used by proportionately more bat species in the moths' specific bat community, suggesting an association between the tuning of moth ears and the cues provided by sympatric predators. Additionally, we found that the best threshold and best frequency of the less sensitive A2 receptor are also related to size, and that these relationships hold when controlling for evolutionary relationships. The slopes of best threshold versus size differ, however, such that the difference in threshold between A1 and A2 is greater for larger than for smaller moths. The shorter time from A1 to A2 excitation in smaller than in larger moths could potentially compensate for shorter absolute detection distances in smaller moths.
The ship edge feature detection based on high and low threshold for remote sensing image
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Shengyang
2018-05-01
In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human visual system and target features is analyzed, and the ship target is determined by detecting its edge features. First, a second-order differential method is used to enhance image quality. Second, to improve the edge operator, high and low threshold contrast is introduced to separate edge from non-edge points; with edges as the foreground and non-edges as the background, image segmentation is applied to achieve edge detection and remove false edges. Finally, the edge features are described based on the edge detection result, and the ship target is determined. Experimental results show that the proposed method effectively reduces the number of false edges and achieves high accuracy in remote sensing ship edge detection.
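Illustrative sketch (not the authors' operator): the high/low-threshold idea is close in spirit to hysteresis thresholding, where weak edge responses are kept only if they connect to strong ones; function and parameter names are assumptions.

```python
import numpy as np
from scipy import ndimage

def edges_high_low(image, low, high):
    """Double-threshold (hysteresis) edge map.

    Pixels with gradient magnitude >= high are strong edges; pixels >= low are
    kept only if they connect to a strong edge, which suppresses isolated
    noise responses (false edges).
    """
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    strong = mag >= high
    weak = mag >= low
    # Grow strong edges through connected weak pixels
    return ndimage.binary_propagation(strong, mask=weak)

# edge_map = edges_high_low(ship_image, low=40, high=120)
```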
VizieR Online Data Catalog: X-ray sources in the AKARI NEP deep field (Krumpe+, 2015)
NASA Astrophysics Data System (ADS)
Krumpe, M.; Miyaji, T.; Brunner, H.; Hanami, H.; Ishigaki, T.; Takagi, T.; Markowitz, A. G.; Goto, T.; Malkan, M. A.; Matsuhara, H.; Pearson, C.; Ueda, Y.; Wada, T.
2015-06-01
The fits images labelled SeMap* are the sensitivity maps, in which we give the minimum flux that would have caused a detection at each position. This flux depends on the maximum likelihood threshold chosen in the source detection run, the point spread function, and the background level at the chosen position. We create sensitivity maps in different energy bands (0.5-2, 0.5-7, 2-4, 2-7, and 4-7 keV) by searching for the flux needed to reject the null hypothesis that the flux at a given position is caused only by a background fluctuation. In a chosen energy band, we determine for each position in the survey the flux required to obtain a certain Poisson probability above the background counts. Since ML = -ln(P), we know from our ML=12 threshold the probability we are aiming for. In practice, we search for a value of -ln P_total that falls within ΔML = ±0.2 of our targeted ML threshold. This tolerance range corresponds to having one spurious source more or less in the whole survey. Note that outside the deep Subaru/Suprime-Cam imaging the sensitivity maps should be used with caution, since we assume for their generation ML=12 over the whole area covered by Chandra. More details on the procedure for producing the sensitivity maps, including the PSF-summed background map and PSF-weighted averaged exposure maps, are given in the paper, section 5.3. The fits images labelled u90* are the upper limit maps, where the upper 90 per cent confidence flux limit is given at each position. We take a Bayesian approach following Kraft, Burrows & Nousek, 1991ApJ...374..344K. Consequently, we obtain the upper 90 per cent confidence flux limit by searching for the flux such that, given the observed counts, the Bayesian probability of having this flux or larger is 10 per cent. More details on the procedure for producing the upper 90 per cent flux limit maps are given in the paper, section 5.4. (6 data files).
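Illustrative sketch (schematic only, with placeholder exposure and energy-conversion values): the per-position sensitivity calculation searches for the counts whose Poisson tail probability over the local background corresponds to the ML = -ln(P) = 12 threshold, then converts the implied source counts to a flux.

```python
import numpy as np
from scipy.stats import poisson

def limiting_flux(background_counts, exposure_s, ecf, ml_target=12.0, tol=0.2):
    """Minimum detectable flux at one position for a maximum-likelihood threshold.

    ML = -ln(P), where P is the Poisson probability that the background alone
    produces at least the observed counts. We step up the total counts until
    -ln(P) reaches ml_target (within the stated tolerance), then convert the
    excess over the background to a flux with the exposure and an energy
    conversion factor (ecf).
    """
    n = int(np.ceil(background_counts)) + 1
    while True:
        p = poisson.sf(n - 1, background_counts)   # P(N >= n | background only)
        if -np.log(p) >= ml_target - tol:
            break
        n += 1
    source_counts = n - background_counts
    count_rate = source_counts / exposure_s
    return count_rate * ecf   # flux in the chosen energy band

# e.g. limiting_flux(background_counts=3.2, exposure_s=8.0e4, ecf=1.5e-11)
```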
Strategies for Early Outbreak Detection of Malaria in the Amhara Region of Ethiopia
NASA Astrophysics Data System (ADS)
Nekorchuk, D.; Gebrehiwot, T.; Mihretie, A.; Awoke, W.; Wimberly, M. C.
2017-12-01
Traditional epidemiological approaches to early detection of disease outbreaks are based on relatively straightforward thresholds (e.g., 75th percentile, standard deviations) estimated from historical case data. For diseases with strong seasonality, these can be modified to create separate thresholds for each seasonal time step. However, for disease processes that are non-stationary, more sophisticated techniques are needed to estimate outbreak threshold values accurately. Early detection for geohealth-related diseases that also have environmental drivers, such as vector-borne diseases, may also benefit from the integration of time-lagged environmental data and disease ecology models into the threshold calculations. The Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) project has been integrating malaria case surveillance with remotely sensed environmental data for early detection, warning, and forecasting of malaria epidemics in the Amhara region of Ethiopia, and has five years of weekly time series data from 47 woredas (districts). Efforts to reduce the burden of malaria in Ethiopia have met with notable success in the past two decades, with major reductions in cases and deaths. However, malaria remains a significant public health threat: 60% of the population live in malarious areas, and because of the seasonal and unstable transmission patterns with cyclic outbreaks, protective immunity is generally low, which can lead to high morbidity and mortality during epidemics. This study compared several approaches for defining outbreak thresholds and for identifying a potential outbreak based on deviations from these thresholds. We found that model-based approaches that accounted for climate-driven seasonality in malaria transmission were most effective, and that incorporating a trend component improved outbreak detection in areas with active malaria elimination efforts. An advantage of these early detection techniques is that they can detect climate-driven outbreaks as well as outbreaks driven by social factors such as human migration.
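Illustrative sketch (not the EPIDEMIA system): the traditional seasonal-percentile baseline that the model-based approaches are compared against can be written in a few lines; the 52-week layout, 75th-percentile cutoff, and variable names are assumptions.

```python
import numpy as np

def seasonal_thresholds(history, pct=75.0):
    """history: array shaped (years, 52) of weekly case counts for one district.

    Returns one outbreak threshold per epidemiological week (the pct-th
    percentile across historical years), i.e. the simple baseline that
    model-based approaches try to improve on.
    """
    return np.percentile(np.asarray(history, dtype=float), pct, axis=0)

def flag_outbreaks(current_year, thresholds):
    """True for each week of the current year whose count exceeds its threshold."""
    return np.asarray(current_year, dtype=float) > thresholds

# thresholds = seasonal_thresholds(weekly_cases_2012_2016)   # shape (52,)
# alarms = flag_outbreaks(weekly_cases_2017, thresholds)
```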
ERIC Educational Resources Information Center
Staklis, Sandra; Klein, Steve
2014-01-01
The "Carl D. Perkins Career and Technical Education Act of 2006" ("Perkins IV") sets a minimum allocation requirement that secondary and postsecondary career and technical education (CTE) subgrantees must achieve to receive federal financing. An eligible recipient with an allocation below the funding threshold may obtain a…
Social Mobility and Reproduction for Whom? College Readiness and First-Year Retention
ERIC Educational Resources Information Center
DeAngelo, Linda; Franke, Ray
2016-01-01
Completing college is now the minimum threshold for entry into the middle class. This has pushed college readiness issues to the forefront in efforts to increase educational attainment. Little is known about how college readiness improves outcomes for students traditionally marginalized in educational settings or if social background factors…
The Role of the Geographic Combatant Commander in Counterproliferation of Nuclear Weapons
2007-04-05
nuclear yield is beyond the scope of this paper; however, this value is above the 20% threshold of HEU meaning that at a minimum HEU is required for... introversion has created an environment where communication and cooperation is almost impossible. This inability to cooperate goes beyond relations between
USDA-ARS?s Scientific Manuscript database
The successful establishment or failure of a new population is often attributed to propagule pressure, the combination of the number of independent introduction events, and the number of individuals released at each event. Design of optimal release strategies for biological control agents benefits f...
Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
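Illustrative sketch (not from the article): the recommended combination of a low minimum flow threshold with flow-interval sampling reduces to a simple trigger loop; the specific threshold and flow increment below are placeholders that would be set per project.

```python
def flow_interval_sampler(flow_rates_m3s, dt_s, min_flow_m3s=0.01, flow_increment_m3=50.0):
    """Return the time indices at which an automated sampler would pull an aliquot.

    Sampling is enabled only above a (low) minimum flow threshold; an aliquot is
    taken each time a further `flow_increment_m3` of cumulative flow has passed,
    so sampling effort tracks the hydrograph and aliquots can be composited.
    """
    sample_times = []
    accumulated = 0.0
    for i, q in enumerate(flow_rates_m3s):
        if q < min_flow_m3s:
            continue
        accumulated += q * dt_s
        if accumulated >= flow_increment_m3:
            sample_times.append(i)
            accumulated -= flow_increment_m3
        # a cap on bottle count, or rise/fall stage logic, would be added here
    return sample_times
```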
Marshall, John W; Dahlstrom, Dean B; Powley, Kramer D
2011-06-01
To satisfy the Criminal Code of Canada's definition of a firearm, a barreled weapon must be capable of causing serious bodily injury or death to a person. Canadian courts have accepted the forensically established criterion of "penetration or rupture of an eye" as serious bodily injury. The minimum velocity required for nonconventional ammunition, including airsoft projectiles, to penetrate the eye has yet to be established. To establish minimum threshold requirements for eye penetration, empirical tests were conducted using a variety of airsoft projectiles. Using the data obtained from these tests, together with previous research using air gun projectiles, an energy density parameter was calculated for the minimum eye penetration threshold. Airsoft guns capable of achieving velocities in excess of 99 m/s (325 ft/s) with conventional 6-mm airsoft ammunition will satisfy the forensically established criterion of serious bodily injury. The energy density parameter for typical 6-mm plastic airsoft projectiles is 4.3 to 4.8 J/cm². This calculation also encompasses 4.5-mm steel BBs.
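Illustrative check (the projectile mass is an assumption, not a value stated above): the energy density parameter is kinetic energy divided by the projectile's presented area, and a typical 0.25-g, 6-mm plastic BB at the 99 m/s threshold lands at the lower end of the reported 4.3-4.8 J/cm² range.

```python
import math

def energy_density_j_per_cm2(mass_kg, velocity_ms, diameter_m):
    """Kinetic energy divided by the projectile's presented (cross-sectional) area."""
    energy_j = 0.5 * mass_kg * velocity_ms**2
    area_cm2 = math.pi * (diameter_m / 2.0) ** 2 * 1.0e4   # m^2 -> cm^2
    return energy_j / area_cm2

# An assumed 0.25 g, 6-mm plastic BB at 99 m/s (325 ft/s):
print(round(energy_density_j_per_cm2(0.25e-3, 99.0, 6.0e-3), 1))  # ~4.3 J/cm^2
```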
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA
2008-08-12
A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
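Illustrative sketch (not the patented implementation): the claimed steps map onto a short routine that derives a baseline and a noise measure from a window of recent signal results, scales the noise by a threshold factor to obtain the allowable deviation, and flags values outside the resulting band; the median/standard-deviation choices and the factor of 5 are assumptions.

```python
import numpy as np

def alarm_thresholds(recent_signals, threshold_factor=5.0):
    """Baseline +/- allowable deviation from a window of recent signal results.

    The baseline is the expected value of the recent window, the sample
    deviation measures its noise or variation, and the allowable deviation is
    that noise scaled by a threshold factor.
    """
    window = np.asarray(recent_signals, dtype=float)
    baseline = np.median(window)          # expected baseline value
    sample_dev = np.std(window, ddof=1)   # measure of noise/variation
    allowable = threshold_factor * sample_dev
    return baseline - allowable, baseline + allowable

def exceeds_alarm(new_value, recent_signals, threshold_factor=5.0):
    """True if the new signal result falls outside the alarm band (a real event)."""
    lo, hi = alarm_thresholds(recent_signals, threshold_factor)
    return new_value < lo or new_value > hi
```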
Chromatic Perceptual Learning but No Category Effects without Linguistic Input.
Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Aging: Sensitivity versus Criterion in Taste Perception.
ERIC Educational Resources Information Center
Kushnir, T.; Shapira, N.
1983-01-01
Employed the signal-detection paradigm as a model for investigating age-related biological versus cognitive effects on perceptual behavior. Old and young subjects reported the presence or absence of sugar in threshold level solutions and tap water. Older subjects displayed a higher detection threshold and obtained a stricter criterion of decision.…
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced significant reductions in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras to support effective facial identification were determined. The study took more than 10,000 photographs of human face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various lighting and cab-seat conditions. Thirteen volunteer evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario image quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective security cameras, and help camera manufacturers improve facial identification capability.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
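Illustrative sketch (unrelated to the specific patented code families): a protograph is a small base graph that is "lifted" into a full LDPC parity-check matrix by replacing each base edge with a permutation block; the random circulant shifts below are purely for illustration, whereas real designs choose them to meet distance and decoding-threshold goals.

```python
import numpy as np

def lift_protograph(base_matrix, z, seed=0):
    """Expand a binary protograph base matrix into a parity-check matrix.

    Each 1 in the base matrix becomes a z-by-z circulant permutation (here with
    a random shift), and each 0 becomes a z-by-z zero block.
    """
    rng = np.random.default_rng(seed)
    base = np.asarray(base_matrix)
    m, n = base.shape
    H = np.zeros((m * z, n * z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                shift = rng.integers(z)
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(np.eye(z, dtype=np.uint8), shift, axis=1)
    return H

# A tiny rate-1/2 protograph (2 checks x 4 variables), lifted by a factor of 8:
H = lift_protograph([[1, 1, 1, 0], [0, 1, 1, 1]], z=8)
print(H.shape)  # (16, 32)
```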